Discussion:
[blfs-support] CPU % significantly over 100 ?
Ken Moffat
2018-04-08 23:39:46 UTC
This is the first of a pair of posts which probably show how out of
my depth I am ;)

While watching rust build/test, I've been running top. My past
experience suggests that a process will normally max out at 100%
(i.e. all of _one_ core), but with rust I'm seeing percentages of
250-300%. Is that normal ?

ĸen
--
In my seventh decade astride this planet, and as my own cells degrade,
there are some things I cannot do now: skydiving, marathon running,
calculus. I couldn't do them in my 20s either, so no big loss.
-- Derek Smalls, formerly of Spinal Tap
Ken Moffat
2018-04-09 00:09:27 UTC
Post by Ken Moffat
This is the first of a pair of posts which probably show how out of
my depth I am ;)
While watching rust build/test, I've been running top. My past
experience suggests that a process will normally max out at 100%
(i.e. all of _one_ core), but with rust I'm seeing percentages of
250-300%. Is that normal ?
ĸen
In fact the latter part of the tests hit 400% (all of all cores) for
one rust job a few minutes before it crashed.

I can recall occasionally seeing figures like 102% on "normal" jobs,
but I had assumed those were rounding errors.

ĸen
--
In my seventh decade astride this planet, and as my own cells degrade,
there are some things I cannot do now: skydiving, marathon running,
calculus. I couldn't do them in my 20s either, so no big loss.
-- Derek Smalls, formerly of Spinal Tap
Ken Moffat
2018-04-09 00:24:28 UTC
Post by Ken Moffat
Post by Ken Moffat
This is the first of a pair of posts which probably show how out of
my depth I am ;)
While watching rust build/test, I've been running top. My past
experience suggests that a process will normally max out at 100%
(i.e. all of _one_ core), but with rust I'm seeing percentages of
250-300%. Is that normal ?
In fact the latter part of the tests hit 400% (all of all cores) for
one rust job a few minutes before it crashed.
I can recall occasionally seeing figures like 102% on "normal" jobs,
but I had assumed those were rounding errors.
Please forget the question, I've just started running mprime for a
stress test and that too is hitting 400% (deliberately using all 4
cores) so it isn't only rust that can do this. I still don't recall
ever noticing this sort of CPU % before, but it clearly isn't a
rust-specific feature.

ĸen
--
In my seventh decade astride this planet, and as my own cells degrade,
there are some things I cannot do now: skydiving, marathon running,
calculus. I couldn't do them in my 20s either, so no big loss.
-- Derek Smalls, formerly of Spinal Tap
Bruce Dubbs
2018-04-09 01:01:43 UTC
Post by Ken Moffat
This is the first of a pair of posts which probably show how out of
my depth I am ;)
While watching rust build/test, I've been running top. My past
experience suggests that a process will normally max out at 100%
(i.e. all of _one_ core), but with rust I'm seeing percentages of
250-300%. Is that normal ?
Do you know that for top pressing 1 (one) will show individual cores?
Pressing t will make them into a pseudo graphical (curses) display.
I've seen a load of over 13 for some long builds using ninja.

-- Bruce
Ken Moffat
2018-04-09 01:21:03 UTC
Post by Bruce Dubbs
Post by Ken Moffat
This is the first of a pair of posts which probably show how out of
my depth I am ;)
While watching rust build/test, I've been running top. My past
experience suggests that a process will normally max out at 100%
(i.e. all of _one_ core), but with rust I'm seeing percentages of
250-300%. Is that normal ?
Do you know that for top pressing 1 (one) will show individual cores?
Pressing t will make them into a pseudo graphical (curses) display. I've
seen a load of over 13 for some long builds using ninja.
-- Bruce
I normally have top set to show loadavg, whatever is on the next
line, a %Cpu line for each core, with a line of '|' after the
percentage, memory, swap, and then the old-style details (running
processes rather than trees of threads).

But I think you are talking about loadavg on your 12 core machine ?
I was seeing loadavgs of 4 or 5 from time to time while rust was
running.

The difference here is that I'm used to individual processes each
going up to 100%, but here the main process is/was running at 400%
(only 4 cores on this machine).

ĸen
--
In my seventh decade astride this planet, and as my own cells degrade,
there are some things I cannot do now: skydiving, marathon running,
calculus. I couldn't do them in my 20s either, so no big loss.
-- Derek Smalls, formerly of Spinal Tap
Bruce Dubbs
2018-04-09 01:39:01 UTC
Post by Ken Moffat
Post by Bruce Dubbs
Post by Ken Moffat
While watching rust build/test, I've been running top. My past
experience suggests that a process will normally max out at 100%
(i.e. all of _one_ core), but with rust I'm seeing percentages of
250-300%. Is that normal ?
Do you know that for top pressing 1 (one) will show individual cores?
Pressing t will make them into a pseudo graphical (curses) display. I've
seen a load of over 13 for some long builds using ninja.
I normally have top set to show loadavg, whatever is on the next
line, a %Cpu line for each core, with a line of '|' after the
percentage, memory, swap, and then the old-style details (running
processes rather than trees of threads).
But I think you are talking about loadavg on your 12 core machine ?
I was seeing loadavgs of 4 or 5 from time to time while rust was
running.
The difference here is that I'm used to individual processes each
going up to 100%, but here the main process is/was running at 400%
(only 4 cores on this machine).
I'm not sure what you mean by 'the main process'. What I mean is:

top - 20:33:33 up 26 days, 1:53, 4 users, load average: 0.02, 0.01, 0.00
Tasks: 224 total, 1 running, 131 sleeping, 0 stopped, 0 zombie
%Cpu0 : 0.7/0.0 1[ ]
%Cpu1 : 0.0/0.0 0[ ]
%Cpu2 : 0.0/0.0 0[ ]
%Cpu3 : 0.0/0.0 0[ ]
%Cpu4 : 0.0/0.0 0[ ]
%Cpu5 : 0.0/0.0 0[ ]
%Cpu6 : 0.0/0.0 0[ ]
%Cpu7 : 0.0/0.0 0[ ]
%Cpu8 : 0.0/0.0 0[ ]
%Cpu9 : 0.0/0.0 0[ ]
%Cpu10 : 0.0/0.0 0[ ]
%Cpu11 : 0.0/0.0 0[ ]
MiB Mem : 2.7/15943.78+[ ]
MiB Swap: 0.0/20479.99+[ ]

Of course that is idle right now, but I have seen 14 on the top line
where it says load average: (Last minute, Last 5 minutes, last 15
minutes). And yes, I've seen over 12 for that last number.

-- Bruce
Ken Moffat
2018-04-09 15:43:49 UTC
Post by Bruce Dubbs
Post by Ken Moffat
The diference here is that I'm used to individual processes each
going up to 100%, but here the main process is/was running at 400%
(only 4 cores on this machine).
top - 20:33:33 up 26 days, 1:53, 4 users, load average: 0.02, 0.01, 0.00
Tasks: 224 total, 1 running, 131 sleeping, 0 stopped, 0 zombie
%Cpu0 : 0.7/0.0 1[ ]
%Cpu1 : 0.0/0.0 0[ ]
%Cpu2 : 0.0/0.0 0[ ]
%Cpu3 : 0.0/0.0 0[ ]
%Cpu4 : 0.0/0.0 0[ ]
%Cpu5 : 0.0/0.0 0[ ]
%Cpu6 : 0.0/0.0 0[ ]
%Cpu7 : 0.0/0.0 0[ ]
%Cpu8 : 0.0/0.0 0[ ]
%Cpu9 : 0.0/0.0 0[ ]
%Cpu10 : 0.0/0.0 0[ ]
%Cpu11 : 0.0/0.0 0[ ]
MiB Mem : 2.7/15943.78+[ ]
MiB Swap: 0.0/20479.99+[ ]
Of course that is idle right now, but I have seen 14 on the top line where
it says load average: (Last minute, Last 5 minutes, last 15 minutes). And
yes, I've seen over 12 for that last number.
I meant the bit underneath that - I only have 4 cores (8 on my
Haswell) and I use 40-line terminals, so lots of room for details. Here's
a quick copy of the first few processes on an idle desktop:

PID USER PR NI VIRT RES %CPU %MEM TIME+ S COMMAND
1321 root 20 0 370.6m 55.4m 1.3 0.7 0:35.85 S Xorg
22670 ken 20 0 3199.1m 146.4m 0.7 1.8 0:06.61 S falkon
22733 ken 20 0 2276.9m 348.6m 0.7 4.4 0:21.15 S QtWebEngineProc
1 root 20 0 4.2m 1.4m 0.0 0.0 0:00.40 S init

At that moment Xorg was using most cpu (1.3%).

ĸen
--
In my seventh decade astride this planet, and as my own cells degrade,
there are some things I cannot do now: skydiving, marathon running,
calculus. I couldn't do them in my 20s either, so no big loss.
-- Derek Smalls, formerly of Spinal Tap
Bruce Dubbs
2018-04-09 22:36:43 UTC
Post by Ken Moffat
I meant the bit underneath that - I only have 4 cores (8 on my
haswell) and I use 40 line terms so lots of room for details, here's
PID USER PR NI VIRT RES %CPU %MEM TIME+ S COMMAND
1321 root 20 0 370.6m 55.4m 1.3 0.7 0:35.85 S Xorg
22670 ken 20 0 3199.1m 146.4m 0.7 1.8 0:06.61 S falkon
22733 ken 20 0 2276.9m 348.6m 0.7 4.4 0:21.15 S QtWebEngineProc
1 root 20 0 4.2m 1.4m 0.0 0.0 0:00.40 S init
At that moment Xorg was using most cpu (1.3%).
You got me interested. Evidently a process can use multiple cores when
doing threading. A separate process is created with fork(), but not with
threads (pthread_create()). So depending on the number of threads, a
single process can use multiple CPUs (cores), and top will show that as a
process with %CPU > 100.

https://stackoverflow.com/questions/807506/threads-vs-processes-in-linux?utm_medium=organic&utm_source=google_rich_qa&utm_campaign=google_rich_qa
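
To make that concrete, here is a minimal sketch of the threaded case
(illustrative only, not part of rust or of this thread; it assumes a
POSIX system and a build command along the lines of
"gcc -O2 -pthread spin.c -o spin"):

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4

/* Each thread just burns CPU on its own core until the process is killed. */
static void *spin(void *arg)
{
    (void)arg;
    volatile unsigned long x = 0;
    for (;;)
        x++;
    return NULL;                /* never reached */
}

int main(void)
{
    pthread_t tid[NTHREADS];

    for (int i = 0; i < NTHREADS; i++) {
        if (pthread_create(&tid[i], NULL, spin, NULL) != 0) {
            fprintf(stderr, "pthread_create failed\n");
            return 1;
        }
    }

    /* All the threads live inside this single process (one PID), so
       top's default per-process view adds their CPU time together. */
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);

    return 0;
}

On a 4-core machine top shows this one process at roughly 400%; pressing
'H' in top toggles a per-thread view where each thread shows about 100%
instead.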

-- Bruce
Ken Moffat
2018-04-09 22:46:26 UTC
Post by Bruce Dubbs
Post by Ken Moffat
I meant the bit underneath that - I only have 4 cores (8 on my
haswell) and I use 40 line terms so lots of room for details, here's
PID USER PR NI VIRT RES %CPU %MEM TIME+ S COMMAND
1321 root 20 0 370.6m 55.4m 1.3 0.7 0:35.85 S Xorg
22670 ken 20 0 3199.1m 146.4m 0.7 1.8 0:06.61 S falkon
22733 ken 20 0 2276.9m 348.6m 0.7 4.4 0:21.15 S QtWebEngineProc
1 root 20 0 4.2m 1.4m 0.0 0.0 0:00.40 S init
At that moment Xorg was using most cpu (1.3%).
You got me interested. Evidently a process can use multiple cores when
doing threading. A separate process is created with fork(), but not with
threads ( pthread_create() ). So depending on the number of threads, a
single process can use multiple cpus (cores) and top will show that as a
process with %CPU > 100.
https://stackoverflow.com/questions/807506/threads-vs-processes-in-linux?utm_medium=organic&utm_source=google_rich_qa&utm_campaign=google_rich_qa
-- Bruce
Thanks, that explains it.

ĸen
--
In my seventh decade astride this planet, and as my own cells degrade,
there are some things I cannot do now: skydiving, marathon running,
calculus. I couldn't do them in my 20s either, so no big loss.
-- Derek Smalls, formerly of Spinal Tap
Bruce Dubbs
2018-04-09 04:02:00 UTC
rust-1.25 test report.

On my Haswell (Intel i7-5820K CPU @ 3.8 GHz, 6 cores hyperthreaded = 12
threads) I ran the build and test procedures currently in BLFS for
rust-1.25.

Build completed successfully in 0:30:37
1837.3 Elapsed Time - rustc-1.25.0-src
SBU=19.340
54800 /usr/src/rustc/rustc-1.25.0-src.tar.xz SIZE (53.515 MB)
4191796 kilobytes BUILD SIZE (4093.550 MB)
md5sum : 57295a3c3bedfc21e3c643b397a1f017
/usr/src/rustc/rustc-1.25.0-src.tar.xz

I was watching top and the one-minute load average jumped up to 20 a
couple of times and was above 12 for a substantial amount of time.

I then ran the tests, but forgot to time it. At the end I got a failure:

-
"/tmp/rustc-test/rustc-1.25.0-src/build/x86_64-unknown-linux-gnu/stage0-tools-bin/rustdoc-themes"
"/tmp/rustc-test/rustc-1.25.0-src/build/x86_64-unknown-linux-gnu/stage2/bin/rustdoc"
"/tmp/rustc-test/rustc-1.25.0-src/src/librustdoc/html/static/themes"

Traceback (most recent call last):
  File "./x.py", line 20, in <module>
    bootstrap.main()
  File "/tmp/rustc-test/rustc-1.25.0-src/src/bootstrap/bootstrap.py", line 763, in main
    bootstrap()
  File "/tmp/rustc-test/rustc-1.25.0-src/src/bootstrap/bootstrap.py", line 754, in bootstrap
    run(args, env=env, verbose=build.verbose)
  File "/tmp/rustc-test/rustc-1.25.0-src/src/bootstrap/bootstrap.py", line 148, in run
    raise RuntimeError(err)
RuntimeError: failed to run:
  /tmp/rustc-test/rustc-1.25.0-src/build/bootstrap/debug/bootstrap test --verbose --no-fail-fast

Running the above manually I got:

thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value:
NotPresent', /checkout/src/libcore/result.rs:916:5

I could not find anything matching /checkout, but ./src/libcore/result.rs
lines 915-916 say:

/// `Ok(None)` will be mapped to `None`.
/// `Ok(Some(_))` and `Err(_)` will be mapped to `Some(Ok(_))` and
`Some(Err(_))`.

So it may have just been summarizing results when it failed.

=============
Checking the test log:

$ grep 'running .* tests' ../rustc-testlog | awk '{ sum += $2 } END { print sum }'
15736

$ grep '^test result:' ../rustc-testlog | awk '{ sum += $6 } END { print sum }'
5

$ grep FAIL ../rustc-testlog
test [compile-fail] compile-fail/issue-37131.rs ... FAILED
test result: FAILED. 2301 passed; 1 failed; 15 ignored; 0 measured; 0
filtered out
test [debuginfo-gdb]
debuginfo/gdb-pretty-struct-and-enums-pre-gdb-7-7.rs ... FAILED
test [debuginfo-gdb] debuginfo/pretty-huge-vec.rs ... FAILED
test [debuginfo-gdb] debuginfo/pretty-uninitialized-vec.rs ... FAILED
test result: FAILED. 82 passed; 3 failed; 24 ignored; 0 measured; 0
filtered out
test [run-make] run-make/sysroot-crates-are-unstable ... FAILED
test result: FAILED. 174 passed; 1 failed; 0 ignored; 0 measured; 0
filtered out

Disk space after running tests: 5.4G

I did not install.

-- Bruce
Ken Moffat
2018-04-09 23:05:21 UTC
- "/tmp/rustc-test/rustc-1.25.0-src/build/x86_64-unknown-linux-gnu/stage0-tools-bin/rustdoc-themes" "/tmp/rustc-test/rustc-1.25.0-src/build/x86_64-unknown-linux-gnu/stage2/bin/rustdoc"
"/tmp/rustc-test/rustc-1.25.0-src/src/librustdoc/html/static/themes"
File "./x.py", line 20, in <module>
bootstrap.main()
File "/tmp/rustc-test/rustc-1.25.0-src/src/bootstrap/bootstrap.py", line
763, in main
bootstrap()
File "/tmp/rustc-test/rustc-1.25.0-src/src/bootstrap/bootstrap.py", line
754, in bootstrap
run(args, env=env, verbose=build.verbose)
File "/tmp/rustc-test/rustc-1.25.0-src/src/bootstrap/bootstrap.py", line
148, in run
raise RuntimeError(err)
/tmp/rustc-test/rustc-1.25.0-src/build/bootstrap/debug/bootstrap test
--verbose --no-fail-fast
NotPresent', /checkout/src/libcore/result.rs:916:5
I could not find anything with checkout, but ./src/libcore/result.rs line
/// `Ok(None)` will be mapped to `None`.
/// `Ok(Some(_))` and `Err(_)` will be mapped to `Some(Ok(_))` and
`Some(Err(_))`.
So it may have just been summarizing results when it failed.
Yeah, it seems to be just plain weird in how it goes about things.
I'm currently adding the following to the [rust] part of
config.toml:

# get reasonably clean output from the test harness
quiet-tests = true

but I don't, for the moment, have a view on whether or not that is a
useful addition.

I earlier tried, in the same part of config.toml:
thinlto = false
which is said to make the *compiler* faster. But on my 4-core Ryzen
the build time for rust went from the low-50s of minutes (some
variation from one run to the next) to over 70 minutes, so I didn't
think any potential savings from using it to compile librsvg and
firefox would be likely to make up that time.
=============
$ grep 'running .* tests' ../rustc-testlog | awk '{ sum += $2 } END { print
sum }'
15736
$ grep '^test result:' ../rustc-testlog | awk '{ sum += $6 } END { print
sum }'
5
$ grep FAIL ../rustc-testlog
test [compile-fail] compile-fail/issue-37131.rs ... FAILED
That one needs llvm built for a thumb (ARM) variant.
test result: FAILED. 2301 passed; 1 failed; 15 ignored; 0 measured; 0
filtered out
test [debuginfo-gdb] debuginfo/gdb-pretty-struct-and-enums-pre-gdb-7-7.rs
... FAILED
test [debuginfo-gdb] debuginfo/pretty-huge-vec.rs ... FAILED
test [debuginfo-gdb] debuginfo/pretty-uninitialized-vec.rs ... FAILED
test result: FAILED. 82 passed; 3 failed; 24 ignored; 0 measured; 0 filtered
out
test [run-make] run-make/sysroot-crates-are-unstable ... FAILED
test result: FAILED. 174 passed; 1 failed; 0 ignored; 0 measured; 0 filtered
out
For my latest attempt (without gdb), one of the debuginfo-gdb tests
passed, 84 failed (they need gdb) and some others were ignored.

Looking at the reported panics, I got one weird one which might be
related to the invalid opcodes (those seem to be related to building
a debug version of rustlib for use in the tests):

run-make/sysroot-crates-are-unstable
Traceback (most recent call last):
  File "test.py", line 64, in <module>
    libs = get_all_libs(join(sysroot, 'lib/rustlib/{}/lib'.format(os.environ['TARGET'])))
  File "test.py", line 59, in get_all_libs
    for f in listdir(dir_path)
OSError: [Errno 2] No such file or directory: 'lib/rustlib/x86_64-unknown-linux-gnu/lib'
make: *** [Makefile:2: all] Error 1

I think I forgot to mention that I'm building with
PYTHON=/usr/bin/python3, but that doesn't seem to be related to the
crash; that happens with both versions of python.

Back to trying to fly too close to the sun, there are a couple of
weeks before we need 1.25 ...

ĸen
--
In my seventh decade astride this planet, and as my own cells degrade,
there are some things I cannot do now: skydiving, marathon running,
calculus. I couldn't do them in my 20s either, so no big loss.
-- Derek Smalls, formerly of Spinal Tap
ag
2018-04-09 05:24:55 UTC
Post by Bruce Dubbs
Post by Ken Moffat
This is the first of a pair of posts which probably show how out of
my depth I am ;)
While watching rust build/test, I've been running top. My past
experience suggests that a process will normally max out at 100%
(i.e. all of _one_ core), but with rust I'm seeing percentages of
250-300%. Is that normal ?
Do you know that for top pressing 1 (one) will show individual cores?
Pressing t will make them into a pseudo graphical (curses) display. I've
seen a load of over 13 for some long builds using ninja.
There is a much better alternative to top that deserves to be better
known and more widely used. It's one of those tools that makes life
much easier for the user, at no extra cost.

https://hisham.hm/htop/

As a bonus, you might find it a little exciting to know that it has
featured in at least two movies (I think there are three):

(here is one)

https://twitter.com/shr1k/status/418770588325249025/photo/1

Hisham (htop's author) is also the primary author of LuaRocks and a great coder.

See his GitHub (you might find interesting things there):

https://github.com/hishamhm

Regards,
Αγαθοκλής
Paul Rogers
2018-04-09 22:04:40 UTC
Post by Ken Moffat
While watching rust build/test, I've been running top. My past
experience suggests that a process will normally max out at 100%
(i.e. all of _one_ core), but with rust I'm seeing percentages of
250-300%. Is that normal ?
It does not seem alarming to me with a multicore/hyperthreading CPU when it's told (make) or is programmed to figure out how many, ummm, "processing units" it can access. I regularly see my 4+4 Bloomfield running make -j8 with all 8 running 95-100%. ("I love the sound of the fan running up in the morning.") In fact, I'm disappointed that running LibreOffice it only ever seems to use one core. 8-(

The one caution is that, with make running all 8, some jobs with complex sources can over-commit its 12GB of RAM when running compiles with each forking an embedded assembly, etc.
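
On the "figure out how many processing units" point: tools such as ninja
(and scripts that pass -j$(nproc) to make) usually ask the kernel for the
number of online logical CPUs. A minimal sketch (illustrative, not from
this thread) of that query:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Logical processors currently online - hyperthreads count
       separately, which is roughly what nproc reports. */
    long n = sysconf(_SC_NPROCESSORS_ONLN);
    if (n < 1) {
        perror("sysconf");
        return 1;
    }
    printf("%ld processing units online\n", n);
    return 0;
}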
--
Paul Rogers
***@fastmail.fm
Rogers' Second Law: "Everything you do communicates."
(I do not personally endorse any additions after this line. TANSTAAFL :-)
Ken Moffat
2018-04-09 22:44:33 UTC
Post by Paul Rogers
Post by Ken Moffat
While watching rust build/test, I've been running top. My past
experience suggests that a process will normally max out at 100%
(i.e. all of _one_ core), but with rust I'm seeing percentages of
250-300%. Is that normal ?
It does not seem alarming to me with a multicore/hyperthreading CPU when it's told (make) or is programmed to figure out how many, ummm, "processing units" it can access. I regularly see my 4+4 Bloomfield running make -j8 with all 8 running 95-100%. ("I love the sound of the fan running up in the morning.") In fact, I'm disappointed that running LibreOffice it only ever seems to use one core. 8-(
The one caution is that, with make running all 8, some jobs with complex sources can over-commit its 12GB of RAM when running compiles with each forking an embedded assembly, etc.
Like Bruce, you seem to mis-parse what I was saying. Maybe you say
things differently in your country.

For 8 cores, 8 at 95-100% when compiling is normal and good (with a
load average possibly a bit above 8). Similarly, on this 4-core
machine, 4 at 95-100% is normal and good.

With rust (and also with mprime) I've been seeing up to 400% for one
process.

Actually, when I was retrying earlier, the rust testsuite ran a
command called 'foo' at 250%+: that command name was disconcerting.

ĸen
--
In my seventh decade astride this planet, and as my own cells degrade,
there are some things I cannot do now: skydiving, marathon running,
calculus. I couldn't do them in my 20s either, so no big loss.
-- Derek Smalls, formerly of Spinal Tap
Paul Rogers
2018-04-10 21:38:07 UTC
Post by Bruce Dubbs
You got me interested. Evidently a process can use multiple cores when
doing threading. A separate process is created with fork(), but not with
threads ( pthread_create() ). So depending on the number of threads, a
single process can use multiple cpus (cores) and top will show that as a
process with %CPU > 100.
-- Bruce
Post by Ken Moffat
Like Bruce, you seem to mis-parse what I was saying. Maybe you say
things differently in your country.
"Two countries separated by a common language," eh? But no, I don't think I misparsed it.
Post by Ken Moffat
With rust (and also with mprime) I've been seeing up to 400% for one
process.
I would expect so. But I always expected what Bruce found and mentioned above, that a forked subprocess was a separately dispatchable unit. How would you expect top to attribute the CPU usage in a multiprocessing system? Showing more than 100% makes sense to me. Imagine what you could see during systemd startup. I've never used it, but I believe I've read about an argument to have top show all the subprocesses a particular command may have spawned.
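
For contrast with the threaded example earlier in the thread, here is a
minimal sketch of the fork() case (again illustrative, assuming a POSIX
system; build with something like "gcc -O2 fork-spin.c -o fork-spin"):

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define NCHILDREN 4

int main(void)
{
    for (int i = 0; i < NCHILDREN; i++) {
        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            return 1;
        }
        if (pid == 0) {         /* child: its own PID, burning one core */
            volatile unsigned long x = 0;
            for (;;)
                x++;
        }
    }

    /* The parent just waits and stays near 0% CPU; top's default view
       shows four separate processes at roughly 100% each, not one
       process above 100% as in the threaded case. */
    for (int i = 0; i < NCHILDREN; i++)
        wait(NULL);

    return 0;
}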
--
Paul Rogers
***@fastmail.fm
Rogers' Second Law: "Everything you do communicates."
(I do not personally endorse any additions after this line. TANSTAAFL :-)