RE: [gentoo-user] clone root from HDD to SSD causes no video with NVIDIA driver

2020-06-09 Thread Raffaele BELARDI
> -Original Message-
> From: J. Roeleveld 
> Sent: Tuesday, June 9, 2020 08:23
> To: gentoo-user@lists.gentoo.org
> Subject: Re: [gentoo-user] clone root from HDD to SSD causes no video with
> NVIDIA driver
> 
> For plain console (TTY1,...) you need to enable EFI_FB in the kernel.
> 
> I use Nvidia and also have this enabled in the kernel, so it can work 
> together.
> I also use the nvidia-drivers package provided in Portage. Not everything is
> added, but most is. The RTX/Optix libraries are added when using a "multilib"
> profile, judging from the ebuild.

nomodeset did not change anything, but adding EFI_FB to the kernel finally got 
me a functional console. But if I startx from there I am back to the same 
point: no X, no console switching with CTRL-ALT-Fn, no crash in syslog, and I 
have to SSH in to get a working shell. I'm not getting anywhere; I think I'd 
better install from stage3.
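
(For reference, the kernel options behind "EFI_FB" are, as far as I know, along
these lines - a minimal sketch, option names worth double-checking against your
kernel version:

CONFIG_FB=y
CONFIG_FB_EFI=y
CONFIG_FRAMEBUFFER_CONSOLE=y

With those built in, the EFI-provided framebuffer drives the plain TTY consoles
before any GPU driver is loaded.)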

One more piece of info: when I issue 'halt' from the SSH session, the OpenRC 
scripts are executed up to the 'mount-ro' (or similar) script, which fails with
"Remounting / ro failed because we are using /"
The process hangs there, and I need to hit the power switch to power off.

Thanks to all who contributed,

raffaele



Re: [gentoo-user] AMDGPU: computer won't shut down

2020-06-09 Thread Ashley Dixon
On Tue, Jun 09, 2020 at 11:53:55PM +0200, n952162 wrote:
> I posted about this problem perhaps a year ago - when running with
> AMDGPU, my system won't turn off, I have to hold the power button down
> for a long time, forcing it down.
> 
> Thanks to all the great help here, I finally got my system properly
> updated and AMDGPU working, and it  finally talks to my HDMI display. 
> But, it still doesn't shut down.

I assume you are using the AMDGPU drivers [1], but are you also using AMDGPU-PRO
(closed-source accelerator) [2] ?  Also, are you loading the  firmware  directly
into the kernel, or as kernel modules ?

Which GPU series/firmware do you have installed [3] ?

[1] https://wiki.gentoo.org/wiki/AMDGPU
[2] https://wiki.gentoo.org/wiki/AMDGPU-PRO
[3] https://wiki.gentoo.org/wiki/AMDGPU#Incorporating_firmware (see the table)
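
A quick way to answer those questions on the running system - a rough sketch,
assuming the kernel exposes its configuration via /proc/config.gz (i.e.
CONFIG_IKCONFIG_PROC is enabled):

lsmod | grep amdgpu                                  # loaded as a module?
zgrep 'CONFIG_DRM_AMDGPU\|CONFIG_EXTRA_FIRMWARE' /proc/config.gz
dmesg | grep -i 'amdgpu.*firmware'                   # did the GPU firmware load?

If amdgpu shows up in lsmod it was built as a module; CONFIG_DRM_AMDGPU=y
together with the firmware blobs listed in CONFIG_EXTRA_FIRMWARE means it is
built into the kernel image.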

-- 

Ashley Dixon
suugaku.co.uk

2A9A 4117
DA96 D18A
8A7B B0D2
A30E BF25
F290 A8AA





[gentoo-user] AMDGPU: computer won't shut down

2020-06-09 Thread n952162

I posted about this problem perhaps a year ago - when running with
AMDGPU, my system won't turn off, I have to hold the power button down
for a long time, forcing it down.

At that time, various people commented, but AMDGPU didn't work at all
for me in the end; it didn't talk to my HDMI display.

I gave up on it.

Thanks to all the great help here, I finally got my system properly
updated and AMDGPU working, and it finally talks to my HDMI display.
But it still doesn't shut down.

Is it a bug?  Is my system still not right?  Is there something I still
have to do?




Re: [gentoo-user] Encrypting a hard drive's data. Best method.

2020-06-09 Thread Dale
antlists wrote:
> On 07/06/2020 10:07, antlists wrote:
>> I think it was LWN, there was an interesting article on crypto recently.
>
> https://lwn.net/Articles/821544/
>
> Cheers,
> Wol
>
>


Looks like they are getting ready to toss SHA-1 overboard.  If it is not
secure, they should.  At least most people who should know already know
it is not secure.  I didn't have the details, but I knew something had
been broken and asked to be sure.  I ended up using an AES one.  I did
set up encryption on a 3TB external hard drive, using zulucrypt at the
moment.  So far, it is working fine.  I've closed it a few times and
opened it a few times.

The one thing I have not figured out yet, and I'm about to look into it,
is how to mount it to a place I pick.  It's likely a setting I haven't
noticed yet, but I'll find it.  Still not ready to even think of doing
/home yet tho.  That's a doozy.  I guess first I'd have to move
everything off the drives it's currently on, encrypt that drive and then
move everything back.  Another thing is having a secure password that is
easy to type and remember but can't be guessed.  Anyone remember the
thread when I came up with a password for PGP and LastPass a year or so
ago?  It took me a month to accomplish that, but it is a good one.  Then
I got a cell phone.  Easy to type on the keyboard, not so much on the
cell phone tho.  Still, it works well.  It is a good one.  Good luck to
anyone trying to guess it or even hack it.  lol
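
For the "mount it where I pick" part, the command-line equivalent is roughly
the following - a sketch only, assuming the zuluCrypt volume is a standard
LUKS container; the device name and mount point are made-up examples:

cryptsetup open /dev/sdX1 backup3tb          # prompts for the passphrase
mkdir -p /mnt/backup3tb                      # any mount point you like
mount /dev/mapper/backup3tb /mnt/backup3tb
...
umount /mnt/backup3tb
cryptsetup close backup3tb

zuluCrypt presumably has an equivalent mount-point setting somewhere in its
GUI.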

I wish my other drive would have worked.  I'd like to be able to compare
things.  They are both the same brand and size, likely even the same
model.  But it has some serious issues.  Even smartctl was having
trouble chatting with the drive.  That's bad.

May redo the drive but with command line tools this go around.  Just for
giggles. 

Thanks again.

Dale

:-)  :-) 


Re: [gentoo-user] Gentoo-sources 5.7.x [FIXED]

2020-06-09 Thread Andrew Udvare
On 09/06/2020 12:07, Peter Humphrey wrote:
> On Tuesday, 9 June 2020 15:56:52 BST Peter Humphrey wrote:
> # cat /boot/loader/entries/30-gentoo-5.7.1.conf
> title Gentoo Linux 5.7.1initrd=/intel-uc.img
> linux /vmlinuz-5.7.1-gentoo
> options root=/dev/nvme0n1p4 initrd=/intel-uc.img net.ifnames=0 
> raid=noautodetect

Since you are using systemd-boot or something that fulfils that
specification[1], you can also build your kernel with EFI stub enabled
(CONFIG_EFI_STUB) and then simply put the binary in:

${ESP}/EFI/Linux/vmlinuz-5.7.1-gentoo.efi

You can then run `bootctl set-default vmlinuz-5.7.1-gentoo.efi`, or select it
in the menu and press d to set it as default from within systemd-boot.

(My ASUS motherboard for some reason never lets me write EFI variables
from within Linux so I have to do it from within systemd-boot.)

You can specify the options in the kernel configuration as well:

CONFIG_CMDLINE="root=/dev/nvme0n1p4 initrd=/intel-uc.img net.ifnames=0
raid=noautodetect"

To add /intel-uc.img to this configuration, you either have to include it
in the kernel configuration or use Dracut to build an EFI image.

Kernel config:

CONFIG_INITRAMFS_SOURCE="/boot/intel_uc.img"

Or with Dracut:

dracut --force --uefi --uefi-stub
'/usr/lib/systemd/boot/efi/linuxx64.efi.stub' ...

Dracut will automatically pick up your kernel installed to /boot (from
kernel `make install`) and /boot/intel-uc.img (and other similar
things). It will also automatically place the resulting image into
${ESP}/EFI/Linux/.

In both cases, you have to remember to update the EFI image/rebuild and
reinstall the kernel whenever you update intel-microcode.
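
Pulling those pieces together, a minimal config sketch for the "everything in
the kernel" route might look like this (a sketch only - CONFIG_CMDLINE_BOOL is
the x86 gate for a built-in command line, but check the option names against
your tree; with the microcode built in via INITRAMFS_SOURCE, the initrd=
parameter is no longer needed on the command line):

CONFIG_EFI_STUB=y
CONFIG_CMDLINE_BOOL=y
CONFIG_CMDLINE="root=/dev/nvme0n1p4 net.ifnames=0 raid=noautodetect"
CONFIG_INITRAMFS_SOURCE="/boot/intel_uc.img"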

The benefit of this is that you don't have to maintain entry files, and you
keep configuration generally in one place: the kernel config. Then you just
drop EFI binaries into the correct place and they will appear in the menu.
You could always keep two Linux EFI binaries in ${ESP}/EFI/Linux/ in case
the newest one fails.

If you want to do this semi-automatically as part of updates and with
UEFI secure boot signing, use my project: https://github.com/tatsh/upkeep

[1] https://systemd.io/BOOT_LOADER_SPECIFICATION/

--
Andrew






Re: [gentoo-user] Gentoo-sources 5.7.x [FIXED]

2020-06-09 Thread Neil Bothwick
On Tue, 09 Jun 2020 17:07:40 +0100, Peter Humphrey wrote:

> > Nope. Didn't help. All I have now is dredging through the kernel
> > config yet again, or possibly even trying an initrd. I hope I'm not
> > being forced down that road after all these years.  
> 
> It was so simple, and the clue was in an earlier message.
> 
> # cat /boot/loader/entries/30-gentoo-5.7.1.conf
> title Gentoo Linux 5.7.1initrd=/intel-uc.img
> linux /vmlinuz-5.7.1-gentoo
> options root=/dev/nvme0n1p4 initrd=/intel-uc.img net.ifnames=0
> raid=noautodetect
> 
> All I had to do was to remove the slash from initrd=/intel-uc.img. I
> did that in all the .../entries files and 5.4.38 also still boots
> happily.

Did you also remove the leading slash from the kernel? I'm still running
5.4 but I tried removing the slashes from the kernel and initrds and it
booted fine. Thanks for the heads up, I'll be ready when 5.7+ goes
longterm.


-- 
Neil Bothwick

WinErr 008: Broken window - Watch out for glass fragments




RE: [gentoo-user] clone root from HDD to SSD causes no video with NVIDIA driver

2020-06-09 Thread Raffaele BELARDI
> -Original Message-
> From: tu...@posteo.de 
> Sent: Tuesday, June 9, 2020 05:44
> To: gentoo-user@lists.gentoo.org
> Subject: Re: [gentoo-user] clone root from HDD to SSD causes no video with
> NVIDIA driver
> 
> 
> Hi,
> 
> if even displaying the console login failed, then the whole display system has
> gone nuts...but since the boot process as such (that is:
> the bios prompt right after POSTing) is visible, I would say, that there is no
> physical problem (that is: cable connected to port 2 of the monitor while the
> monitor is switched to port 1 and such).
> 
> I would try this:
> Boot your PC, ssh into the PC and download the according nvidia-drivers
> directly from NVIDIA of the same version.
> 
> quickpkg the installed drivers and remove them
> 
> Check whether /usr/src/linux links to the kernel sources of the kernel
> version you are booting.
> 
> Install the NVIDIA-drivers you have downloaded.
> 
> Reboot.
> 
> Background:
> The portage package does not install nvidia-drivers correctly - in my case, X
> and such works fine but RTX/Optix, which is used by Blender, was defunct.
> After installing the original package and masking the one which came with
> portage, everything works fine.

But the portage driver works on this same system when booted from the HDD 
instead of the SSD so I'd think the driver is ok, unless it has some dependency 
on UEFI vs MBR. That would be strange, but anything is possible.

Raffaele



RE: [gentoo-user] clone root from HDD to SSD causes no video with NVIDIA driver

2020-06-09 Thread Raffaele BELARDI
> -Original Message-
> From: Ashley Dixon 
> Sent: Tuesday, June 9, 2020 05:52
> To: gentoo-user@lists.gentoo.org
> Subject: Re: [gentoo-user] clone root from HDD to SSD causes no video with
> NVIDIA driver
> 
> On Tue, Jun 09, 2020 at 05:43:33AM +0200, tu...@posteo.de wrote:
> > I would try this:
> > Boot your PC, ssh into the PC and download the according
> > nvidia-drivers directly from NVIDIA of the same version.
> 
> > On 06/08 06:20, Raffaele BELARDI wrote:
> > > No console except SSH. I'm not sure I can invoke startx from an SSH.
> 
> Irrelevant aside:
> 
> The  kernel  loads  the  graphics  drivers  on  boot;  it  is  no   longer   
> the
> responsibility of X under normal circumstances.  Assuming you can get access
> to the kernel command line arguments (with grub, this can be done from  the
> bootup menu [1]), passing the `nomodeset` option will prevent the NVIDIA
> drivers  from loading until you start the X server. There is no need for SSH
> here.
> 
> [1] https://ubuntuforums.org/showthread.php?t=1613132

Interesting, I will give nomodeset a try. But I still don't understand the 
mechanism the kernel uses to load the proprietary driver; I assumed that had 
to be done via modprobe, and I think I disabled the OpenRC script that fires it.
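
What usually loads it is not the OpenRC "modules" service but udev: during
coldplug it reads each PCI device's modalias and modprobes whatever driver
claims those IDs, which is why the nvidia module keeps appearing even with
that service disabled. A rough sketch for checking and stopping that - the
PCI address below is a made-up example:

cat /sys/bus/pci/devices/0000:01:00.0/modalias | xargs modprobe -R
# lists the module name(s) the alias resolves to

echo "blacklist nvidia" >> /etc/modprobe.d/blacklist-nvidia.conf
# stops alias-based autoloading; it does not stop an explicit modprobe or a
# module pulled in as a dependency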

Raffaele



RE: [gentoo-user] clone root from HDD to SSD causes no video with NVIDIA driver

2020-06-09 Thread Raffaele BELARDI
> -Original Message-
> For plain console (TTY1,...) you need to enable EFI_FB in the kernel.

I read that it may conflict with the NVIDIA proprietary driver [1] so I did not 
enable it. I'll give it a try.
 
> As you came from an older, non-GPT setup, I am assuming this is also the first
> attempt to boot using EFI?
>
> Joost

On this PC, yes. I managed to get a similar setup (UEFI/Win10/Gentoo converted 
from MBR to GPT) working on a different, Nouveau-based PC.

Raffaele

[1] https://wiki.gentoo.org/wiki/NVIDIA/nvidia-drivers (search for EFI)




Re: [gentoo-user] Gentoo-sources 5.7.x [FIXED]

2020-06-09 Thread Peter Humphrey
On Tuesday, 9 June 2020 15:56:52 BST Peter Humphrey wrote:
> On Tuesday, 9 June 2020 15:46:43 BST Peter Humphrey wrote:
> > > Other than that, the naming scheme may have changed but I don't know
> > > about
> > > this. For better future-proofing, use a UUID of your root partition
> > > rather
> > > than a device name.
> > > 
> > > root=UUID=...
> > > 
> > > You can get this UUID with the blkid command.
> > 
> > I'll try this in a minute - thanks for the idea. I've stuck with device
> > names so far because (i) I can read them, and (ii) I can't ever have more
> > than one NVMe device in this box.
> 
> Nope. Didn't help. All I have now is dredging through the kernel config yet
> again, or possibly even trying an initrd. I hope I'm not being forced down
> that road after all these years.

It was so simple, and the clue was in an earlier message.

# cat /boot/loader/entries/30-gentoo-5.7.1.conf
title Gentoo Linux 5.7.1initrd=/intel-uc.img
linux /vmlinuz-5.7.1-gentoo
options root=/dev/nvme0n1p4 initrd=/intel-uc.img net.ifnames=0 raid=noautodetect

All I had to do was to remove the slash from initrd=/intel-uc.img. I did that
in all the .../entries files and 5.4.38 also still boots happily.

Thanks to those who offered help.

-- 
Regards,
Peter.






Re: [gentoo-user] Gentoo-sources 5.7.x

2020-06-09 Thread Peter Humphrey
On Tuesday, 9 June 2020 16:45:56 BST Alan Mackenzie wrote:
> Hello, Peter.
> 
> Either an annoyance, or some potentially useful info:
> 
> On Tue, Jun 09, 2020 at 15:46:43 +0100, Peter Humphrey wrote:
> > I'll try this in a minute - thanks for the idea. I've stuck with device
> > names so far because (i) I can read them, and (ii) I can't ever have
> > more than one NVMe device in this box.
> 
> If the reason for the "can't ever" is the lack of M2 slots on your
> motherboard, you can get a PCIe board with an M2 slot on it.  This way
> you can get two NVMe devices in a single box.  Provided you've got enough
> PCIe lanes, and suchlike.  This is precisely my setup, where I've got two
> 500 Gb NVMe's in a raid-1 configuration.

I've heard of that arrangement, but I haven't looked into it because the spec 
says the M.2 device occupies both PCIe slots. There may be ways round this, 
but my 256GB drive is enough for me; I do have a couple of 1TB SATA SSDs as 
well.

-- 
Regards,
Peter.






Re: [gentoo-user] Gentoo-sources 5.7.x

2020-06-09 Thread Alan Mackenzie
Hello, Peter.

Either an annoyance, or some potentially useful info:

On Tue, Jun 09, 2020 at 15:46:43 +0100, Peter Humphrey wrote:

> I'll try this in a minute - thanks for the idea. I've stuck with device
> names so far because (i) I can read them, and (ii) I can't ever have
> more than one NVMe device in this box.

If the reason for the "can't ever" is the lack of M2 slots on your
motherboard, you can get a PCIe board with an M2 slot on it.  This way
you can get two NVMe devices in a single box.  Provided you've got enough
PCIe lanes, and suchlike.  This is precisely my setup, where I've got two
500 GB NVMes in a RAID-1 configuration.

> -- 
> Regards,
> Peter.

-- 
Alan Mackenzie (Nuremberg, Germany).



Re: [gentoo-user] Gentoo-sources 5.7.x

2020-06-09 Thread Peter Humphrey
On Tuesday, 9 June 2020 15:46:43 BST Peter Humphrey wrote:

> > Other than that, the naming scheme may have changed but I don't know about
> > this. For better future-proofing, use a UUID of your root partition rather
> > than a device name.
> > 
> > root=UUID=...
> > 
> > You can get this UUID with the blkid command.
> 
> I'll try this in a minute - thanks for the idea. I've stuck with device
> names so far because (i) I can read them, and (ii) I can't ever have more
> than one NVMe device in this box.

Nope. Didn't help. All I have now is dredging through the kernel config yet 
again, or possibly even trying an initrd. I hope I'm not being forced down 
that road after all these years.

-- 
Regards,
Peter.






Re: [gentoo-user] Gentoo-sources 5.7.x

2020-06-09 Thread Peter Humphrey
On Monday, 8 June 2020 16:32:07 BST Andrew Udvare wrote:

> Sounds like missing drivers. oldconfig didn't do everything it was
> supposed to. Moving across multiple major versions, this is to be
> expected. A lot of names of things have changed.
> 
> Do a comparison of your configuration between old and new.
> 
> diff -uN old-config-file /usr/src/linux/.config

Hmm. 4570 lines, but much of it can be discounted.

> Make sure to at least enable NVME with CONFIG_BLK_DEV_NVME=y and try
> booting 5.7 again.

Yes, they were both set already - I couldn't have booted 5.4.38 without them.

> Other than that, the naming scheme may have changed but I don't know about
> this. For better future-proofing, use a UUID of your root partition rather
> than a device name.
> 
> root=UUID=...
> 
> You can get this UUID with the blkid command.

I'll try this in a minute - thanks for the idea. I've stuck with device names 
so far because (i) I can read them, and (ii) I can't ever have more than one 
NVMe device in this box.

-- 
Regards,
Peter.






Re: [gentoo-user] Raspberry Pi with 8GB

2020-06-09 Thread james

On 6/8/20 8:27 PM, Alexandru N. Barloiu wrote:
> On Mon, 2020-06-08 at 20:16 -0400, james wrote:
>> Any pointers to codes that create a cluster and run on 64Bit arm low
>> power boards is welcome to post to this thread, or drop me a private
>> note.
>
> There is no such thing as cluster for arm. It's just daemons. You equip
> each pi with the things it's going to need. You treat them as normal
> computers.


Huh. Well, I've run across dozens of projects, some as old as 2015. 
Sure, I have not 'dug into' the details, but they seem to be quite common:


https://magpi.raspberrypi.org/articles/build-a-raspberry-pi-cluster-computer

https://makezine.com/projects/build-a-compact-4-node-raspberry-pi-cluster/

https://projects.raspberrypi.org/en/projects/build-an-octapi/

https://www.hackster.io/aallan/a-4-node-raspberry-pi-cluster-e19273

Granted, the term 'cluster' in the linux world is as open as the word 
'cocktail' at a social gathering; ymmv.





> If you don't know how to start...
> https://gentoo.osuosl.org/experimental/arm64/



Great link. But I typically avoid all things 'systemD' centric.  (no 
discussion, just my preference).


However I did find this stage 4:


stage4-arm64-minimal-20190115.tar.bz2

at https://gentoo.osuosl.org/experimental/arm64/old/

Any idea how chip specific this stage 4 for arm64 is?



> there are actually some modern stage3 images. I suggest you google how
> to emulate arm64 using qemu-static. google crossdev as well. There are
> wonderful resources on the forums, some of which I participated in.


Not applicable. There are always a myriad of nuances with this approach, 
as I often stray into unique and exotic hardware extensions. Some run 
'clusters' on a collective of itty-bitty IoT devices, because they are 
fairly close together over RF links. Folks at the companies that build 
chipsets are very advanced in this venue. Most of it is DoD related and 
quite hush-hush. A billionaires' club, to categorize it accurately. But 
there is no issue with gentoo folks finding their own pathways forward 
with clustering arm/micro devices. It is the future, and even IoT 
security semantics will be based on each IoT node's performance metrics 
as opposed to traditional (bloated) security codes. These IoT comm 
links are like a predictable wave. Monitoring the wave, in the RF 
domain, shows where and when a small portion of (for example) field IoT 
sensors are stressed (under a heavier load than normal), so you do not 
have to strictly depend on specific codes and filters to detect anomalies.


Monitoring and matching of various domains yields startling results. BATM 
it is more of an art form than consistent technology. Surely the good 
folks of Gentoo will validate a pathway forward.


What I have discovered is that there are an enormous number of very technical 
folks who routinely use gentoo, but keep it a secret.




> Good luck and happy hacking.



Gentoo, hacking and exotic hardware are more of an addiction than a 
source of joy. Be at peace, and


THANKS for the link,
James




Re: [gentoo-user] 100% CPU load is different to 100% CPU load?

2020-06-09 Thread tuxic
On 06/09 07:06, Rich Freeman wrote:
> On Tue, Jun 9, 2020 at 5:17 AM  wrote:
> >
> > What is the difference between 100% CPU load and 100% CPU load to
> > create such a difference in temperature?
> > How is X% load calculated?
> >
> 
> I think a lot more detail around what you're actually running would be
> needed to provide more insight here.  I can think of a few things that
> could cause this:
> 
> The kernel maintains a number of stats around CPU load including %
> utilized by userspace, the kernel, IRQs, IO waiting, and truly idle
> time.  Depending on where you were getting that "100%" figure I can't
> be sure what was included in that.  If it was just 100-idle then you
> could have had IO waiting included in the load - this is time when you
> have a running process that wants to do something but it is blocked on
> IO, such as reading from a disk.  If you have 12 threads all doing
> random IO from a spinning hard disk you can easily end up with the CPU
> spending a whole lot of time doing nothing, but where it is otherwise
> "busy."  If the stat was system+user then that reflects actual CPU
> processing activity.
> 
> Next, not all instructions are created equally, and a CPU core is a
> fairly complex device that has many subdivisions.  There are circuits
> that do nothing but integer or floating-point math, others that
> fetch/store data, some that do logical comparisons, and so on.  While
> the OS tries to keep all the CPU cores busy, the CPU core itself also
> tries to keep all of its components busy (something aided by
> optimizing compilers).  However, the CPU only has so many instructions
> in the pipeline at one time, and they often have interdependencies,
> and so the CPU can only use out-of-order execution to a limited degree
> to keep all its parts active.  If you get a lot of cache misses the
> CPU might spend a lot of time waiting for memory access.  If you get a
> lot of one type of instruction in a row some parts of the CPU might be
> sitting idle.  Depending on the logical flow you might get a larger or
> smaller number of speculative execution misses that result in waiting
> for the pipeline to be flushed.  This can result in the power
> consumption of the CPU varying even if it is "100% busy."  It could be
> 100% busy executing 1 instruction per clock, or it could be 100% busy
> executing 4 instructions per clock.  Either way the processor queue is
> 100% blocked, but the instructions are being retired at different
> rates.  Modern CPUs can reduce the power consumption by idle
> components so part of a CPU core can be using its maximum power draw
> while another part of the same core can be using very little power.
> The end result of this at scale is that the CPU can produce different
> amounts of heat at 100% use depending on what it is actually doing.
> 
> I'm sure people have written about this extensively because it is very
> important with benchmarking.  When individually given two uniform
> sets of tasks a CPU might execute a certain number of tasks per second,
> but when you combine the two sets of tasks and run them in parallel
> the CPU might actually be able to perform more tasks per second
> combined, because it is better able to utilize all of its components.
> A lot of synthetic loads may not fully load the CPU unless it was
> designed to balance the types of instructions generated.  Natural
> loads often fail to load a CPU fully due to the need for IO waiting.
> 
> So, I guess we can get back to your original question.  Generally 100%
> load means that from the kernel scheduler's perspective the CPU has
> been assigned threads to execute 100% of the time.  A thread could be
> a big string of no-op instructions and from the kernel's perspective
> that CPU core is 100% busy since it isn't available to assign other
> threads to.
> 
> -- 
> Rich
> 

Hi Rich,

simply: WHOW! :)
Thanks a lot, Sir! ::))

That helps !

Cheers!
Meino






Re: [gentoo-user] 100% CPU load is different to 100% CPU load?

2020-06-09 Thread Rich Freeman
On Tue, Jun 9, 2020 at 5:17 AM  wrote:
>
> What is the difference between 100% CPU load and 100% CPU load to
> create such a difference in temperature?
> How is X% load calculated?
>

I think a lot more detail around what you're actually running would be
needed to provide more insight here.  I can think of a few things that
could cause this:

The kernel maintains a number of stats around CPU load including %
utilized by userspace, the kernel, IRQs, IO waiting, and truly idle
time.  Depending on where you were getting that "100%" figure I can't
be sure what was included in that.  If it was just 100-idle then you
could have had IO waiting included in the load - this is time when you
have a running process that wants to do something but it is blocked on
IO, such as reading from a disk.  If you have 12 threads all doing
random IO from a spinning hard disk you can easily end up with the CPU
spending a whole lot of time doing nothing, but where it is otherwise
"busy."  If the stat was system+user then that reflects actual CPU
processing activity.
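
As a concrete illustration of where such a figure comes from, here is a rough
sketch that derives a "100 minus idle" style percentage from two samples of
the aggregate cpu line in /proc/stat (fields: user nice system idle iowait
irq softirq steal ...; irq/softirq/steal end up in "rest" and are ignored
here for brevity, and iowait is counted as idle - exactly the ambiguity
described above):

read -r _ u1 n1 s1 i1 w1 rest < /proc/stat   # first sample
sleep 1
read -r _ u2 n2 s2 i2 w2 rest < /proc/stat   # second sample, one second later
busy=$(( (u2-u1) + (n2-n1) + (s2-s1) ))
total=$(( busy + (i2-i1) + (w2-w1) ))
echo "busy: $(( 100 * busy / total ))%"

Counting iowait as busy instead would make a disk-bound md5sum run look like
"100%" even though the cores spend much of the time waiting.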

Next, not all instructions are created equally, and a CPU core is a
fairly complex device that has many subdivisions.  There are circuits
that do nothing but integer or floating-point math, others that
fetch/store data, some that do logical comparisons, and so on.  While
the OS tries to keep all the CPU cores busy, the CPU core itself also
tries to keep all of its components busy (something aided by
optimizing compilers).  However, the CPU only has so many instructions
in the pipeline at one time, and they often have interdependencies,
and so the CPU can only use out-of-order execution to a limited degree
to keep all its parts active.  If you get a lot of cache misses the
CPU might spend a lot of time waiting for memory access.  If you get a
lot of one type of instruction in a row some parts of the CPU might be
sitting idle.  Depending on the logical flow you might get a larger or
smaller number of speculative execution misses that result in waiting
for the pipeline to be flushed.  This can result in the power
consumption of the CPU varying even if it is "100% busy."  It could be
100% busy executing 1 instruction per clock, or it could be 100% busy
executing 4 instructions per clock.  Either way the processor queue is
100% blocked, but the instructions are being retired at different
rates.  Modern CPUs can reduce the power consumption by idle
components so part of a CPU core can be using its maximum power draw
while another part of the same core can be using very little power.
The end result of this at scale is that the CPU can produce different
amounts of heat at 100% use depending on what it is actually doing.

I'm sure people have written about this extensively because it is very
important with benchmarking.  When individually given two uniform
sets of tasks a CPU might execute a certain number of tasks per second,
but when you combine the two sets of tasks and run them in parallel
the CPU might actually be able to perform more tasks per second
combined, because it is better able to utilize all of its components.
A lot of synthetic loads may not fully load the CPU unless it was
designed to balance the types of instructions generated.  Natural
loads often fail to load a CPU fully due to the need for IO waiting.

So, I guess we can get back to your original question.  Generally 100%
load means that from the kernel scheduler's perspective the CPU has
been assigned threads to execute 100% of the time.  A thread could be
a big string of no-op instructions and from the kernel's perspective
that CPU core is 100% busy since it isn't available to assign other
threads to.

-- 
Rich



[gentoo-user] 100% CPU load is different to 100% CPU load?

2020-06-09 Thread tuxic
Hi,

yesterday I md5summed my whole system with find . | xargs md5sum

With options I set xargs to use all 12 threads and to use as many args per
call as possible.
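
Presumably something along these lines - a sketch only, since the exact
options aren't shown above; -P 12 runs 12 md5sum processes in parallel, and
-n just has to be small enough that xargs actually starts more than one of
them:

find / -xdev -type f -print0 | xargs -0 -n 64 -P 12 md5sum > /root/md5sums.txt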

After a while I checked the CPU with glance and it showed that all
12 cores/threads were "loaded" at 100% each, constantly, over a
period of about 1h. The CPU temperature rose to about 47°C.

The CPU does not contain a GPU. Graphics goes extra.

Today I again created a CPU load over a comparable duration, with
a constant 100% CPU load on each core/thread.

Interestingly, it took no more than 3 minutes to reach a CPU
temperature of 65°C.

Both temperatures reflect the die temperature.

What is the difference between 100% CPU load and 100% CPU load to
create such a difference in temperature?
How is X% load calculated?

Cheers!
Meino







Re: [gentoo-user] clone root from HDD to SSD causes no video with NVIDIA driver

2020-06-09 Thread tuxic
On 06/09 08:23, J. Roeleveld wrote:
> On Tuesday, June 9, 2020 5:43:33 AM CEST tu...@posteo.de wrote:
> > Hi,
> > 
> > if even displaying the console login failed, then the whole display
> > system has gone nuts...but since the boot process as such (that is:
> > the bios prompt right after POSTing) is visible, I would say, that
> > there is no physical problem (that is: cable connected to port 2 of
> > the monitor while the monitor is switched to port 1 and such).
> > 
> > I would try this:
> > Boot your PC, ssh into the PC and download the according
> > nvidia-drivers directly from NVIDIA of the same version.
> > 
> > quickpkg the installed drivers and remove them
> > 
> > Check whether /usr/src/linux links to the kernel
> > sources of the kernel version you are booting.
> > 
> > Install the NVIDIA-drivers you have downloaded.
> > 
> > Reboot.
> > 
> > Background:
> > The portage package does not install nvidia-drivers correctly -
> > in my case, X and such works fine but RTX/Optix which is used
> > by Blender was defunct. After installing the original package
> > and masking the one which came with portage everything works
> > fine.
> > 
> > Cheers!
> > Meino
> > 
> > On 06/08 06:20, Raffaele BELARDI wrote:
> > > > -Original Message-
> > > > From: tu...@posteo.de 
> > > > Sent: Monday, June 8, 2020 18:14
> > > > To: gentoo-user@lists.gentoo.org
> > > > Subject: Re: [gentoo-user] clone root from HDD to SSD causes no video
> > > > with
> > > > NVIDIA driver
> > > > 
> > > > You said, you are able to ssh into your PC.
> > > > I would try the following: Boot the PC, ssh into it and disable the
> > > > start of X. Boot again: Are you getting the console login successfully?
> > > 
> > > X is started by lxdm, which is started by an /etc/local.d/ script. I
> > > removed that, after reboot I no longer see X processes, but no console
> > > except for SSH. Syslog still shows the nvidia module being loaded. I
> > > removed 'modules' from boot runlevel, nvidia is still loaded. I unmerged
> > > nvidia-drivers, nvidia still loaded. This is puzzling me.
> > > > Can you check, whether /dev , /proc , /sys and other directories of a
> > > > special function are created and filled correctly?
> > > > Are the permissions ok?
> > > > Is /run available and setup correctly?
> > > 
> > > To the best of my knowledge yes, they look fine.
> > > 
> > > > Are there any leftovers from the root@hd in /etc/fstab?
> > > 
> > > I rewrote fstab using UUID instead of /dev/sdx, there shouldn't be
> > > problems there.
> > > > If you get to console successfully, is it possible to start X from the
> > > > commandline? What is printed on the terminal?
> > > > What does X.log say?
> > > 
> > > No console except SSH. I'm not sure I can invoke startx from an SSH.
> > > 
> > > Thanks,
> > > 
> > > raffaele
> 
> 
> For plain console (TTY1,...) you need to enable EFI_FB in the kernel.
> 
> I use Nvidia and also have this enabled in the kernel, so it can work 
> together.
> I also use the nvidia-drivers package provided in Portage. Not everything is 
> added, but most is. The RTX/Optix libraries are added when using a "multilib" 
> profile, judging from the ebuild.
> 
> As you came from an older, non-GPT setup, I am assuming this is also the 
> first 
> attempt to boot using EFI?
> 
> --
> Joost
> 
> 
> 

Hi Joost,

I am on pure AMD64 with Gentoo...so "multilib" is not an option for
me.

Cheers!
Meino




Re: [gentoo-user] clone root from HDD to SSD causes no video with NVIDIA driver

2020-06-09 Thread J. Roeleveld
On Tuesday, June 9, 2020 5:43:33 AM CEST tu...@posteo.de wrote:
> Hi,
> 
> if even displaying the console login failed, then the whole display
> system has gone nuts...but since the boot process as such (that is:
> the bios prompt right after POSTing) is visible, I would say, that
> there is no physical problem (that is: cable connected to port 2 of
> the monitor while the monitor is switched to port 1 and such).
> 
> I would try this:
> Boot your PC, ssh into the PC and download the according
> nvidia-drivers directly from NVIDIA of the same version.
> 
> quickpkg the installed drivers and remove them
> 
> Check whether /usr/src/linux links to the kernel
> sources of the kernel version you are booting.
> 
> Install the NVIDIA-drivers you have downloaded.
> 
> Reboot.
> 
> Background:
> The portage package does not install nvidia-drivers correctly -
> in my case, X and such works fine but RTX/Optix which is used
> by Blender was defunct. After installing the original package
> and masking the one which came with portage everything works
> fine.
> 
> Cheers!
> Meino
> 
> On 06/08 06:20, Raffaele BELARDI wrote:
> > > -Original Message-
> > > From: tu...@posteo.de 
> > > Sent: Monday, June 8, 2020 18:14
> > > To: gentoo-user@lists.gentoo.org
> > > Subject: Re: [gentoo-user] clone root from HDD to SSD causes no video
> > > with
> > > NVIDIA driver
> > > 
> > > You said, you are able to ssh into your PC.
> > > I would try the following: Boot the PC, ssh into it and disable the
> > > start of X. Boot again: Are you getting the console login successfully?
> > 
> > X is started by lxdm, which is started by an /etc/local.d/ script. I
> > removed that, after reboot I no longer see X processes, but no console
> > except for SSH. Syslog still shows the nvidia module being loaded. I
> > removed 'modules' from boot runlevel, nvidia is still loaded. I unmerged
> > nvidia-drivers, nvidia still loaded. This is puzzling me.
> > > Can you check, whether /dev , /proc , /sys and other directories of a
> > > special function are created and filled correctly?
> > > Are the permissions ok?
> > > Is /run available and setup correctly?
> > 
> > To the best of my knowledge yes, they look fine.
> > 
> > > Are there any leftovers from the root@hd in /etc/fstab?
> > 
> > I rewrote fstab using UUID instead of /dev/sdx, there shouldn't be
> > problems there.
> > > If you get to console successfully, is it possible to start X from the
> > > commandline? What is printed on the terminal?
> > > What does X.log say?
> > 
> > No console except SSH. I'm not sure I can invoke startx from an SSH.
> > 
> > Thanks,
> > 
> > raffaele


For plain console (TTY1,...) you need to enable EFI_FB in the kernel.

I use Nvidia and also have this enabled in the kernel, so it can work 
together.
I also use the nvidia-drivers package provided in Portage. Not everything is 
added, but most is. The RTX/Optix libraries are added when using a "multilib" 
profile, judging from the ebuild.

As you came from an older, non-GPT setup, I am assuming this is also the first 
attempt to boot using EFI?

--
Joost