Re: [gentoo-user] UEFI kernel installation?

2019-06-16 Thread Grant Taylor

On 6/16/19 8:37 PM, Manuel McLure wrote:
For example, per IBM for AIX 
(https://developer.ibm.com/articles/au-aix7memoryoptimize3/):


"A more sensible rule is to configure the paging space to be half the 
size of RAM plus 4GB with an upper limit of 32GB. In systems with more 
than 32GB of RAM, or on systems where you are using LPAR and WPAR to 
help split your workload, you can be significantly more selective and 
specific about your memory requirements."


~chuckle~

This brings to mind a story at my last job where someone on another team 
supporting another customer was doing an emergency change to add SAN 
based swap space to an AIX server so that it would not run out of memory 
and crash.  There was a very badly written application that was 
mallocing memory and never freeing it.  So it was leaking memory and AIX 
was swapping the unused pages out of physical RAM to the page files. 
They had about 600 GB of swap allocated and the change was to add 
another 200 GB of swap so that they could make it to the next quarterly 
reboot in a couple of weeks.
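As a footnote for anyone facing the same emergency: AIX manages paging 
space with its own tools (mkps / chps), but the equivalent change on a 
Linux box is only a couple of commands (a sketch; the SAN-backed device 
name is hypothetical):

   mkswap /dev/mapper/san_swap2   # initialize the newly presented LUN as swap
   swapon /dev/mapper/san_swap2   # enable it immediately, no reboot needed
   swapon --show                  # confirm the extra space is live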




Re: [gentoo-user] UEFI kernel installation?

2019-06-16 Thread Grant Taylor

On 6/16/19 7:02 PM, Wols Lists wrote:

So you didn't read what I wrote ... Par for the course :-(


I did.  I still hear people say it today.  It's not old as in past tense.


The basic Unix mechanism needs twice ram.


I disagree.

It's inherent in the design of the thing. Whether linux no longer 
uses the Unix mechanism, or it's had the hell optimised out of it I 
don't know.


Either way, machines today get by on precious little swap - that's 
fine.


Historic note - the early linux 2.4 vanilla kernels enforced the twice 
ram rule - a lot of people who didn't read the release notes got nasty 
shocks when their machines locked up the moment they touched swap ...


I disagree because I ran 2.0, 2.2, 2.4, and 2.6 kernels without swap 
being twice the ram or greater.  Swap did get used.  They did not crash 
when accessing swap.


And okay my machine only has 16GB of ram (and 64GB of swap - 32GB 
each across two disks), but I'm pretty sure that if I followed your 
guidelines, an emerge would crash my system as the tmpfs ran out of 
space ...


I doubt it.

I've routinely done emerges on machines with < 16 GB of memory and 2 GB 
of swap.  Including llvm, clang, gcc, rust, Firefox and Thunderbird.


I routinely do an emerge -DuNe @world on a VPS with 1 GB of memory and 1 
GB of swap.  It works just fine.  If I want to speed things up I enlarge 
the VPS to 2 GB of memory and 1 GB of swap.  Granted, it doesn't try to 
compile things like Firefox and Thunderbird, and thus not Rust either.



And those people who wrote your guidelines?


I just looked again, and Red Hat has lowered their recommendation from 
what I remember from a few years ago.


Link - Table 15.1. Recommended System Swap Space - 
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/storage_administration_guide/ch-swapspace#tb-recommended-system-swap-space


RAM ≤ 2 GB → swap should be 2 times the amount of RAM
2 GB < RAM ≤ 8 GB → swap should be equal to the amount of RAM
8 GB < RAM ≤ 64 GB → swap should be at least 4 GB
RAM > 64 GB → swap should be at least 4 GB

Are they the same clueless people who believe the twice ram rule is 
pure fiction?


I don't consider Red Hat's official statement to be "clueless".

Seeing as how their rules include "twice the RAM" in the first 
condition, I don't think they thought it was pure fiction.



(As I said, it is *historical* *fact*).


I question the validity of that statement.

And why should I believe people who tell me the rule no longer 
applies, if they can't tell me WHY it no longer applies?  I'd love 
to be enlightened - why can't anybody do that?


I'm not saying you should believe people.

My opinion is that the 2 x RAM rule no longer applies because modern 
systems rarely utilize much swap space.  As such it's a waste of disk 
space to dedicate 2 x RAM to swap.


Look at the output of free.  Or better, run sysstat / SAR and watch swap 
usage.  How much does your system use?  How much disk space do you want 
to dedicate to something that's likely hardly being touched?
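
A minimal way to look, assuming the sysstat package is installed for sar:

   free -h        # one-shot snapshot; check the "Swap:" row
   sar -S 60 10   # sample swap utilization every 60 seconds, ten times
   vmstat 5       # "si" / "so" columns show memory swapped in / out per second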


Do what you want.

But be prepared to put the shoe on the other foot and explain why you 
think that you should have 2 x RAM on each disk.




Re: [gentoo-user] UEFI kernel installation?

2019-06-16 Thread Manuel McLure
On Sun, Jun 16, 2019 at 6:02 PM Wols Lists  wrote:

> And those people who wrote your guidelines? Are they the same clueless
> people who believe the twice ram rule is pure fiction? (As I said, it is
> *historical* *fact*). And why should I believe people who tell me the
> rule no longer applies, if they can't tell me WHY it no longer applies?
> I'd love to be enlightened - why can't anybody do that?
>
>
What we call UNIX today is an API, not an implementation, and different
UNIX implementations have completely different internals. You'd be
hard-pressed to find any original UNIX code in any modern UNIX. Linux was
written from scratch to implement the UNIX API, but other than some header
files (the subject of the SCO lawsuits) there is no UNIX code in Linux.

Modern UNIXes don't map the swap pages 1:1 to RAM pages like the original
UNIX code did. Instead they only map the pages that actually need to be
swapped. And some pages, such as executable code, don't get swapped at all
- instead the existing on-disk executable file or shared library file is
used as the "swap". Code and static data pages are also shared between
processes, saving even more RAM.
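
This is easy to see on a running Linux system: code pages show up as 
file-backed mappings rather than anonymous memory that would need swap. 
A quick sketch (assumes procps' pmap and a mounted /proc):

   pmap -x $$ | head                    # the shell's mappings; code pages map the binary itself
   grep 'r-xp' /proc/self/maps | head   # grep's own executable, file-backed regions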

For example, per IBM for AIX (
https://developer.ibm.com/articles/au-aix7memoryoptimize3/):

"A more sensible rule is to configure the paging space to be half the size
of RAM plus 4GB with an upper limit of 32GB. In systems with more than 32GB
of RAM, or on systems where you are using LPAR and WPAR to help split your
workload, you can be significantly more selective and specific about your
memory requirements."

-- 
Manuel A. McLure WW1FA  
...for in Ulthar, according to an ancient and significant law,
no man may kill a cat.   -- H.P. Lovecraft


Re: [gentoo-user] UEFI kernel installation?

2019-06-16 Thread Wols Lists
On 17/06/19 00:29, Grant Taylor wrote:
>> Drives are cheap. The old "swap is twice ram" rule actually isn't an
>> old wives' tale - the basic Unix swap mechanism NEEDS twice ram.
> 
> No, it doesn't.  Not any more.  It hasn't for quite a while.

So you didn't read what I wrote ... Par for the course :-(

That was a *historic* statement. It's as true today as it was ten years
ago, because it's not referring to today's reality.

The basic Unix mechanism needs twice ram. It's inherent in the design of
the thing. Whether linux no longer uses the Unix mechanism, or it's had
the hell optimised out of it I don't know.

Either way, machines today get by on precious little swap - that's fine.

Historic note - the early linux 2.4 vanilla kernels enforced the twice
ram rule - a lot of people who didn't read the release notes got nasty
shocks when their machines locked up the moment they touched swap ...


And okay my machine only has 16GB of ram (and 64GB of swap - 32GB each
across two disks), but I'm pretty sure that if I followed your
guidelines, an emerge would crash my system as the tmpfs ran out of
space ...

And those people who wrote your guidelines? Are they the same clueless
people who believe the twice ram rule is pure fiction? (As I said, it is
*historical* *fact*). And why should I believe people who tell me the
rule no longer applies, if they can't tell me WHY it no longer applies?
I'd love to be enlightened - why can't anybody do that?

Cheers,
Wol



Re: [gentoo-user] UEFI kernel installation?

2019-06-16 Thread Grant Taylor

On 6/16/19 1:14 PM, Wols Lists wrote:

I'd have a single /home partition


I was thinking of the other OS as more of a live distro copied to the 
system than anything else.  I wasn't thinking that the OP wanted to 
actively use the alternate distro frequently.  As such, I figure that 
most customizations can live on the main OS and its associated home.


Drives are cheap. The old "swap is twice ram" rule actually isn't an 
old wives' tale - the basic Unix swap mechanism NEEDS twice ram.


No, it doesn't.  Not any more.  It hasn't for quite a while.

Swap was FAR more important when there wasn't enough ram for the 
server's workload.  Or when the workload was transient like a multi-user 
system.  (Think terminals and / or telnet and / or ssh sessions for many 
users logged in and sporadically using the system.)


Red Hat's recommendation last I looked was the following:

If the system has ≤ 2 GB of memory, have 4 GB of swap (if you can).
If the system has > 2 GB of memory, have the same amount of swap as memory.
If the system has > 16 GB of memory, have 16 GB of swap.

Take a look at the output of free on most systems.  I'm betting that you 
won't find very much swap used, if any.  So dedicating 64 GB to swap on 
a machine with 32 GB of memory is … silly.  Especially if you never have 
more than about 100 MB ~ 1 GB of swap used (if that).


You can probably get away with < 1 GB of swap on many systems.  But 
there is one case where that small an amount of swap starts to be an 
issue: when you want to do things like take a dump of kernel memory and 
the stack.  That does need some space.  But I think 1 or 2 GB is plenty.


Okay, optimisations turned "must" into "should", and the swap mechanism 
was seriously revamped many moons ago and may have changed things 
completely (I've never managed to get anyone knowledgeable to tell 
me what happened), but what I do is always ...


We've also drastically changed how we use Unix systems.  We no longer 
have 25 ~ 250 people logged into them via terminals.  Now most Unix 
systems are dedicated to a single task, be it web serving, or a 
database, or something else.


Plus, we don't want those workloads to be running in swap, so we give 
the servers more memory thus making them even less likely to hit swap.


Multiply my mobo's *maximum* ram by two. For *each* disk, create a swap 
partition that size. Add all swap partitions in with equal priority.


I think that's bad advice and I discourage that.  Especially if you're 
running all SSDs and your system can take half a TB of memory.  Do you 
/really/ want to dedicate 1 TB of each SSD to swap?  Just how big are 
the SSDs anyway?  ;-)  Also, if you've got eight or more SSDs, your 
recommendation would mean that you have 16 times the memory as swap.  It 
would be even worse on a server with 24 x 2.5" SSDs.  That would be 48 
times the memory.


It has been pointed out that this is not necessarily a good idea, 
a fork bomb would cause havoc because the machine would grind to a 
swap halt long before the OOM killer realised anything was wrong, for 
example, but it suits me especially as I put /tmp and /var/tmp/portage 
on tmpfs.


To each his / her own.



Re: [gentoo-user] AMD RX GPU in Gentoo

2019-06-16 Thread mad.scientist.at.large
Sadly, in my experience graphics cards draw a fairly steady current independent 
of usage (it varies a bit, but less than 20%).  Some of the newer cards may be 
better.  There are utilities for at least some cards to adjust the clock on the 
GPU (normally used by mad gamers/miners to overclock).  These might be an 
option: CMOS circuits draw dynamic power in proportion to clock frequency and 
to the square of the supply voltage, so halving the clock roughly halves the 
dynamic draw, and can cut it to about a quarter if the lower clock also lets 
the card drop its core voltage.
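
As a footnote, the usual first-order model for dynamic CMOS power (a 
sketch that ignores static leakage) is:

   P_dyn ≈ α · C · V² · f

Halving f alone halves the dynamic draw; halving f and also dropping V to 
about 0.7 of its original value gives 0.5 × 0.49 ≈ 0.25, which is where 
the "about 1/4" figure comes from.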

I would also like to find some low-power graphics cards; the only option I'm 
aware of is to buy older, used cards on eBay.  I have a few machines I'd like 
to run headless most of the time, where basic text or basic graphics would be 
nice occasionally, but I don't want to waste 200W+ on a high-end graphics card 
that never gets exercised.

Other than that, I've used an external 80mm fan to blow air across the heat 
sink and out the adjacent slot (after the attached fan failed and I removed 
it); I mounted the fan in the drive cage.  The fans on graphics cards 
generally move air the worst way possible (just like many CPU heat sink/fan 
combos): blasting it into the face of a heatsink at high speed, so there is 
some flow through the channels but with massive turbulence and noise.  Oddly 
enough, although the built-in fan had a tachometer, the card doesn't pay any 
attention to what it thinks is the fan speed, even if the fan stalls 
completely, so you don't have to "fool" the graphics card when removing the 
provided fan and using an "external" one.


"Would you like to see us rule again, my friend?   All you have to do is follow 
the worms."  Pink Floyd, The Wall, Waiting for the worms




Jun 16, 2019, 3:36 PM by antli...@youngman.org.uk:

> On 11/06/2019 20:21, Alec Ten Harmsel wrote:
>
>>> Plus, my current GT730 is passively cooled. Are there any RX cards that
>>> at least spin down the fans when I'm working on desktop (no
>>> plasma/gnome, simple Openbox with no heavy gpu requirements). I really
>>> like silence!:-)
>>>
>> I can't hear mine at all right now.
>>
>
> The larger the fan, the slower (and quieter) it spins. So if it needs a fan, 
> try and make sure it's a big one.
>
> Cheers,
>
> Wol
>




Re: [gentoo-user] AMD RX GPU in Gentoo

2019-06-16 Thread Wol

On 11/06/2019 20:21, Alec Ten Harmsel wrote:

Plus, my current GT730 is passively cooled. Are there any RX cards that
at least spin down the fans when I'm working on desktop (no
plasma/gnome, simple Openbox with no heavy gpu requirements). I really
like silence!:-)

I can't hear mine at all right now.


The larger the fan, the slower (and quieter) it spins. So if it needs a 
fan, try and make sure it's a big one.


Cheers,

Wol




Re: [gentoo-user] UEFI kernel installation?

2019-06-16 Thread Peter Humphrey
On Sunday, 16 June 2019 18:00:12 BST Grant Taylor wrote:
> On 6/15/19 7:04 AM, Peter Humphrey wrote:
--->8
> > My question is: how much of the bootctl-installed image is essential
> > for booting? In other words, if I install the ~amd64 kernel (5.1.9),
> > what effect will that have on booting the rescue system; and if
> > I install the amd64 kernel (4.19.44), what effect will it have on
> > booting the plasma system?
> 
> I think it largely depends on where things are installed to.
> 
> Do the different installs share a common /boot?  Or do they have
> separate /boot partitions?
> 
> I assume the other file systems are separate e.g. / (root), /home.

The two systems share a /boot partition but are otherwise separate: nothing is 
common to the two systems.

> > In practice, I install the ~amd64 kernel and hope it doesn't affect
> > the rescue system too much; and it seems not to. Could I do better?
> 
> I don't know if it's better or not, but here's what I'd do.
> 
>   · I'd put each OS on its own drive (if at all possible).
>   · I'd have a separate /boot and / (root) for each OS.
>   · I'd configure UEFI boot menu entries for each OS.

The main drive is a 256GB NVMe, with no spare slots for a second. I do have a 
couple of ordinary SSDs in RAID-1 for data, but they're not significant here. 
I could have separate boot partitions, but I haven't found a need yet. I do 
have separate UEFI boot entries.

My point is that they boot different versions of the kernel, and I wondered 
what risk that involved, since the image installed in the UEFI space cannot be 
the same in the two cases. Mick seems to have answered that.

> That way, the only thing that's common between the OSs is the hardware
> and the UEFI.  They are separate from the UEFI boot menu onward.
> 
> I recommend the separate drives so that you can use the OS on the other
> drive to deal with a drive hardware issue.
> 
> I /might/ be compelled to try to leverage the two drives for some swap
> optimization.  I'd probably have a minimal amount of swap on the same
> drive as the OS and more swap on the other drive.  That way each OS has
> some swap in the event that the other drive fails, yet still has more
> swap if the other drive is happy.  So you benefit from the 2nd drive
> without being completely dependent on it.

Ah, swap. I have a 2GB swap partition near the beginning of the drive, pri=8, 
and a 16GB one near the end, pri=4. The latter is supposed to cope with huge 
compilations like chromium. (I tried USE=jumbo-build recently but it ground to 
a silent halt. I haven't spent time yet investigating why.)
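
For anyone copying this layout, the priorities are just mount options in 
/etc/fstab (a sketch; the partition numbers are hypothetical):

   /dev/nvme0n1p2  none  swap  sw,pri=8  0 0   # 2 GB, higher priority, used first
   /dev/nvme0n1p8  none  swap  sw,pri=4  0 0   # 16 GB overflow for huge builds

The kernel fills higher-priority swap first, so the small area absorbs 
day-to-day paging and the big one only sees action during monster 
compilations.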

To answer another point, I keep most of my user stuff in its own partition 
mounted under ~/common. I've done that for many years, since the days before 
I settled on a permanent distro. I didn't want, say, SuSE fighting with Gentoo 
for rights to my data. Both backups and general flexibility benefit from this 
arrangement.

-- 
Regards,
Peter.






Re: [gentoo-user] UEFI kernel installation?

2019-06-16 Thread Wols Lists
On 16/06/19 18:00, Grant Taylor wrote:
> I don't know if it's better or not, but here's what I'd do.
> 
>  · I'd put each OS on its own drive (if at all possible).
>  · I'd have a separate /boot and / (root) for each OS.
>  · I'd configure UEFI boot menu entries for each OS.
> 
> That way, the only thing that's common between the OSs is the hardware
> and the UEFI.  They are separate from the UEFI boot menu onward.
> 
I'd have a single /home partition, just be aware that this can cause
problems if your different OSes have different versions of e.g. KDE, as
they can fight each other ...

> I recommend the separate drives so that you can use the OS on the other
> drive to deal with a drive hardware issue.
> 
> I /might/ be compelled to try to leverage the two drives for some swap
> optimization.  I'd probably have a minimal amount of swap on the same
> drive as the OS and more swap on the other drive.  That way each OS has
> some swap in the event that the other drive fails, yet still has more
> swap if the other drive is happy.  So you benefit from the 2nd drive
> without being completely dependent on it.

Drives are cheap. The old "swap is twice ram" rule actually isn't an old
wives' tale - the basic Unix swap mechanism NEEDS twice ram. Okay,
optimisations turned "must" into "should", and the swap mechanism was
seriously revamped many moons ago and may have changed things completely
(I've never managed to get anyone knowledgeable to tell me what
happened), but what I do is always ...

Multiply my mobo's *maximum* ram by two. For *each* disk, create a swap
partition that size. Add all swap partitions in with equal priority.
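
In /etc/fstab terms that recipe looks something like this (a sketch; the 
device names are hypothetical).  Equal priorities make the kernel 
round-robin between the areas, effectively striping swap across the disks:

   /dev/sda2  none  swap  sw,pri=5  0 0
   /dev/sdb2  none  swap  sw,pri=5  0 0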

It has been pointed out that this is not necessarily a good idea, a fork
bomb would cause havoc because the machine would grind to a swap halt
long before the OOM killer realised anything was wrong, for example, but
it suits me especially as I put /tmp and /var/tmp/portage on tmpfs.

Cheers,
Wol



Re: [gentoo-user] Kernel 4.19.50 constant kernel hard locks / solved

2019-06-16 Thread Corbin Bird
On 6/16/19 5:34 AM, Adam Carter wrote:
> On Sat, Jun 15, 2019 at 6:15 AM Corbin Bird wrote:
> 
> Disclosure :
> 1 : The CPU is a AMD FX-9590 ( Fam15h )
> 2 : Kernel command line parameter "eagerfpu=on" is being used.
> 
> This kernel option was causing constant kernel hard locks.
> -
> General setup --> [ ] CPU isolation.
> -
> Anything that accessed the FPU would cause the CPUs ( SMP ) to "lose
> sync".
> 
> 
> Just to clarify; do I have this right?
> 
> General setup --> [ ] CPU isolation. -> this is the broken setting
> 
> General setup --> [*] CPU isolation. -> this works (and is the default)
> 
> AFAICT in 5.x onward all the lazy FPU code is gone so it's just eager.
> 

  General setup --> [*] CPU isolation.

( this is the setting with which the lock-ups can be expected, i.e. CPU 
isolation enabled )
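
For anyone checking their own build, that menu entry corresponds to the 
CONFIG_CPU_ISOLATION symbol (a sketch; assumes kernel sources under 
/usr/src/linux):

   grep CONFIG_CPU_ISOLATION /usr/src/linux/.config
   # CONFIG_CPU_ISOLATION=y  <- "CPU isolation" enabled under General setup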

Corbin



Re: [gentoo-user] UEFI kernel installation?

2019-06-16 Thread Grant Taylor

On 6/15/19 7:04 AM, Peter Humphrey wrote:

Hello list,


Hi,

The main system on this box is ~amd64 plasma, but I also have a 
small rescue system which is amd64, no desktop. I use bootctl from 
systemd-boot to manage the UEFI images.


I don't have much experience with UEFI.  But I do have some thoughts.

My question is: how much of the bootctl-installed image is essential 
for booting? In other words, if I install the ~amd64 kernel (5.1.9), 
what effect will that have on booting the rescue system; and if 
I install the amd64 kernel (4.19.44), what effect will it have on 
booting the plasma system?


I think it largely depends on where things are installed to.

Do the different installs share a common /boot?  Or do they have 
separate /boot partitions?


I assume the other file systems are separate e.g. / (root), /home.

In practice, I install the ~amd64 kernel and hope it doesn't affect 
the rescue system too much; and it seems not to. Could I do better?


I don't know if it's better or not, but here's what I'd do.

 · I'd put each OS on its own drive (if at all possible).
 · I'd have a separate /boot and / (root) for each OS.
 · I'd configure UEFI boot menu entries for each OS.

That way, the only thing that's common between the OSs is the hardware 
and the UEFI.  They are separate from the UEFI boot menu onward.


I recommend the separate drives so that you can use the OS on the other 
drive to deal with a drive hardware issue.


I /might/ be compelled to try to leverage the two drives for some swap 
optimization.  I'd probably have a minimal amount of swap on the same 
drive as the OS and more swap on the other drive.  That way each OS has 
some swap in the event that the other drive fails, yet still has more 
swap if the other drive is happy.  So you benefit from the 2nd drive 
without being completely dependent on it.




Re: [gentoo-user] Emerge wants to downgrade icu

2019-06-16 Thread netfab
On 16/06/19 at 10:22, Alarig Le Lay wrote:
> Hi, sorry for the delay.
> 
> On mer. 12 juin 12:31:31 2019, netfab wrote:
> > Try --autounmask-backtrack=y emerge option.
> 
> Thanks a lot, it worked.
> But why does portage bother about icu so suddenly?
> 

Because the app-office/libreoffice-bin package (here 6.1.5.2) needs to be
updated/upgraded for each dev-libs/icu update, and that takes time.
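
A sketch of typical usage, combining the option with an ordinary world 
update:

   emerge --autounmask-backtrack=y -avuDN @world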



Re: [gentoo-user] Kernel 4.19.50 constant kernel hard locks / solved

2019-06-16 Thread Adam Carter
On Sat, Jun 15, 2019 at 6:15 AM Corbin Bird  wrote:

> Disclosure :
> 1 : The CPU is a AMD FX-9590 ( Fam15h )
> 2 : Kernel command line parameter "eagerfpu=on" is being used.
>
> This kernel option was causing constant kernel hard locks.
> -
> General setup --> [ ] CPU isolation.
> -
> Anything that accessed the FPU would cause the CPUs ( SMP ) to "lose
> sync".
>

Just to clarify; do I have this right?

General setup --> [ ] CPU isolation. -> this is the broken setting

General setup --> [*] CPU isolation. -> this works (and is the default)

AFAICT in 5.x onward all the lazy FPU code is gone so it's just eager.


Re: [gentoo-user] Where to put Hauppauge capture card firmware?

2019-06-16 Thread Andrew Udvare
On 16/06/2019 04:34, J. Roeleveld wrote:
> There should be a /lib/firmware containing all the firmware files.
> See the linux-firmware ebuild for the exact location.

Thanks. Installing linux-firmware and adding this file to my initrd in
Dracut configuration solved my issue.






Re: [gentoo-user] Where to put Hauppauge capture card firmware?

2019-06-16 Thread J. Roeleveld
On June 16, 2019 8:03:41 AM UTC, Andrew Udvare  wrote:
>I get this error in my boot log:
>
>Jun 16 03:54:55 limelight kernel: cx25840 2-0044: Direct firmware load
>for v4l-cx23885-avcore-01.fw failed with error -2
>Jun 16 03:54:55 limelight kernel: cx25840 2-0044: unable to open
>firmware v4l-cx23885-avcore-01.fw
>
>I can get this file, but I have not figured out where I am supposed to
>put it.
>
>Thanks

There should be a /lib/firmware containing all the firmware files.
See the linux-firmware ebuild for the exact location.

--
Joost
-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.



Re: [gentoo-user] Emerge wants to downgrade icu

2019-06-16 Thread Alarig Le Lay
Hi, sorry for the delay.

On mer. 12 juin 12:31:31 2019, netfab wrote:
> Try --autounmask-backtrack=y emerge option.

Thanks a lot, it worked.
But why does portage bother about icu so suddenly?

-- 
Alarig



[gentoo-user] Where to put Hauppauge capture card firmware?

2019-06-16 Thread Andrew Udvare
I get this error in my boot log:

Jun 16 03:54:55 limelight kernel: cx25840 2-0044: Direct firmware load
for v4l-cx23885-avcore-01.fw failed with error -2
Jun 16 03:54:55 limelight kernel: cx25840 2-0044: unable to open
firmware v4l-cx23885-avcore-01.fw

I can get this file, but I have not figured out where I am supposed to
put it.

Thanks





Re: [gentoo-user] UEFI kernel installation?

2019-06-16 Thread Peter Humphrey
On Saturday, 15 June 2019 16:22:50 BST Mick wrote:
> On Saturday, 15 June 2019 14:04:16 BST Peter Humphrey wrote:
> > Hello list,
> > 
> > The main system on this box is ~amd64 plasma, but I also have a small
> > rescue system which is amd64, no desktop. I use bootctl from systemd-boot
> > to manage the UEFI images.
> > 
> > My question is: how much of the bootctl-installed image is essential for
> > booting? In other words, if I install the ~amd64 kernel (5.1.9), what
> > effect will that have on booting the rescue system; and if I install the
> > amd64 kernel (4.19.44), what effect will it have on booting the plasma
> > system?
> > 
> > In practice, I install the ~amd64 kernel and hope it doesn't affect the
> > rescue system too much; and it seems not to. Could I do better?
> 
> Assuming I've understood your question correctly, since this is the same box
> and both kernels will be booting the same hardware, with similar modules,
> it shouldn't make any odds which OS installation you decide to boot into
> with either of the two kernels.

Let's hope that's right. Thanks, Mick.

-- 
Regards,
Peter.