[gentoo-user] Sudden case fan problems by switching to new kernel...

2020-04-14 Thread tuxic
Hi,

The configuration for kernel 5.6.3 (vanilla) works fine for me.
Then I changed to kernel 5.6.4 using the same configuration.

Suddenly the fan at the back of my PC case never stops
rotating at its highest speed.
When I change back to kernel 5.6.3, the problem is gone.

On the internet I found one report of a fan problem after
an Ubuntu upgrade... but that was five years ago...

I compared the entries in /sys/devices/platform/nct6775.2592/hwmon/hwmon1/*
between the two kernels but found nothing obvious.
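
For reference, this is roughly what I compared (just a sketch; it
assumes the usual nct6775 pwm attributes, and the exact hwmon path may
differ):

  H=/sys/devices/platform/nct6775.2592/hwmon/hwmon1
  grep . $H/pwm?_enable $H/pwm? $H/fan?_input 2>/dev/null
  # per the nct6775 docs: pwmN_enable 0 = no fan control (full speed),
  # 1 = manual, 2 and up = one of the chip's automatic modes
  sensors   # from lm_sensors, for a quick overview

If pwm?_enable had flipped to 0 (or the automatic mode had changed)
between 5.6.3 and 5.6.4, that would at least explain a fan stuck at
full speed.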

The CPU temperature was around 45° C in both cases.

Waiting does not help either...
(in case some kind of 'settling time' has changed... or so...)

Is there any other place to investigate?
What could be the root cause of this?

Cheers!
Meino








Re: [gentoo-user] xorg-server

2020-04-14 Thread Jorge Almeida
On Tue, Apr 14, 2020 at 9:22 PM Dale  wrote:
>
> Jorge Almeida wrote:

> >>> "Use elogind to get control over framebuffer when running as regular
> >>> user"
> >>>



>
> If you have consolekit, PAM, elogind and such disabled, I'm not sure
> what, if anything, will change.  I'd think by disabling elogind, you
> would be back to how you were before it *attempted* to add it.  In
> other words, nothing changes.  That's my thinking.

Yes, that seems right. I just added "-elogind" to make.conf and that's
it. But I'm really curious about the framebuffer stuff. As for other
stuff (mounting USB, etc.), doing it by hand is fine.
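
For the record, roughly like this (a sketch; either the global flag in
make.conf or a per-package entry does the job):

  # /etc/portage/make.conf
  USE="${USE} -elogind"

  # or per package, e.g. /etc/portage/package.use/xorg-server
  x11-base/xorg-server -elogind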
Cheers

Jorge



Re: [gentoo-user] xorg-server

2020-04-14 Thread Dale
Jorge Almeida wrote:
> On Tue, Apr 14, 2020 at 7:29 PM tastytea  wrote:
>> On 2020-04-14T19:16+0100
>> Jorge Almeida  wrote:
>>
>>> "Use elogind to get control over framebuffer when running as regular
>>> user"
>>>
>>> Could someone explain what this entails? What happened before this USE
>>> variable was created? What will I miss if I disable it?
>> ConsoleKit2 is unmaintained, elogind is the replacement. If you don't
>> use systemd, read `eselect news read new` or
>> .
>>
> OK, I get it. I don't use ConsoleKit2, and I have "-consolekit" in
> make.conf, so it's just a matter of adding "-elogind" to make.conf. I
> now understand why updating world suddenly wanted to pull in PAM.
> What I would still like to understand is what the consequences of
> [not] enabling this stuff are for xorg-server. What kind of control
> over the framebuffer is meant by the USE description quoted above?
>
> Thanks
>
> Jorge
>
>


I'm not sure I can answer all your questions but I'll try to provide
some info.  Since it seems you are not using consolekit, PAM or friends,
you must do everything manually when it comes to permissions.  Whether
it is consolekit or elogind, it basically allows users to do certain
things that normally only root can do - for example, mounting USB
sticks.  It seems, to me at least, that it also allows graphical
environments to use elogind to manage the session when logged in as
well.  I started a thread about this a while back that should be
archived somewhere.

I'll also add this for those who use elogind already; OP, this may
interest you as well.  I did my usual Sunday upgrade last night.  When
you log out of whatever GUI you use, restart elogind before logging back
in.  I don't recall seeing elogind in the list, it was a long list, but
it seems something upgraded that needs elogind restarted to work right.
Here is what I noticed that wasn't right.  I could not mount anything
external, USB sticks or my external backup drive.  It would give me an
error about permissions.  Logging into Konsole took minutes instead of
seconds to accept my password.  Logging into the GUI took a long time
too, much longer than usual.  Any program that asked for a password also
took forever to start, if it started at all.  Some just died off and
went to /dev/null.  As soon as I could, I logged out, went to the boot
runlevel, restarted elogind since it is in the boot runlevel, and
everything went back to normal.  OP, this may give you some idea what
all elogind does or has an effect on.
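
For what it's worth, the restart itself was just the usual OpenRC bit,
roughly (a sketch; I have elogind in the boot runlevel, your setup may
differ):

  rc-status boot | grep elogind   # confirm it's in the boot runlevel
  rc-service elogind restart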

If you have consolekit, PAM, elogind and such disabled, I'm not sure
what, if anything, will change.  I'd think by disabling elogind, you
would be back to how you were before it *attempted* to add it.  In
other words, nothing changes.  That's my thinking.

Dale

:-)  :-) 



Re: [gentoo-user] xorg-server

2020-04-14 Thread Jorge Almeida
On Tue, Apr 14, 2020 at 7:29 PM tastytea  wrote:
>
> On 2020-04-14T19:16+0100
> Jorge Almeida  wrote:
>

> > "Use elogind to get control over framebuffer when running as regular
> > user"
> >
> > Could someone explain what this entails? What happened before this USE
> > variable was created? What will I miss if I disable it?
>
> ConsoleKit2 is unmaintained, elogind is the replacement. If you don't
> use systemd, read `eselect news read new` or
> .
>
OK, I get it. I don't use ConsoleKit2, and I have "-consolekit" in
make.conf, so it's just a matter of adding "-elogind" to make.conf. I
now understand why updating world suddenly wanted to pull in PAM.
What I would still like to understand is what the consequences of
[not] enabling this stuff are for xorg-server. What kind of control
over the framebuffer is meant by the USE description quoted above?

Thanks

Jorge



Re: [gentoo-user] xorg-server

2020-04-14 Thread tastytea
On 2020-04-14T19:16+0100
Jorge Almeida  wrote:

> I was going to update world and I just noticed a few strange details.
> For example, xorg-server  has a new (?) USE variable "elogind" which
> appears to be enabled by default. I suppose I can block it in
> package.use, but I'm curious about what it does. In
> https://packages.gentoo.org/useflags/elogind
> I found
> "Use elogind to get control over framebuffer when running as regular
> user"
> 
> Could someone explain what this entails? What happened before this USE
> variable was created? What will I miss if I disable it?

ConsoleKit2 is unmaintained, elogind is the replacement. If you don't
use systemd, read `eselect news read new` or
.

> Jorge Almeida
> 

tastytea

-- 
Get my PGP key with `gpg --locate-keys tasty...@tastytea.de` or at
.




[gentoo-user] xorg-server

2020-04-14 Thread Jorge Almeida
I was going to update world and I just noticed a few strange details.
For example, xorg-server  has a new (?) USE variable "elogind" which
appears to be enabled by default. I suppose I can block it in
package.use, but I'm curious about what it does. In
https://packages.gentoo.org/useflags/elogind
I found
"Use elogind to get control over framebuffer when running as regular user"

Could someone explain what this entails? What happened before this USE
variable was created? What will I miss if I disable it?

Jorge Almeida



Re: [gentoo-user] Understanding fstrim...

2020-04-14 Thread Rich Freeman
On Tue, Apr 14, 2020 at 10:26 AM Wols Lists  wrote:
>
> On 14/04/20 13:51, Rich Freeman wrote:
> > I believe they have
> > to be PCIv3+ and typically have 4 lanes, which is a lot of bandwidth.
>
> My new mobo - the manual says if I put an nvme drive in - I think it's
> the 2nd nvme slot - it disables the 2nd graphics card slot :-(
>

First, there is no such thing as an "nvme slot".  You're probably
describing an M.2 slot.  This matters as I'll get to later...

As I mentioned, many motherboards share a PCIe slot with an M.2 slot.
The CPU + chipset only has so many PCIe lanes.  So unless there are
spare lanes that aren't already being used for expansion slots, they
have to double up.  By doubling up they can basically stick more
x2/x4/x8 PCIe slots into the motherboard than they could if they
completely dedicated them.  Or they can let that second GPU talk
directly to the CPU instead of having to go through the chipset (I
think - I'm not really an expert on PCIe), and let the NVMe talk
directly to the CPU if you aren't using that second GPU.

>
> But using the 1st nvme slot disables a sata slot, which buggers my raid
> up ... :-(
>

While that might be an M.2 slot, it probably isn't an "nvme slot".
M.2 can carry either SATA or PCIe.  Some motherboards wire a given M.2
slot for one, the other, or both.  And M.2 drives can be either, so you
need to be sure you're using the right one.  If you get the wrong kind
of drive it might not work, or it might end up being a SATA drive when
you intended to use NVMe.  A SATA drive will have none of the benefits
of NVMe and will be functionally no different from a regular 2.5" SSD
that plugs into a SATA cable - it is just a different form factor.
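
An easy way to check what you actually ended up with, as a quick sketch
(column names can vary a bit between lsblk versions):

  lsblk -d -o NAME,TRAN,MODEL
  # TRAN reads "nvme" for a PCIe/NVMe drive and "sata" for a SATA one,
  # regardless of the form factor it plugs into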

It sounds like they doubled up a PCIe port on the one M.2 connector,
and they doubled up a SATA port on the other M.2 connector.  It isn't
necessarily a bad thing, but obviously you need to make tradeoffs.

If you want a motherboard with a dozen x16 PCIe's, 5 M.2's, 14 SATA
ports, and 10 USB3's on it there is no reason that shouldn't be
possible, but don't expect to find it in the $60 bargain bin, and
don't expect all those lanes to talk directly to the CPU unless you're
using EPYC or something else high-end.  :)

-- 
Rich



Re: [gentoo-user] Understanding fstrim...

2020-04-14 Thread Wols Lists
On 14/04/20 13:51, Rich Freeman wrote:
> I believe they have
> to be PCIv3+ and typically have 4 lanes, which is a lot of bandwidth.

My new mobo - the manual says if I put an nvme drive in - I think it's
the 2nd nvme slot - it disables the 2nd graphics card slot :-(

Seeing as I need two graphics cards to double-head my system, that means
I can't use two nvmes :-(

But using the 1st nvme slot disables a sata slot, which buggers my raid
up ... :-(

Oh well. That's life :-(

Cheers,
Wol



Re: [gentoo-user] Understanding fstrim...

2020-04-14 Thread Rich Freeman
On Mon, Apr 13, 2020 at 11:32 PM  wrote:
>
> Since I have an NVMe drive on an M.2 socket I would
> be interested at what level/stage (?word? ...sorry...)
> the data take a different path than with the classical SATA
> SSDs.
>
> Is this just "protocol" or is there something different?

NVMe involves hardware and protocol changes, and of course software
changes driven by both.

First, a disclaimer, I am by no means an expert in storage transport
protocols/etc and obviously there are a ton of standards so that any
random drive works with any random motherboard/etc.  If I missed
something or have any details wrong please let me know.

From the days of IDE to pre-NVMe on PC the basic model was that the
CPU would talk to a host bus adapter (HBA) which would in turn talk to
the drive controller.  The HBA was usually on the motherboard but of
course it could be in an expansion slot.

The CPU talked to the HBA using the bus standards of the day
(ISA/PCI/PCIe/etc) and this was of course a high-speed bus designed to
work with all kinds of stuff.  AHCI is the latest generation of
protocols for communication between the CPU and the HBA so that any
HBA can work with any OS/etc using generic drivers.  The protocol was
designed with spinning hard drives in mind and has some limitations
with SSD.

The HBA would talk to the drive over SCSI/SAS/SATA/PATA and there are
a bunch of protocols designed for this.  Again, they were designed in
the era of hard drives and have some limitations.
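
You can see that layering on a running box; a quick sketch (output will
obviously depend on your hardware):

  lspci | grep -i sata       # the AHCI HBA shows up as its own PCI device
  ls /sys/class/ata_port/    # the ports that HBA exposes to the kernel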

The concept of NVMe is to ditch the HBA and stick the drive directly
on the PCIe bus which is much faster, and streamline the protocols.
This is a little analogous to the shift to IDE from the old days of
separate drive controllers - cutting out a layer of interface
hardware.  Early NVMes just had their own protocols, but a standard
was created so that just as with AHCI the OS can use a single driver
for any drive vendor.

At the hardware level the big change is that NVMe just uses the PCIe
bus.  The M.2 connector has a different form factor than a regular PCIe
slot, and I didn't look at the whole pinout, but you probably could
just make a dumb adapter to plug a drive right into a PCIe slot since
electronically I think they're just PCIe cards.  I believe they have
to be PCIv3+ and typically have 4 lanes, which is a lot of bandwidth.
And of course it is the same interface used for NICs/graphics/etc so
it is pretty low latency and can support hardware interrupts and all
that stuff.  It is pretty common for motherboards to share an M.2 slot
with a PCIe slot so that you can use one or the other but not both for
this reason - the same lanes are in use for both.
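
If you're curious what a given NVMe drive actually negotiated, a quick
sketch (01:00.0 is just a made-up example address; use whatever lspci
reports on your box):

  lspci | grep -i non-volatile                     # find the NVMe controller
  lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'   # run as root to see the caps
  # or the generic PCI link attributes via sysfs:
  cat /sys/class/nvme/nvme0/device/current_link_width
  cat /sys/class/nvme/nvme0/device/current_link_speed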

The Wikipedia article has a good comparison of the protocol-level
changes.  For the most part it mainly involves making things far more
parallel.  With ATA/AHCI you'd have a single queue of commands that was
only 32 commands deep.  With NVMe you can have up to 64K queues with up
to 64K commands in each, and 2048 different hardware interrupts for the
drive to be able to signal back when one command vs another has
completed (although I'm really curious if anybody takes any kind of
advantage of that - unless you have different drivers trying to use the
same drive in parallel it seems like MSI-X isn't saving much here -
maybe if you had a really well-trusted VM or something, or the command
set has some way to segment the drive virtually...).  This basically
allows billions of operations to be in progress at any given time, so
there is much less of a bottleneck in the protocol/interface itself.
NVMe is all about IOPS.
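
You can actually see some of that plumbing on a live system; a quick
sketch (nvme0n1 being whatever your drive is called, and queue and
interrupt counts will differ per drive and CPU):

  grep nvme /proc/interrupts   # one MSI-X vector per queue (nvme0q0, nvme0q1, ...)
  ls /sys/block/nvme0n1/mq/    # the block layer's view of the hardware queues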

I don't know all the gory details, but I wouldn't be surprised if, once
you get past this, many of the commands themselves are the same, just to
keep things simple.  Or maybe they just rewrote it all from scratch - I
didn't look into it and would be curious to hear from somebody who has.
Obviously the concepts of read/write/trim/etc are going to apply
regardless of the interface.

-- 
Rich