Re: [gentoo-user] NAS box and switching from Phenom II X6 1090T to FX-6300

2024-04-15 Thread Dale
Frank Steinmetzger wrote:
> On Mon, Apr 15, 2024 at 08:04:15AM -0500, Dale wrote:
>
>>> The physical connector is called M.2. The dimensions of the “sticks” are 
>>> given in a number such as 2280, meaning 22 mm wide and 80 mm long. There 
>>> are 
>>> different lengths available from 30 to 110 mm. M.2 has different “keys”, 
>>> meaning there are several variants of electrical hookup. Depending on that, 
>>> it can support SATA, PCIe, or both. NVMe is a protocol that usually runs 
>>> via 
>>> PCIe. So for a modern setup, one usually buys NVMe drives, meaning they are 
>>> connected via PCIe either directly to the CPU or over the chipset.
>>>
>>
>> Ahh, that's why some of them look a little different.  I was wondering
>> about that.  Keep in mind, I've never seen one in real life.  Just
>> pictures or videos, or people talking about them on this list. 
> I use one in my 10-year-old PC. The board only provides PCIe 2.0×2 to the 
> slot, so I only get around 1 GB/s instead of 3 which the SSD can reach. But 
> I bought the SSD with the intention of keeping it in the next build and I 
> don’t notice the difference anyways.
>
>>> There is also the other way around that: an adapter card for the M.2 slot 
>>> that gives you SATA ports.
>>>
>> I didn't know that.
> I actually thought we mentioned it already in an earlier “NAS thingy” 
> thread. :)
>
> https://www.reddit.com/r/selfhosted/comments/s0bf1d/m2_sata_expansion_anyone_use_something_like_this/
> https://www.amazon.de/dp/B09FZDQ6ZB
> Maybe you’ll find something if you search for the controller chip (PCIe to 
> SATA): JMB585. From what I’ve just read, though, the cheap Chinese adapters 
> don’t seem to be very sturdy. One person advised putting an M.2 → normal 
> PCIe adapter into the M.2 slot and then using a normal-formfactor controller 
> card. After all, an M.2 slot is just a PCIe ×4 slot with a different connector.
>
> BTW: there are also NVMe SSDs in the old 2.5″ format. This formfactor is 
> called U.2, but beware the enterprise-level prices.

It could have come up but slipped my mind.  Lots of things slip through
nowadays.  :/  Those you linked to are nice.  There are some PCIe cards
that go up to a dozen or so drives and still give pretty good speed.  A
PCIe card would be better for the new build, given the larger number of
SATA ports.  Either way, I try to spread things across two connection
points.  For example, I have a data and a crypt mount point, each with
three hard drives.  All my data mount point drives are on one card and
all my crypt mount point drives are on the other.  If one card quits all
of a sudden, only that one mount point is gone, and if needed, I could
move its drives to the other card until I can replace the failed one. 


>> I've seen some server type mobos that have SAS connectors which gives
>> several options.  Some of them tend to have more PCIe slots which some
>> regular mobos don't anymore.  Then there is that ECC memory as well.  If
>> the memory doesn't cost to much more, I could go that route.  I'm not
>> sure how much I would benefit from it but data corruption is a thing to
>> be concerned about. 
>> […]
>> The problem with those cards, some of the newer mobos don't have as many
>> PCIe slots to put those cards into anymore.  I think I currently have
>> two such cards in my current rig.  The new rig would hold almost twice
>> the number of drives.  Obviously, I'd need cards with more SATA ports. 
> Indeed consumer boards tend to get fewer normal PCIe slots. Filtering for 
> AM4 boards, the filter allowed me to filter up to 6 slots, whereas for AM5 
> boards, the filter stopped at 4 slots.
> AM4: https://skinflint.co.uk/?cat=mbam4=18869_5%7E20502_UECCDIMM%7E4400_ATX
> AM5: https://skinflint.co.uk/?cat=mbam5=18869_4%7E20502_UECCDIMM%7E4400_ATX

My new build will be a Ryzen 9 7900X, which is AM5.  I try to stick with
known good brands of mobos.  I currently use Gigabyte.  I'd be happy
with ASUS and a couple of others.  Supermicro, I think, is a good brand
for server-type gear.  I notice all the ones listed in your link for AM5
are ASUS.  I don't recall ever having one, but I've read they are good.
I wouldn't hesitate to buy one of them.


>> One reason I'm trying not to move to fast right now, besides trying to
>> save up money, I'm trying to find the right CPU, mobo and memory combo. 
>> None of them are cheap anymore.  Just the CPU is going to be around
>> $400.  The mobo isn't to far behind if I go with a non server one. 
> One popular choice for home servers is AM4’s Ryzen Pro 4650G. That’s an APU 
> (so with powerful internal graphics), but also with ECC support (hence the 
> Pro moniker). The APU is popular because 1) on AM4 only APUs have graphics 
> at all, 2) it allows for use as a compact media server, as no bulky GPU is 
> needed.
>
> Speaking of GPU: We’ve had the topic before, but keep in mind that if you go 
> with AM5, you don’t need a dGPU. Unless you go with one of those F 
> processors. So there is one more slot available.
>


I prefer to have a 

Re: [gentoo-user] Using the new binpkgs

2024-04-15 Thread Waldo Lemmer
Hi Peter,

"Profile version" is the correct term here.

I don't have the privileges required to edit the Handbook, but as soon as I
have the time, I will propose a fix and make sure it gets applied.

Thanks for getting back to me.

Regards,
Waldo

On Mon, Apr 15, 2024, 16:04 Peter Humphrey  wrote:

> On Monday, 15 April 2024 13:24:59 BST Waldo Lemmer wrote:
>
> > I'd like to understand your confusion. Where did you get 27 from?
>
> From ref 1, viz:
> "The architecture and profile targets within the sync-uri value do matter
> and
> should align to the respective computer architecture (amd64 in this case)
> and
> system profile selected in the Choosing the right profile section."
>
> I think it should refer to a family of profiles, or perhaps a series.
> Something
> to refer specifically to, in this case, 23.0.
>
> It might have saved me some sawdust under the finger-nails.  :)
>
> --
> Regards,
> Peter.
>
>
>
>
>


Re: [gentoo-user] Using the new binpkgs

2024-04-15 Thread Peter Humphrey
On Monday, 15 April 2024 13:24:59 BST Waldo Lemmer wrote:

> I'd like to understand your confusion. Where did you get 27 from?

From ref 1, viz: 
"The architecture and profile targets within the sync-uri value do matter and 
should align to the respective computer architecture (amd64 in this case) and 
system profile selected in the Choosing the right profile section."

I think it should refer to a family of profiles, or perhaps a series: 
something that refers specifically to, in this case, 23.0.

It might have saved me some sawdust under the finger-nails.  :)

-- 
Regards,
Peter.






Re: [gentoo-user] NAS box and switching from Phenom II X6 1090T to FX-6300

2024-04-15 Thread Frank Steinmetzger
On Mon, Apr 15, 2024 at 08:04:15AM -0500, Dale wrote:

> > The physical connector is called M.2. The dimensions of the “sticks” are 
> > given in a number such as 2280, meaning 22 mm wide and 80 mm long. There 
> > are 
> > different lengths available from 30 to 110 mm. M.2 has different “keys”, 
> > meaning there are several variants of electrical hookup. Depending on that, 
> > it can support SATA, PCIe, or both. NVMe is a protocol that usually runs 
> > via 
> > PCIe. So for a modern setup, one usually buys NVMe drives, meaning they are 
> > connected via PCIe either directly to the CPU or over the chipset.
> >
> 
> 
> Ahh, that's why some of them look a little different.  I was wondering
> about that.  Keep in mind, I've never seen one in real life.  Just
> pictures or videos, or people talking about them on this list. 

I use one in my 10-year-old PC. The board only provides PCIe 2.0×2 to the 
slot, so I only get around 1 GB/s instead of the 3 GB/s the SSD can reach. 
But I bought the SSD with the intention of keeping it for the next build, 
and I don’t notice the difference anyway.
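
(A rough sanity check on those numbers, assuming the usual link encodings:

   PCIe 2.0: 5 GT/s per lane, 8b/10b encoding    ->  ~500 MB/s per lane
             two lanes                           ->  ~1 GB/s
   PCIe 3.0: 8 GT/s per lane, 128b/130b encoding ->  ~985 MB/s per lane
             four lanes                          ->  ~3.9 GB/s

so a drive rated at about 3 GB/s really is held to roughly a third of its 
throughput on a 2.0 ×2 link.)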

> > There is also the other way around that: an adapter card for the M.2 slot 
> > that gives you SATA ports.
> >
> 
> I didn't know that.

I actually thought we mentioned it already in an earlier “NAS thingy” 
thread. :)

https://www.reddit.com/r/selfhosted/comments/s0bf1d/m2_sata_expansion_anyone_use_something_like_this/
https://www.amazon.de/dp/B09FZDQ6ZB
Maybe you’ll find something if you search for the controller chip (PCIe to 
SATA): JMB585. From what I’ve just read, though, the cheap Chinese adapters 
don’t seem to be very sturdy. One person advised putting an M.2 → normal 
PCIe adapter into the M.2 slot and then using a normal-formfactor controller 
card. After all, an M.2 slot is just a PCIe ×4 slot with a different connector.

BTW: there are also NVMe SSDs in the old 2.5″ format. This formfactor is 
called U.2, but beware the enterprise-level prices.

> I've seen some server type mobos that have SAS connectors which gives
> several options.  Some of them tend to have more PCIe slots which some
> regular mobos don't anymore.  Then there is that ECC memory as well.  If
> the memory doesn't cost to much more, I could go that route.  I'm not
> sure how much I would benefit from it but data corruption is a thing to
> be concerned about. 
> […]
> The problem with those cards, some of the newer mobos don't have as many
> PCIe slots to put those cards into anymore.  I think I currently have
> two such cards in my current rig.  The new rig would hold almost twice
> the number of drives.  Obviously, I'd need cards with more SATA ports. 

Indeed, consumer boards tend to get fewer normal PCIe slots. For AM4 boards 
the site let me filter for up to 6 slots, whereas for AM5 boards the filter 
stopped at 4.
AM4: https://skinflint.co.uk/?cat=mbam4=18869_5%7E20502_UECCDIMM%7E4400_ATX
AM5: https://skinflint.co.uk/?cat=mbam5=18869_4%7E20502_UECCDIMM%7E4400_ATX

> One reason I'm trying not to move to fast right now, besides trying to
> save up money, I'm trying to find the right CPU, mobo and memory combo. 
> None of them are cheap anymore.  Just the CPU is going to be around
> $400.  The mobo isn't to far behind if I go with a non server one. 

One popular choice for home servers is AM4’s Ryzen Pro 4650G. That’s an APU 
(so with powerful internal graphics), but also with ECC support (hence the 
Pro moniker). The APU is popular because 1) on AM4 only APUs have graphics 
at all, 2) it allows for use as a compact media server, as no bulky GPU is 
needed.

Speaking of GPUs: we’ve had the topic before, but keep in mind that if you go 
with AM5, you don’t need a dGPU unless you go with one of those F processors. 
So there is one more slot available.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

Order is one half of your life, but the other half is nicer.




Re: [gentoo-user] Re: [gentoo-dev] Current unavoidable use of xz utils in Gentoo

2024-04-15 Thread Frank Steinmetzger
On Sun, Mar 31, 2024 at 08:33:20AM -0400, Rich Freeman wrote:
> (moving this to gentoo-user as this is really getting off-topic for -dev)
> […]
> We're going on almost 20 years since the Snowden revelations, and back
> then the NSA was basically doing intrusion on an industrial scale.

Weeaalll, it’s been 11 years in fact. Considering that is more than 10 
years, one could argue it is approaching 20. ;-)

I can remember the year well because Snowden is the same vintage as I am and 
he turned 30 about when this all came out.

-- 
Grüße | Greetings | Salut | Qapla’
Others make mistakes, too -- but we have the most experience in it.




Re: [gentoo-user] NAS box and switching from Phenom II X6 1090T to FX-6300

2024-04-15 Thread Dale
Frank Steinmetzger wrote:
> On Sat, Apr 13, 2024 at 08:23:27AM -0500, Dale wrote:
>> Rich Freeman wrote:
>>> On Sat, Apr 13, 2024 at 8:11 AM Dale  wrote:
 My biggest thing right now, finding a mobo with plenty of PCIe slots.
 They put all this new stuff, wifi and such, but remove things I do need,
 PCIe slots.
>>> PCIe and memory capacity seem to have become the way the
>>> server/workstation and consumer markets are segmented.
>>>
>>> AM5 gets you 28x v5 lanes.  SP5 gets you 128x v5 lanes.  The server
>>> socket also has way more memory capacity, though I couldn't quickly
>>> identify exactly how much more due to the ambiguous way in which DDR5
>>> memory channels are referenced all over the place.  Suffice it to say
>>> you can put several times as many DIMMs into a typical server
>>> motherboard, especially if you have two CPUs on it (two CPUs likewise
>>> increases the PCIe capacity).
>> I see lots of mobos with those little hard drives on a stick.  I think
>> they called NVME or something, may have spelling wrong.
> The physical connector is called M.2. The dimensions of the “sticks” are 
> given in a number such as 2280, meaning 22 mm wide and 80 mm long. There are 
> different lengths available from 30 to 110 mm. M.2 has different “keys”, 
> meaning there are several variants of electrical hookup. Depending on that, 
> it can support SATA, PCIe, or both. NVMe is a protocol that usually runs via 
> PCIe. So for a modern setup, one usually buys NVMe drives, meaning they are 
> connected via PCIe either directly to the CPU or over the chipset.
>


Ahh, that's why some of them look a little different.  I was wondering
about that.  Keep in mind, I've never seen one in real life.  Just
pictures or videos, or people talking about them on this list. 


>> For most
>> people, that is likely awesome.  For me, I think I'd be happy with a
>> regular SSD.  Given that, I'd like them to make a mobo where one can say
>> cut off/disable that NVME thing and make use of that "lane" as a PCIe
>> slot(s).  Even if that means having a cable that hooks to the mobo and
>> runs elsewhere to connect PCIe cards. In other words, have one slot
>> that is expandable to say three or four slots with what I think is
>> called a back-plane.
> There is also the other way around that: an adapter card for the M.2 slot 
> that gives you SATA ports.
>

I didn't know that.  I looked on eBay, not sure exactly what to search
for or what they look like, but I found something that looks like an
adapter.  I only see one SATA connector, but more searching could turn
up something else. 


>> I have considered getting a server type mobo and CPU for my new build. 
> The only reason I got a server board for my little 4-slot NAS is to get ECC 
> support. (Plus you don’t get non-server Mini-ITX with more than four SATAs). 
> But it runs the smallest i3 I could get. It’s a NAS, not a workstation. It 
> serves files, nothing more. I don’t mind if updates take longer than on a 
> Desktop, which is why I don’t see a point in speccing it out to the top 
> CPU-wise. This only adds cost to acquisition and upkeep.
>
> I just did the profile switch to 23, and it rebuilt 685 packages in a little 
> over six hours, plus 1½ hours for gcc beforehand.
>
>> As you point out, they are packed with features I could likely use. 
> “Could likely”? Which features exactly? As you say yourself:
>

I've seen some server type mobos that have SAS connectors, which gives
several options.  Some of them tend to have more PCIe slots, which some
regular mobos don't anymore.  Then there is that ECC memory as well.  If
the memory doesn't cost too much more, I could go that route.  I'm not
sure how much I would benefit from it, but data corruption is a thing to
be concerned about. 


>> Thing is, the price tag makes me faint and fall out of my chair.  Even
>> used ones that are a couple years old, in the floor I go.  -_-  I looked
>> up a SP5 AMD CPU, pushing $800 just for the CPU on Ebay, used.  The mobo
>> isn't cheap either.  I don't know if that would even serve my purpose. 
> Exactly. Those boards and CPUs are made to run servers that serve entire 
> SMBs so that the employees can work on stuff at the same time. As a one-man 
> entity, I don’t expect you’ll ever really need that raw power. If it’s just 
> for SATA ports, you can get controller cards for those.
>

The problem with those cards is that some of the newer mobos don't have
as many PCIe slots to put them into anymore.  I think I currently have
two such cards in my current rig.  The new rig would hold almost twice
the number of drives.  Obviously, I'd need cards with more SATA ports. 


>> The biggest thing I need PCIe slots for, drive controllers.  I thought
>> about buying a SAS card and having it branch out into a LOT of drives. 
>> Still, I might need two cards even then. 
> But it would be the most logical choice.
>
>> It's like looking at the cereal isle in a store.  All those choices and
>> most of them . . . 

Re: [gentoo-user] Using the new binpkgs

2024-04-15 Thread Waldo Lemmer
Hi Peter,

I'd like to understand your confusion. Where did you get 27 from?

Cheers,
Waldo

On Mon, Apr 15, 2024, 13:25 Peter Humphrey  wrote:

> On Monday, 15 April 2024 12:19:02 BST Peter Humphrey wrote:
> > Hello list,
> >
> > I've decided to follow the instructions in [1] on one of my machines,
> which
> > runs too hot for my comfort on long emerges, but I need some advice,
> please:
> > where the wiki gives this [2], I'm setting 'amd64' as the <ARCH> and '27'
> > as the <PROFILE>.
> >
> > Then, when I try to emerge a package, I get this:
> >
> > !!! Error fetching binhost package info from
> > 'https://distfiles.gentoo.org/releases/amd64/binpackages/27/x86-64' !!!
> > HTTP Error 404: Not Found
> >
> > Then I tried setting 'default/linux/amd64/23.0/desktop/plasma' as the
> > <PROFILE>, but I still got the 404 error.
> >
> > What am I doing wrong?
>
> Sorry about the noise. The answer is simple: go to the ...binpackages page
> and
> look! The 27 should be 23.0.
>
> --
> Regards,
> Peter.
>
>
>
>
>


Re: [gentoo-user] Re: Slightly corrupted file systems when resuming from hibernation

2024-04-15 Thread Michael
On Sunday, 14 April 2024 19:41:41 BST Dr Rainer Woitok wrote:
> Greetings,
> 
> On Friday, 2024-01-05 18:46:09 +0100, I myself wrote:
> > ...
> > for a few months or so, off and on, my laptop fails to resume from
> > hibernation due to the "dirty bit" being set on the ext4 "/home" partition.
> 
> I was reading this flickering by on the screen, and it wasn't quite
> correct.  Meanwhile I found this in my "openrc.log":
> 
>fsck.fat 4.2 (2021-01-31)
>There are differences between boot sector and its backup.
>This is mostly harmless. Differences: (offset:original/backup)
>  65:01/00
>  Not automatically fixing this.
>Dirty bit is set. Fs was not properly unmounted and some data may be
> corrupt. Automatically removing dirty bit.
>*** Filesystem was changed ***
>Writing changes.
>/dev/sda1: 368 files, 116600/258078 clusters

Why have you set your /boot to be mounted at boot?

You can run 'fsck.fat -v /dev/sda1' after you unmount it to remove the dirty 
bit (if not already removed) and then change your fstab to 'noauto'.  Just 
remember to remount /boot before you make any changes to your boot manager/
kernels.
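
Sketched out, something like this (device node and fstab options are only an 
illustration based on your log, adjust as needed):

   ~ # umount /boot
   ~ # fsck.fat -a /dev/sda1      (-a repairs automatically, which includes
                                   clearing a stale dirty bit)

and the /etc/fstab entry along the lines of:

   /dev/sda1   /boot   vfat   noauto,noatime   0 2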


>/dev/sdb1: recovering journal
>/dev/sdb1: Clearing orphaned inode 54789026 (uid=1000, gid=1000,
> mode=0100600, size=32768) /dev/sdb1: Clearing orphaned inode 54788311
> (uid=1000, gid=1000, mode=0100600, size=553900) /dev/sdb1: clean,
> 172662/61054976 files, 36598898/244190385 blocks * Filesystems repaired
> 
> So one cause always is some problem  on disk "/dev/sda1/" ("/boot/") and
> another  cause are  one or  more  orphaned inodes  on disk  "/dev/sdb1/"
> ("/home/").   But while  the values of offset,  original and  backup for
> "/dev/sda1/" are  always the same  when this happens,  the number of or-
> phaned inodes  on "/dev/sdb1/"  and the inodes itself change from occur-
> rence to occurrence.  Besides it only happens sporadically when resuming
> from hibernation, not every time.   More precisely, the problem surfaces
> when resuming  from hibernation  but could as well  be caused during the
> hibernation process itself.
> 
> Does this ring some bell somewhere what could cause this?
> 
> Sincerely,
>   Rainer

Unlike the /boot partition, the /home partition has data written to it 
regularly.  The ext4 fs does not perform atomic writes - it is not a CoW fs.  
Therefore a sudden unsync'ed shutdown could leave it in a state of corruption 
- IF for some reason data in memory is not either fully written to disk or 
retained in memory.  The way ACPI interacts with firmware *should* ensure the 
S3 system state does not suspend I/O operations halfway through an inline 
write operation ... but ... MoBo firmware can be notoriously buggy and is 
typically frozen/abandoned within a couple of years by the OEMs.  In addition, 
kernel code changes and any previous symbiosis with the firmware can fall 
apart with a later kernel release.

On one PC of mine, with the same MoBo/CPU and the same version firmware, I 
have over the years experienced a whole repertoire of random problems resuming 
from suspend.  At this point in time I avoid placing this PC in sleep, because 
it always crashes with a Firefox related segfault, some time after waking up.

Check if the situation with /dev/sdb1 improves when you leave your /boot 
unmounted.  This may make more process time available for the system to finish 
I/O operations, which may then allow /dev/sdb1 to suspend cleanly.



Re: [gentoo-user] Using the new binpkgs

2024-04-15 Thread Peter Humphrey
On Monday, 15 April 2024 12:19:02 BST Peter Humphrey wrote:
> Hello list,
> 
> I've decided to follow the instructions in [1] on one of my machines, which
> runs too hot for my comfort on long emerges, but I need some advice, please:
> where the wiki gives this [2], I'm setting 'amd64' as the <ARCH> and '27'
> as the <PROFILE>.
> 
> Then, when I try to emerge a package, I get this:
> 
> !!! Error fetching binhost package info from
> 'https://distfiles.gentoo.org/releases/amd64/binpackages/27/x86-64' !!!
> HTTP Error 404: Not Found
> 
> Then I tried setting 'default/linux/amd64/23.0/desktop/plasma' as the
> <PROFILE>, but I still got the 404 error.
> 
> What am I doing wrong?

Sorry about the noise. The answer is simple: go to the ...binpackages page and 
look! The 27 should be 23.0.
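
For reference, a minimal binrepos.conf entry with the corrected value would 
look something like this (file and section names here are just the common 
defaults, adjust to whatever your installation uses):

   # /etc/portage/binrepos.conf/gentoobinhost.conf
   [gentoobinhost]
   sync-uri = https://distfiles.gentoo.org/releases/amd64/binpackages/23.0/x86-64/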

-- 
Regards,
Peter.






[gentoo-user] Using the new binpkgs

2024-04-15 Thread Peter Humphrey
Hello list,

I've decided to follow the instructions in [1] on one of my machines, which
runs too hot for my comfort on long emerges, but I need some advice, please:
where the wiki gives this [2], I'm setting 'amd64' as the <ARCH> and '27' as
the <PROFILE>.

Then, when I try to emerge a package, I get this:

!!! Error fetching binhost package info from 
'https://distfiles.gentoo.org/releases/amd64/binpackages/27/x86-64'
!!! HTTP Error 404: Not Found

Then I tried setting 'default/linux/amd64/23.0/desktop/plasma' as the <PROFILE>,
but I still got the 404 error.

What am I doing wrong?

1.   
https://wiki.gentoo.org/wiki/Handbook:AMD64/Installation/Base#Optional:_Adding_a_binary_package_host

2.   sync-uri = 
https://distfiles.gentoo.org/releases/<ARCH>/binpackages/<PROFILE>/x86-64/

-- 
Regards,
Peter.






Re: [gentoo-user] NAS box and switching from Phenom II X6 1090T to FX-6300

2024-04-15 Thread Frank Steinmetzger
On Sat, Apr 13, 2024 at 08:23:27AM -0500, Dale wrote:
> Rich Freeman wrote:
> > On Sat, Apr 13, 2024 at 8:11 AM Dale  wrote:
> >> My biggest thing right now, finding a mobo with plenty of PCIe slots.
> >> They put all this new stuff, wifi and such, but remove things I do need,
> >> PCIe slots.
> > PCIe and memory capacity seem to have become the way the
> > server/workstation and consumer markets are segmented.
> >
> > AM5 gets you 28x v5 lanes.  SP5 gets you 128x v5 lanes.  The server
> > socket also has way more memory capacity, though I couldn't quickly
> > identify exactly how much more due to the ambiguous way in which DDR5
> > memory channels are referenced all over the place.  Suffice it to say
> > you can put several times as many DIMMs into a typical server
> > motherboard, especially if you have two CPUs on it (two CPUs likewise
> > increases the PCIe capacity).
> 
> I see lots of mobos with those little hard drives on a stick.  I think
> they called NVME or something, may have spelling wrong.

The physical connector is called M.2. The dimensions of the “sticks” are 
given in a number such as 2280, meaning 22 mm wide and 80 mm long. There are 
different lengths available from 30 to 110 mm. M.2 has different “keys”, 
meaning there are several variants of electrical hookup. Depending on that, 
it can support SATA, PCIe, or both. NVMe is a protocol that usually runs via 
PCIe. So for a modern setup, one usually buys NVMe drives, meaning they are 
connected via PCIe either directly to the CPU or over the chipset.

> For most
> people, that is likely awesome.  For me, I think I'd be happy with a
> regular SSD.  Given that, I'd like them to make a mobo where one can say
> cut off/disable that NVME thing and make use of that "lane" as a PCIe
> slot(s).  Even if that means having a cable that hooks to the mobo and
> runs elsewhere to connect PCIe cards. In other words, have one slot
> that is expandable to say three or four slots with what I think is
> called a back-plane.

There is also the other way around that: an adapter card for the M.2 slot 
that gives you SATA ports.

> I have considered getting a server type mobo and CPU for my new build. 

The only reason I got a server board for my little 4-slot NAS is to get ECC 
support. (Plus you don’t get non-server Mini-ITX with more than four SATAs). 
But it runs the smallest i3 I could get. It’s a NAS, not a workstation. It 
serves files, nothing more. I don’t mind if updates take longer than on a 
Desktop, which is why I don’t see a point in speccing it out to the top 
CPU-wise. This only adds cost to acquisition and upkeep.

I just did the profile switch to 23, and it rebuilt 685 packages in a little 
over six hours, plus 1½ hours for gcc beforehand.

> As you point out, they are packed with features I could likely use. 

“Could likely”? Which features exactly? As you say yourself:

> Thing is, the price tag makes me faint and fall out of my chair.  Even
> used ones that are a couple years old, in the floor I go.  -_-  I looked
> up a SP5 AMD CPU, pushing $800 just for the CPU on Ebay, used.  The mobo
> isn't cheap either.  I don't know if that would even serve my purpose. 

Exactly. Those boards and CPUs are made to run servers that serve entire 
SMBs so that the employees can work on stuff at the same time. As a one-man 
entity, I don’t expect you’ll ever really need that raw power. If it’s just 
for SATA ports, you can get controller cards for those.

> The biggest thing I need PCIe slots for, drive controllers.  I thought
> about buying a SAS card and having it branch out into a LOT of drives. 
> Still, I might need two cards even then. 

But it would be the most logical choice.

> It's like looking at the cereal aisle in a store.  All those choices and
> most of them . . . . are corn.  ROFL 

Nice one.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

I’ve been using vi for 15 years, because I don’t know with which command
to close it.




Re: [gentoo-user] merge-usr and lib[ow]crypt*

2024-04-15 Thread Joost Roeleveld




--- Original message ---
From: Matthias Hanft 
To: gentoo-user@lists.gentoo.org
Date: Mon, 15 Apr 2024 10:14:18 +0200



Hi,

after updating the kernels to the latest stable version (6.6.21)
and updating the profiles from 17.1 to 23.0, the last update step
would be "merge-usr" as described at https://wiki.gentoo.org/wiki/Merge-usr
in order to have complete up-to-date systems.

But my two (nearly identical) systems generate different output
when running "merge-usr --dryrun":

Dry run on system 1 (a bit older than system 2) shows:

INFO: Migrating files from '/bin' to '/usr/bin'
INFO: Skipping symlink '/bin/awk'; '/usr/bin/awk' already exists
INFO: No problems found for '/bin'
INFO: Migrating files from '/sbin' to '/usr/bin'
INFO: No problems found for '/sbin'
INFO: Migrating files from '/usr/sbin' to '/usr/bin'
INFO: No problems found for '/usr/sbin'
INFO: Migrating files from '/lib' to '/usr/lib'
INFO: No problems found for '/lib'
INFO: Migrating files from '/lib64' to '/usr/lib64'
INFO: No problems found for '/lib64'

So this seems OK? The "awk thing" is a symbolic link anyway:

home01 ~ # ls -l /bin/awk
lrwxrwxrwx 1 root root 14 Dec 31 2022 /bin/awk -> ../usr/bin/awk
home01 ~ # ls -l /usr/bin/awk
lrwxrwxrwx 1 root root 4 Dec 31 2022 /usr/bin/awk -> gawk
home01 ~ # ls -l /usr/bin/gawk
-rwxr-xr-x 1 root root 682216 Feb 10 09:59 /usr/bin/gawk

But the dry run on system 2 (a bit newer than system 1) shows:

INFO: Migrating files from '/bin' to '/usr/bin'
INFO: Skipping symlink '/bin/awk'; '/usr/bin/awk' already exists
INFO: No problems found for '/bin'
INFO: Migrating files from '/sbin' to '/usr/bin'
INFO: No problems found for '/sbin'
INFO: Migrating files from '/usr/sbin' to '/usr/bin'
INFO: No problems found for '/usr/sbin'
INFO: Migrating files from '/lib' to '/usr/lib'
INFO: Skipping symlink '/lib/libcrypt.so.2'; '/usr/lib/libcrypt.so.2' already exists
INFO: No problems found for '/lib'
INFO: Migrating files from '/lib64' to '/usr/lib64'
INFO: Skipping symlink '/lib64/libcrypt.so.2'; '/usr/lib64/libcrypt.so.2' already exists
INFO: No problems found for '/lib64'

Since the messages are "INFO" (and not "WARNING" or "ERROR"), I guess
it's OK, too. But looking for "libcrypt*" is somewhat confusing:

n ~ # ls -l /lib/libcrypt*
lrwxrwxrwx 1 root root 17 Apr 14 23:39 /lib/libcrypt.so.1 -> libcrypt.so.1.1.0
-rwxr-xr-x 1 root root 218416 Apr 14 23:39 /lib/libcrypt.so.1.1.0
lrwxrwxrwx 1 root root 17 Apr 14 23:39 /lib/libcrypt.so.2 -> libcrypt.so.2.0.0
-rwxr-xr-x 1 root root 214320 Apr 14 23:39 /lib/libcrypt.so.2.0.0
n ~ # ls -l /usr/lib/libcrypt*
lrwxrwxrwx 1 root root 27 Apr 14 23:39 /usr/lib/libcrypt.so -> ../../lib/libcrypt.so.2.0.0
lrwxrwxrwx 1 root root 13 Dec 8 2022 /usr/lib/libcrypt.so.2 -> libowcrypt.so
n ~ # ls -l /usr/lib/libowcrypt*
lrwxrwxrwx 1 root root 27 Apr 14 23:39 /usr/lib/libowcrypt.so -> ../../lib/libcrypt.so.2.0.0


On system 1, it's a bit more simple:

home01 ~ # ls -l /lib/libcrypt*
lrwxrwxrwx 1 root root 17 Sep 12 2023 /lib/libcrypt.so.1 -> libcrypt.so.1.1.0
-rwxr-xr-x 1 root root 218368 Sep 12 2023 /lib/libcrypt.so.1.1.0
lrwxrwxrwx 1 root root 17 Sep 12 2023 /lib/libcrypt.so.2 -> libcrypt.so.2.0.0
-rwxr-xr-x 1 root root 218368 Sep 12 2023 /lib/libcrypt.so.2.0.0
home01 ~ # ls -l /usr/lib/libcrypt*
lrwxrwxrwx 1 root root 27 Sep 12 2023 /usr/lib/libcrypt.so -> ../../lib/libcrypt.so.2.0.0


(The same for "lib64" instead of "lib".)

Could the symbolic links from "/usr/lib" to "../../lib" be a kind of
circular references? And what is "libowcrypt" on system 2 - and why
doesn't it appear on system 1?

Is it safe to run "merge-usr" on both systems?


If you only see INFO lines, it's safe to run "merge-usr".
If you see anything else, you need to fix that first.
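
If you want to double-check that the libcrypt chains resolve to a real file 
rather than looping back on themselves, something like this will do it 
(illustration only, using the paths from the listing above):

   ~ # readlink -f /usr/lib/libcrypt.so.2
   ~ # readlink -f /lib64/libcrypt.so.2

readlink -f follows every link in the chain and prints the final target; if 
a chain were truly circular, it would print nothing and exit with an error.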

--
Joost




[gentoo-user] merge-usr and lib[ow]crypt*

2024-04-15 Thread Matthias Hanft
Hi,

after updating the kernels to the latest stable version (6.6.21)
and updating the profiles from 17.1 to 23.0, the last update step
would be "merge-usr" as described at https://wiki.gentoo.org/wiki/Merge-usr
in order to have completely up-to-date systems.

But my two (nearly identical) systems generate different output
when running "merge-usr --dryrun":

Dry run on system 1 (a bit older than system 2) shows:

INFO: Migrating files from '/bin' to '/usr/bin'
INFO: Skipping symlink '/bin/awk'; '/usr/bin/awk' already exists
INFO: No problems found for '/bin'
INFO: Migrating files from '/sbin' to '/usr/bin'
INFO: No problems found for '/sbin'
INFO: Migrating files from '/usr/sbin' to '/usr/bin'
INFO: No problems found for '/usr/sbin'
INFO: Migrating files from '/lib' to '/usr/lib'
INFO: No problems found for '/lib'
INFO: Migrating files from '/lib64' to '/usr/lib64'
INFO: No problems found for '/lib64'

So this seems OK? The "awk thing" is a symbolic link anyway:

home01 ~ # ls -l /bin/awk
lrwxrwxrwx 1 root root 14 Dec 31  2022 /bin/awk -> ../usr/bin/awk
home01 ~ # ls -l /usr/bin/awk
lrwxrwxrwx 1 root root 4 Dec 31  2022 /usr/bin/awk -> gawk
home01 ~ # ls -l /usr/bin/gawk
-rwxr-xr-x 1 root root 682216 Feb 10 09:59 /usr/bin/gawk

But the dry run on system 2 (a bit newer than system 1) shows:

INFO: Migrating files from '/bin' to '/usr/bin'
INFO: Skipping symlink '/bin/awk'; '/usr/bin/awk' already exists
INFO: No problems found for '/bin'
INFO: Migrating files from '/sbin' to '/usr/bin'
INFO: No problems found for '/sbin'
INFO: Migrating files from '/usr/sbin' to '/usr/bin'
INFO: No problems found for '/usr/sbin'
INFO: Migrating files from '/lib' to '/usr/lib'
INFO: Skipping symlink '/lib/libcrypt.so.2'; '/usr/lib/libcrypt.so.2' already exists
INFO: No problems found for '/lib'
INFO: Migrating files from '/lib64' to '/usr/lib64'
INFO: Skipping symlink '/lib64/libcrypt.so.2'; '/usr/lib64/libcrypt.so.2' already exists
INFO: No problems found for '/lib64'

Since the messages are "INFO" (and not "WARNING" or "ERROR"), I guess
it's OK, too. But looking for "libcrypt*" is somewhat confusing:

n ~ # ls -l /lib/libcrypt*
lrwxrwxrwx 1 root root 17 Apr 14 23:39 /lib/libcrypt.so.1 -> libcrypt.so.1.1.0
-rwxr-xr-x 1 root root 218416 Apr 14 23:39 /lib/libcrypt.so.1.1.0
lrwxrwxrwx 1 root root 17 Apr 14 23:39 /lib/libcrypt.so.2 -> libcrypt.so.2.0.0
-rwxr-xr-x 1 root root 214320 Apr 14 23:39 /lib/libcrypt.so.2.0.0
n ~ # ls -l /usr/lib/libcrypt*
lrwxrwxrwx 1 root root 27 Apr 14 23:39 /usr/lib/libcrypt.so -> ../../lib/libcrypt.so.2.0.0
lrwxrwxrwx 1 root root 13 Dec  8  2022 /usr/lib/libcrypt.so.2 -> libowcrypt.so
n ~ # ls -l /usr/lib/libowcrypt*
lrwxrwxrwx 1 root root 27 Apr 14 23:39 /usr/lib/libowcrypt.so -> ../../lib/libcrypt.so.2.0.0

On system 1, it's a bit more simple:

home01 ~ # ls -l /lib/libcrypt*
lrwxrwxrwx 1 root root 17 Sep 12  2023 /lib/libcrypt.so.1 -> libcrypt.so.1.1.0
-rwxr-xr-x 1 root root 218368 Sep 12  2023 /lib/libcrypt.so.1.1.0
lrwxrwxrwx 1 root root 17 Sep 12  2023 /lib/libcrypt.so.2 -> libcrypt.so.2.0.0
-rwxr-xr-x 1 root root 218368 Sep 12  2023 /lib/libcrypt.so.2.0.0
home01 ~ # ls -l /usr/lib/libcrypt*
lrwxrwxrwx 1 root root 27 Sep 12  2023 /usr/lib/libcrypt.so -> ../../lib/libcrypt.so.2.0.0

(The same for "lib64" instead of "lib".)

Could the symbolic links from "/usr/lib" to "../../lib" be a kind of
circular reference? And what is "libowcrypt" on system 2 - and why
doesn't it appear on system 1?

Is it safe to run "merge-usr" on both systems?

Thanks,

-Matt