Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".

2024-09-15 Thread Frank Steinmetzger
Am Sat, Sep 14, 2024 at 02:46:35PM -0500 schrieb Dale:

> I was running the command again and when I was checking on it, it
> stopped with this error. 
> 
> 
> 
>   File "/root/dh", line 1209, in <module>
>     main()
>   File "/root/dh", line 1184, in main
>     directory_hash(dir_path, '', dir_files, checksums)
>   File "/root/dh", line 1007, in directory_hash
>     os.path.basename(old_sums[filename][1])
>  ^^
> KeyError: 'Some Video.mp4'

What was the exact command with which you ran it?
Apparently the directory has a file 'Some Video.mp4', which was not listed 
in an existing checksum file.

I also noticed a problem recently which happens if you give dh a directory 
as argument which has no checksum file in it. Or something like it, I can’t 
reproduce it from memory right now. I have been doing some refactoring 
recently in order to get one-file-per-tree mode working.

> I was doing a second run because I updated some files.  So, it was
> skipping some and creating new for some new ones.  This is the command I
> was running, which may not be the best way. 
> 
> 
> /root/dh -c -f -F 1Checksums.md5 -v

Yeah, using the -c option will clobber any old checksums and re-read all 
files fresh. If you only changed a few files, using the -u option will 
drastically increase speed because only the changed files will be read.
Use the -d option to clean up dangling entries from checksum files.


> Also, what is the best way to handle this type of situation.  Let's say
> I have a set of videos.  Later on I get a better set of videos, higher
> resolution or something.  I copy those to a temporary directory then use
> your dmv script from a while back to replace the old files with the new
> files but with identical names.  Thing is, file is different, sometimes
> a lot different.  What is the best way to get it to update the checksums
> for the changed files?  Is the command above correct? 

dh has some smarts built-in. If you changed a file, then its modification 
timestamp will get updated. When dh runs in -u mode and finds a file 
whose timestamp is newer than its associated checksum file, that means the 
file may have been altered since the creation of that checksum. So dh will 
re-hash the file and replace the checksum in the checksum file.
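The update logic described above can be sketched in a few lines of Python (a minimal illustration with a hypothetical function name, not dh’s actual code):

```python
import os

def needs_rehash(data_file, checksum_file):
    """True if `data_file` was modified after `checksum_file` was last
    written, i.e. its stored hash may be stale and should be redone."""
    if not os.path.exists(checksum_file):
        return True  # nothing recorded yet, must hash
    return os.path.getmtime(data_file) > os.path.getmtime(checksum_file)
```

In -u mode, only files for which this test is true get re-read and re-hashed, which is what makes it so much faster than -c.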


> I'm sometimes pretty good at finding software bugs.  But hey, it just
> makes your software better.  ;-) 

Me too, usually. If it’s not my software, anyways. ^^
But I think you may be the first user of that tool other than me.

-- 
Grüße | Greetings | Salut | Qapla’
Someone who eats oats for 200 years becomes very old.


signature.asc
Description: PGP signature


Re: [gentoo-user] Dolphin confusing different run instances.

2024-09-13 Thread Frank Steinmetzger
Am Fri, Sep 13, 2024 at 01:19:00AM -0500 schrieb Dale:

> >> P. S.  Planning to try that checksum script soon.  It's a large number
> >> of files so it will take a long time to run.  I think you mentioned that
> >> if stopped, it will resume where it left off.
> > Only if it creates checksums, because it knows by the existence of 
> > checksums 
> > where to resume. But if you want to read checksums and verify them, you 
> > need 
> > to use arguments to tell it how many directories to process and how many to 
> > skip at the beginning.
> >
> > Perhaps try it first with a few small directories to get a feel for its 
> > behaviour. The normal way to go is:
> >
> > dh -u [DIR] to create the checksum files
> > dh [DIR] to read it back
> > Use the --skip option to skip the given number of dirs at the beginning.
> >
> > Remember that by default it will not create checksums in directories that 
> > have subdirectories. I know this sounds a little strange, but for a 
> > hierarchy of music albums, this seemed sensible 10 years ago.
> >
> 
> Well, I read through the help page and settled on this.  I might have
> did this wrong.  ;-)
> 
> /root/dh -c -F 1Checksums.md5 -v
> 
> Right now I have the command in /root.  I just did a cd to the parent
> directory I wanted it to work on and then ran that command.  Right now,
> it is working on this bit.
> 
> 
> (dir 141 of 631)
> 
> and
> 
> (file  8079 of 34061)

I am thinking about adding filesize information, but that would require 
updating the status line during the processing of a file instead of only 
between files. That’s not trivial, as it involves timers and threads.
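The timer-and-thread part could look roughly like this: a background thread redraws the status line while the main thread hashes (a simplified sketch, not dh’s actual code):

```python
import hashlib
import threading

def hash_with_progress(path, interval=0.5):
    """Hash a file while a helper thread periodically redraws a status
    line with the number of bytes read so far."""
    done = threading.Event()
    state = {'read': 0}

    def reporter():
        # redraw until the main thread signals completion
        while not done.wait(interval):
            print(f"\r{state['read']} bytes hashed", end='', flush=True)

    thread = threading.Thread(target=reporter, daemon=True)
    thread.start()
    h = hashlib.md5()
    try:
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(1 << 20), b''):
                h.update(chunk)
                state['read'] += len(chunk)
    finally:
        done.set()
        thread.join()
    return h.hexdigest()
```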

> I was wondering tho, is there a way to make it put all the checksum
> files in one place, like a directory call checksums, and they just all
> go in there?

Hm … from an algorithmic point of view it would actually not be that 
complicated: one could derive a shortened filename from the source directory. 
But the real-world use seems a bit far-fetched. Checksums should be close to 
their data. If you have read errors for either, then the other is useless 
anyways. :D
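If one did want such a central checksum directory, the filename flattening could look like this (a hypothetical sketch, not a dh feature):

```python
import hashlib

def flat_checksum_name(directory, maxlen=80):
    """Map a source directory to one file name inside a central
    checksum directory, keeping names readable but unique."""
    safe = directory.strip('/').replace('/', '_')
    if len(safe) <= maxlen:
        return safe + '.md5'
    # overly long path: keep the tail, disambiguate with a short hash
    digest = hashlib.md5(directory.encode()).hexdigest()[:8]
    return safe[-maxlen:] + '-' + digest + '.md5'
```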

> Or just a single file in the parent directory?  That way
> the files aren't in each directory.

That’s what the -s option is for. This will create only one checksum file at 
the root level for each directory argument. So if you run `dh -us foo/ bar/`, 
then it will go into foo/, create one checksum file there and put all lines 
into that one file, even for subdirectories, and do the same in bar/.

However, at the moment automatically detecting and properly verifying those 
files is still in the works. So I think you have to use the -s option or -F 
all to read them.

> Thing is, can I still just run it
> on one directory if I have a suspected bad one?

Not with one checksum file at the root level for an entire tree. The way I 
would handle this case: run dh -u on the directory of interest and then 
compare the checksums in the root-level file and the newly created file with 
a diff tool. Or copy the lines from the existing checksum file, create a new 
file in the directory of interest, remove the directory part of the paths 
and then run dh on just that directory.
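The copy-and-strip approach could be scripted like this (assuming md5sum’s ‘&lt;hash&gt;  &lt;path&gt;’ line format; the helper name is made up):

```python
def extract_subdir_sums(lines, subdir):
    """From the lines of a tree-level checksum file (md5sum format:
    '<hash>  <relative/path>'), keep only the entries under `subdir`
    and strip the directory prefix, so the result can be verified with
    plain `md5sum -c` inside that directory."""
    prefix = subdir.rstrip('/') + '/'
    result = []
    for line in lines:
        digest, sep, path = line.rstrip('\n').partition('  ')
        if path.startswith(prefix):
            result.append(digest + sep + path[len(prefix):])
    return result
```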

-- 
Grüße | Greetings | Salut | Qapla’
I don’t have a problem with alcohol, just without!




Re: [gentoo-user] Dolphin confusing different run instances.

2024-09-12 Thread Frank Steinmetzger
Am Thu, Sep 12, 2024 at 08:53:17AM -0500 schrieb Dale:

> Frank Steinmetzger wrote:
> > Am Thu, Sep 12, 2024 at 07:54:25AM -0500 schrieb Dale:
> >> Howdy,
> >>
> >> I use Dolphin a lot.  I like it and all but recently, it started doing
> >> something that annoys me.  When I'm doing something, I tend to open a
> >> instance of Dolphin for whatever it is I'm doing.  I also leave
> >> instances open and ready for when I do routine things.  Some things I do
> >> so often, I leave them open all the time.  Usually that is four
> >> instances.  If needed, for example when I'm getting videos off trail
> >> cameras, I open another instance until I'm done with that task.  So, I
> >> use Dolphin for different things on different desktops with tabs in
> >> different places.  It just makes things easier, faster and works best
> >> for me. 
> >> […]
> > Dolphin settings, very first page, very first setting: set it to open a 
> > fixed location at startup. Then it will not restore any previous internal 
> > state.
> 
> I saw that setting.  First place I looked.  Thing is, since I didn't
> want it to always start at the same place, I thought that wouldn't
> work.  I thought that no matter what I clicked, it would open at that
> place.  Given you said that would work, I tried it.  I set it to /, or
> root, but if I click on a folder on the desktop, sure enough, it starts
> and opens the folder I clicked on.  Did that a few times just to be
> sure.  LOL  I also plugged in a USB stick, mounted it and then told the
> notification thingy to open in File Manager.  Yep, it opened right where
> it should.  I was looking for a instance setting or something since it
> kept copying other running instances and their tabs.  I wouldn't have
> ever thought to try that setting.
> 
> They might want to explain that setting a little bit.  While I saw it, I
> certainly didn't expect it to behave this way.  I expected it to open at
> that location no matter how Dolphin was started. 

Perhaps it’s actually a bug. Even if Dolphin is supposed to restore a 
previous session, it *should* open the location it is given by parameter.

> P. S.  Planning to try that checksum script soon.  It's a large number
> of files so it will take a long time to run.  I think you mentioned that
> if stopped, it will resume where it left off.

Only if it creates checksums, because it knows by the existence of checksums 
where to resume. But if you want to read checksums and verify them, you need 
to use arguments to tell it how many directories to process and how many to 
skip at the beginning.

Perhaps try it first with a few small directories to get a feel for its 
behaviour. The normal way to go is:

dh -u [DIR] to create the checksum files
dh [DIR] to read it back
Use the --skip option to skip the given number of dirs at the beginning.

Remember that by default it will not create checksums in directories that 
have subdirectories. I know this sounds a little strange, but for a 
hierarchy of music albums, this seemed sensible 10 years ago.
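The leaf-only default can be illustrated with os.walk (a sketch of the idea, not dh’s actual implementation):

```python
import os

def leaf_dirs(root):
    """Yield directories under `root` that contain files but no
    subdirectories -- the only places checksum files are written to
    by default."""
    for dirpath, dirnames, filenames in os.walk(root):
        if not dirnames and filenames:
            yield dirpath
```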

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

Big events may cast their shadow under the eyes.




Re: [gentoo-user] Dolphin confusing different run instances.

2024-09-12 Thread Frank Steinmetzger
Am Thu, Sep 12, 2024 at 07:54:25AM -0500 schrieb Dale:
> Howdy,
> 
> I use Dolphin a lot.  I like it and all but recently, it started doing
> something that annoys me.  When I'm doing something, I tend to open a
> instance of Dolphin for whatever it is I'm doing.  I also leave
> instances open and ready for when I do routine things.  Some things I do
> so often, I leave them open all the time.  Usually that is four
> instances.  If needed, for example when I'm getting videos off trail
> cameras, I open another instance until I'm done with that task.  So, I
> use Dolphin for different things on different desktops with tabs in
> different places.  It just makes things easier, faster and works best
> for me. 
> 
> What I don't like is this, when I open a new instance, it tries to copy
> the last instance I used that is still open.  When I open a new
> instance, I want it to open where I want but not be affected by other
> instances that are running.  Just as a example.  Yesterday I was trying
> to copy videos from my trail cameras to a USB stick while also copying
> and organizing them on my hard drive.  When I put in a USB stick or the
> card from the camera, I click the notification thing and tell it to open
> the USB stick or the card.  Thing is, it tries to copy the instance,
> usually the one I use to watch TV from, which has a lot of open tabs.  I
> have to close all the tabs I don't want to get things like it should be
> to begin with. 

Dolphin settings, very first page, very first setting: set it to open a 
fixed location at startup. Then it will not restore any previous internal 
state.

-- 
Grüße | Greetings | Salut | Qapla’
Abolish Christmas, Joseph confessed everything!




Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".

2024-09-07 Thread Frank Steinmetzger
Am Sat, Sep 07, 2024 at 10:37:04AM +0100 schrieb Michael:
> On Friday 6 September 2024 22:41:33 BST Frank Steinmetzger wrote:

> > > > > Someone more knowledgeable should be able to knock out some clever
> > > > > python
> > > > > script to do the same at speed.
> > 
> > And that is exactly what I have written for myself over the last 11 years. I
> > call it dh (short for dirhash). As I described in the previous mail, I use
> > it to create one hash file per directory. But it also supports one hash
> > file per data file and – a rather new feature – one hash file at the root
> > of a tree. Have a look here: https://github.com/felf/dh
> > Clone the repo or simply download the one file and put it into your path.
> 
> Nice!  I've tested it briefly here.  You've put quite some effort into this.  
> Thank you Frank!
> 
> Probably not your use case, but I wonder how it can be used to compare SOURCE 
> to DESTINATION where SOURCE is the original fs and DESTINATION is some 
> backup, 
> without having to copy over manually all different directory/subdirectory 
> Checksums.md5 files.

When I have this problem, I usually diff the checksum files with mc or vim, 
because I don’t usually have to check many directories and files. You could 
use Krusader, a two-panel file manager. It has a synchronise tool with a 
file filter, so you can synchronise the two sides, comparing file content 
and filtering for *.md5.

> I suppose rsync can be used for the comparison to a backup fs anyway, your 
> script would be duplicating a function unnecessarily.

I believe rsync is capable of syncing only files that match a pattern, but I 
don’t think it is very easy to achieve.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

They say that memory is the second thing to go...
I forgot what the first thing was.




Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".

2024-09-06 Thread Frank Steinmetzger
Am Fri, Sep 06, 2024 at 01:21:20PM +0100 schrieb Michael:

> > > find path-to-directory/ -type f | xargs md5sum > digest.log
> > > 
> > > then to compare with a backup of the same directory you could run:
> > > 
> > > md5sum -c digest.log | grep FAILED

I had a quick look at the manpage: with md5sum --quiet you can omit the grep 
part.
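The quoted find/md5sum workflow can also be expressed directly in Python with hashlib; a rough equivalent (the helper names are made up):

```python
import hashlib
import os

def write_digest(root, digest_path):
    """Hash every file under `root` and write an md5sum-compatible
    digest ('<hash>  <path>' per line), like `find | xargs md5sum`."""
    with open(digest_path, 'w') as out:
        for dirpath, _dirs, files in os.walk(root):
            for name in sorted(files):
                path = os.path.join(dirpath, name)
                h = hashlib.md5()
                with open(path, 'rb') as f:
                    for chunk in iter(lambda: f.read(1 << 20), b''):
                        h.update(chunk)
                out.write(f'{h.hexdigest()}  {path}\n')

def failed_entries(digest_path):
    """Re-hash the listed files and return the paths whose checksum no
    longer matches -- the `md5sum -c digest.log | grep FAILED` part."""
    bad = []
    with open(digest_path) as f:
        for line in f:
            stored, _sep, path = line.rstrip('\n').partition('  ')
            with open(path, 'rb') as data:
                if hashlib.md5(data.read()).hexdigest() != stored:
                    bad.append(path)
    return bad
```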

> > > Someone more knowledgeable should be able to knock out some clever python
> > > script to do the same at speed.

And that is exactly what I have written for myself over the last 11 years. I 
call it dh (short for dirhash). As I described in the previous mail, I use 
it to create one hash file per directory. But it also supports one hash 
file per data file and – a rather new feature – one hash file at the root of 
a tree. Have a look here: https://github.com/felf/dh
Clone the repo or simply download the one file and put it into your path.

> > I'll be honest here, on two points.  I'd really like to be able to do
> > this but I have no idea where to or how to even start.  My setup for
> > series type videos.  In a parent directory, where I'd like a tool to
> > start, is about 600 directories.  On a few occasions, there is another
> > directory inside that one.  That directory under the parent is the name
> > of the series.

By default, my tool ignores directories which have subdirectories. It 
only hashes files in dirs that have no subdirs (leaves in the tree). But 
this can be overridden with the -f option.

My tool also has an option to skip a number of directories and to process 
only a certain number of directories.

> > Sometimes I have a sub directory that has temp files;
> > new files I have yet to rename, considering replacing in the main series
> > directory etc.  I wouldn't mind having a file with a checksum for each
> > video in the top directory, and even one in the sub directory.  As a
> > example.
> > 
> > TV_Series/
> > 
> > ├── 77 Sunset Strip (1958)
> > │   └── torrent
> > ├── Adam-12 (1968)
> > ├── Airwolf (1984)

So with my tool you would do
$ dh -f -F all TV_Series
`-F all` causes a checksum file to be created for each data file.

> > What
> > I'd like, a program that would generate checksums for each file under
> > say 77 Sunset and it could skip or include the directory under it.

Unfortunately I don’t have a skip feature yet that skips specific 
directories. I could add a feature that looks for a marker file and then 
skips that directory (and its subdirs).

> > Might be best if I could switch it on or off.  Obviously, I may not want
> > to do this for my whole system.  I'd like to be able to target
> > directories.  I have another large directory, lets say not a series but
> > sometimes has remakes, that I'd also like to do.  It is kinda set up
> > like the above, parent directory with a directory underneath and on
> > occasion one more under that. 
> 
> As an example, let's assume you have the following fs tree:
> 
> VIDEO
>   ├──TV_Series/
>   |  ├── 77 Sunset Strip (1958)
>   |  │   └── torrent
>   |  ├── Adam-12 (1968)
>   |  ├── Airwolf (1984)
>   |
>   ├──Documentaries
>   ├──Films
>   ├──etc.
> 
> You could run:
> 
> $ find VIDEO -type f | xargs md5sum > digest.log
> 
> The file digest.log will contain md5sum hashes of each of your files within 
> the VIDEO directory and its subdirectories.
> 
> To check if any of these files have changed, become corrupted, etc. you can 
> run:
> 
> $ md5sum -c digest.log | grep FAILED
> 
> If you want to compare the contents of the same VIDEO directory on a back up, 
> you can copy the same digest file with its hashes over to the backup top 
> directory and run again:
> 
> $ md5sum -c digest.log | grep FAILED

My tool does this as well. ;-)
In check mode, it recurses, looks for hash files and if it finds them, 
checks all hashes. There is also an option to check only paths and 
filenames, not hashes. This makes it possible to quickly find files that 
have been renamed or deleted since the hash file was created.
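Such a names-only check boils down to a set comparison between the recorded names and the directory contents (a minimal sketch, not dh’s actual code):

```python
import os

def name_check(directory, listed_names):
    """Compare the file names recorded in a checksum file against the
    directory's actual files, without hashing anything.  Returns
    (missing, untracked): entries whose file was renamed or deleted,
    and files that have no checksum entry yet."""
    actual = {name for name in os.listdir(directory)
              if os.path.isfile(os.path.join(directory, name))}
    listed = set(listed_names)
    return sorted(listed - actual), sorted(actual - listed)
```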

> > One thing I worry about is not just memory problems, drive failure but
> > also just some random error or even bit rot.  Some of these files are
> > rarely changed or even touched.  I'd like a way to detect problems and
> > there may even be a software tool that does this with some setup,
> > reminds me of Kbackup where you can select what to backup or leave out
> > on a directory or even individual file level. 

Well that could be covered with ZFS, especially with a redundant pool so it 
can repair itself. Otherwise it will only identify the bitrot, but not be 
able to fix it.

> > Right now, I suspect my backup copy is likely better than my main copy. 

The problem is: if they differ, how do you know which one is good apart from 
watching one from start to finish? You could use vbindiff to first find the 
part that changed. That will at least tell you where the difference is, so 
you can seek to that position in the video.

> This should work in rsync terms:
> 
> rsync -v --checksum 

Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".

2024-09-05 Thread Frank Steinmetzger
Am Thu, Sep 05, 2024 at 06:30:54AM -0500 schrieb Dale:

> > Use rsync with:
> >
> >  --checksum
> >
> > and
> >
> >  --dry-run 

I suggest calculating a checksum file from your active files. Then you don’t 
have to read the files over and over for each backup iteration you compare 
it against.

> > You can also run find to identify which files were changed during the 
> > period 
> > you were running with the dodgy RAM.  Thankfully you didn't run for too 
> > long 
> > before you spotted it.

This. No need to check everything you ever stored. Just the most recent 
stuff, or at maximum, since you got the new PC.

> I have just shy of 45,000 files in 780 directories or so.  Almost 6,000
> in another.  Some files are small, some are several GBs or so.  Thing
> is, backups go from a single parent directory if you will.  Plus, I'd
> want to compare them all anyway.  Just to be sure.

I acquired the habit of writing checksum files in all my media directories 
such as music albums, tv series and such, whenever I create one such 
directory. That way even years later I can still check whether the files are 
intact. I actually experienced broken music files from time to time (mostly 
on the MicroSD card in my tablet). So with checksum files, I can verify which 
file is bad and which (on another machine) is still good.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

Lettered up the mixes?




Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".

2024-09-05 Thread Frank Steinmetzger
Am Thu, Sep 05, 2024 at 10:36:19AM +0100 schrieb Michael:

> > Maybe that it only catches 1-bit errors, but Dale has more broken bits?
> 
> Or it could be Dale's kit is DDR4?

You may be right. We talked about AM5 at great length during the concept 
phase, and then I think I actually asked, because in one mail he mentioned 
having bought an AM4 CPU (5000 series). :D

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

Damn Chinese keyboald dlivel!




Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".

2024-09-05 Thread Frank Steinmetzger
Am Wed, Sep 04, 2024 at 11:38:01PM +0100 schrieb Michael:

> Some MoBos are more tolerant than others.

> Regarding Dale's question, which has already been answered - yes, anything 
> the 
> bad memory has touched is suspect of corruption.  Without ECC RAM a dodgy 
> module can cause a lot of damage before it is discovered.

Actually I was wondering: DDR5 has built-in ECC. But that’s not the same as the 
server-grade stuff, because it all happens inside the module with no 
communication to the CPU or the OS. So what is the point of it if it still 
causes errors like in Dale’s case?

Maybe that it only catches 1-bit errors, but Dale has more broken bits?

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

Says the zero to the eight: “nice belt”.




Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".

2024-09-04 Thread Frank Steinmetzger
Am Wed, Sep 04, 2024 at 07:09:43PM - schrieb Grant Edwards:

> On 2024-09-04, Dale  wrote:
> 
> > I ordered another set of memory sticks. I figure I will have to send
> > them both back which means no memory at all. I wasn't planning to go to
> > 128GBs yet but guess I am now. [...]
> 
> Good luck.
> 
> […]
> I plugged them in alongside the recently purchased pair. Wouldn't
> work. Either pair of SIMMs worked fine by themselves, but the only way
> I could get both pairs to work together was to drop the clock speed
> down to about a third the speed they were supposed to support.

Indeed that was my first thought when Dale mentioned getting another pair. I 
don’t know if it’s true for all Ryzen chips, but if you use four sticks, 
they may not work at the maximum speed advertised by AMD (not counting 
overclocking). If you keep the settings on Auto you shouldn’t get problems, 
but the RAM may run slower. OTOH, since you don’t do hard-core gaming or 
scientific number-crunching, it is unlikely you will notice a difference in 
your every-day computing.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

How can I know what I’m thinking before I hear what I’m saying?




Re: [gentoo-user] Re: Package compile failures with "internal compiler error: Segmentation fault".

2024-09-04 Thread Frank Steinmetzger
Am Wed, Sep 04, 2024 at 05:48:29AM -0500 schrieb Dale:

> I wonder how much fun getting this memory replaced is going to be.  o_O 

I once had a bad stick of Crucial Ballistix DDR3. I think it also started 
with GCC segfaults. So I took a picture of the failing memtest, e-mailed 
that to Crucial and they sent me instructions on what to do. 

I keep the packaging of all my tech stuff, so I put the sticks into their 
blister pack (I bought them as a kit, so I had to send in both sticks), put a 
paper 
note in for which one was faulty and sent them off to Crucial in Ireland. 
After two weeks or so I got a new kit in the mail. Thankfully by that time I 
had two kits for the maximum of 4 × 8 GiB, so I was able to continue using 
my PC.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

An empty head is easier to nod with.




Re: [gentoo-user] How to setup internal email sending

2024-08-31 Thread Frank Steinmetzger
Am Sat, Aug 31, 2024 at 01:57:25PM -0500 schrieb Dale:

>  I expect any MTA would do the task of sending emails - but since you've
>  installed s-nail check the configuration examples offered here:
> 
>  https://wiki.archlinux.org/title/S-nail
> > Thank you. I will look into this wiki.
> >
> >>>  email seems to be one of the more complicated things to manage in linux.
> > That's why I avoided it so far. :)
> >
> >
> 
> I set this up on my old machine and transferred it over to my new rig. 
> I use mail-mta/ssmtp and it works fine.  I haven't tested it yet on new
> rig but worked on old rig.  Only thing that uses it is SMART for hard
> drives, that I know of anyway.  This is my config file, less comments. 
> /etc/ssmtp/ssmtp.conf
> 
> root=postmaster
> 
> root=rdalek1...@gmail.com  #Change to your preferred email address
>
> […]
> 
> Hope that helps.   Someone else may add to this.  Or correct things. 

Well, the OP wants to have local delivery, not delivery via SMTP to an 
external server. That means the mail is generated by the local service (like 
ZED or smartd), reaches the MTA and that—without any network traffic—puts 
the mail directly into a local folder, where it can be viewed with a reader 
like mutt, or trigger a “you have mail” notification at logon.

I use dma for that. I can’t remember anymore whether I had to set up 
anything specific, but my /etc/dma/dma.conf is completely vanilla and has 
all lines commented out. When I do something like

echo hi | mutt root -s testmail
echo hi | mutt frank -s testmail

I get a new mail in /var/spool/mail/root and /var/spool/mail/frank, 
respectively.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

Humans long for immortality, yet they don’t know what to do on a rainy Sunday.




Re: [gentoo-user] Scripting KDE?

2024-08-18 Thread Frank Steinmetzger
Am Mon, Jul 29, 2024 at 02:34:31PM -0500 schrieb Dale:

> I still don't like this small version of a keyboard.  I like the size of
> the old Dell Quietkey from my old rig.  It's old, PS/2 and all but it
> fits my fingers better.  I can't find a reasonably priced replacement. 
> They all seem to be smaller.  How does one type on these little
> things???  I get lots of typos. 

Well the obvious question is then, why did you buy a small keyboard? :o)
Those key sizes are standardised. AFAIK the standard key raster is 19 mm per 
single-width key. So unless you chose a specifically down-scaled product, it 
should be the same size as it’s ever been. What did you buy anyways?

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

A good pun is its own reword.




Re: [gentoo-user] Scripting KDE?

2024-08-18 Thread Frank Steinmetzger
Am Mon, Jul 29, 2024 at 02:17:13PM +0100 schrieb Peter Humphrey:
> Greetings,
> 
> I'd like to be able to shut a KDE machine down from another room, over SSH. 
> If 
> I do that with a simple 'reboot' command, I lose all my desktop contents. Not 
> surprising, as KDE is not shutting itself down but having the rug yanked out 
> from under it.
> 
> Is there a way to pass a shutdown command to KDE over SSH? Google doesn't 
> help 
> me much, though it has a good deal of stuff on scripting inside KDE.

Process communication in KDE happens with dbus. So whenever you want to 
trigger an action in KDE from the terminal, this is where you should look.

The first two hits when I searched for "kde dbus logout" are:
https://superuser.com/questions/395820/how-to-properly-end-a-kde-session-from-shell-without-root-privileges
https://discuss.kde.org/t/logout-reboot-and-shutdown-using-the-terminal/743

Perhaps they put you on the right track to your goal.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

Suicide is the most honest form of self-criticism.




Re: [gentoo-user] Dolpin and panel won't lock correctly

2024-08-17 Thread Frank Steinmetzger
Am Sat, Aug 17, 2024 at 05:42:28PM -0500 schrieb Dale:

> I knew about the middle button opening in a new tab.  That's a very
> handy feature.  I didn't know about closing tabs and opening a file.  I
> noticed after a recent update, I have to double click everything to open
> it now.  I had it set to open on single click.  I've looked everywhere
> and it seems that setting has disappeared.  It used to be in KDE
> settings and mouse.  Maybe they want us to use the middle button now 

They changed the default for newly set up profiles. But methinks it should 
not have changed your setting. If you go to system settings, there is an 
option “Clicking files or folders” right on the start page.


-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

The first 50% of the project takes the first 90% of the time, and
the other 50% of the project takes the other 90% of the time.
– The Mythical Man-Month




Re: [gentoo-user] Dolpin and panel won't lock correctly

2024-08-17 Thread Frank Steinmetzger
Am Sat, Aug 10, 2024 at 09:52:55AM -0500 schrieb Dale:
> Frank Steinmetzger wrote:
> >
> > You can hold the shift key while turning the wheel, this will cause it to 
> > scroll by page, not by three lines at a time. That’s a very nifty feature 
> > since at least KDE 3 (that was when I started my “Linux career”).
> >
> >
> 
> 
> I am still using this feature.  Also, for others who don't know this, it
> works in the panel on the left but also works in the main part where
> directories/files are listed as well.  Holding that shift key makes
> scrolling a LOT faster. 
> 
> I wish I knew about this nifty feature ages ago.  I wonder, what else
> don't I know

The middle mouse button also has nice features. In the browser it opens 
links in a new tab and middle-clicking on a tab closes it (in Dolphin, too). 
Middle-clicking a file in Dolphin opens it with the first open-with-entry of 
the context menu.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

Death to all fanatics!




Re: [gentoo-user] Dolpin and panel won't lock correctly

2024-08-06 Thread Frank Steinmetzger
Am Tue, Aug 06, 2024 at 06:02:47PM -0500 schrieb Dale:

> > The /mnt or any other /* directories will show up both when you click the / 
> > partition Label on the 'Devices' side panel section and when you click Root 
> > in 
> > the 'Places' section.
> >
> 
> True but I rarely use that panel, pane or whatever it is called.  Once I
> open /home and go a couple layers, the list gets VERY long.  It's so
> long that I tend to use the mouse pointer and the scroll bar to move.  A
> mouse wheel scroll would take a long time.

You can hold the shift key while turning the wheel, this will cause it to 
scroll by page, not by three lines at a time. That’s a very nifty feature 
since at least KDE 3 (that was when I started my “Linux career”).

> > It seems the top directory in the Folders side panel is dynamically 
> > adjusted 
> > when you descend through its subdirectories.  Have you played with:
> >
> > Settings > Configure Dolphin > Startup > Show on startup
> >
> > to see if you can set / in there?  I don't know if this might affect it 
> > permanently, rather than on startup, but it's worth trying.
> >
> > Alternatively, is the dolphin window and size of fonts/icons as large in 
> > this 
> > desktop as it was in your old desktop?
> >
> 
> I tried that too.  It still does the /home/dale thing when I go into my
> home directory.  There has to be a setting somewhere since it works on
> my old rig.  I just can't find it.  I may boot the old rig and compare. 
> Maybe when I enable/disable something some new settings show up somewhere. 

At first I wanted to write that I don’t really believe there is an option for 
this, and that you could simply go home (Alt+Home) and then one level up 
(Alt+Up), so the folder panel goes back to root mode.

And then I had one more look and simply clicked around a bit. I’ve never 
really used the folder panel; I usually stick to the Places panel. I noticed 
a suspicious menu item … and eureka! Just right-click any folder in the 
panel and uncheck “Limit to home directory”.

> If only.  I'd have to connect everything back up to the old rig so I can
> boot.  Right now, nothing plugged into it.  I need to do some things on
> it tho.  Drag out the old keyboard, spare mouse and various other
> things.  Hmm, where am I going to put a monitor tho?

You could try headless with SSH, if it’s just moving files around that needs 
to be done.

> >> Anyone have a idea how to make Dolphin lock the panel at / and it not
> >> switch to /home/dale view??  I'd think locking is enough but maybe the
> >> lock setting isn't working and it is a bug.

The “lock setting” is for locking the sidebar layout, not its content.

> >> P. S.  Liking the new rig.  Sometimes my main monitor blinks off and
> >> back on again.  I can't figure out why yet but I'm working on it. 
> >> Swapped cables so far.  Waiting. Waiting. Waiting.  :-) 
> > Probably some Energy Star® power saving feature on the monitor?  :-/
> 
> 
> Well, it does it when I'm actually doing things like typing a email or
> something.  Plus, most energy saving stuff is disabled.  It has to stay
> on so I can watch TV anyway.

Media players usually inhibit the suspend signal to the monitor anyway.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

What were peanuts used for before there was TV?




Re: [gentoo-user] Web browser issues. Firefox and Seamonkey doesn't work, Chrome does.

2024-08-03 Thread Frank Steinmetzger
Am Sat, Aug 03, 2024 at 12:15:55PM -0500 schrieb Dale:

> >> :-)  :-) 
> > There is Seamonkey documentation, but there are loads of how to's for 
> > Mozilla 
> > products.  If Seamonkey is mostly the same as Firefox/Thunderbird, you can 
> > take a look at the Thunderbird resources to find out how to set up Seamonkey 
> > to 
> > behave as you want it.
> 
> Well, what I'd like to do, install a email program that fetches the
> emails and then stores them on my system.  Then I can have Thunderbird
> or any other email program connect to that and view, create, send or
> whatever emails.

So you want an IMAP server, then?

> Thing is, setting up the first program is complicated.

Indee-diddly-doo.
What I do: sync mail from my main IMAP account to a local maildir structure 
using offlineimap. Then I can access it with mutt or any other program that 
speaks maildir, read it, move it around, delete it. Those actions are 
applied to the server by offlineimap as well. I use mutt for that most of 
the time.

I also use KMail as a graphical mail client, but that is completely separate 
from the offlineimap/mutt setup and it uses its own offline cache.

Sending mail is set up individually in each client (mutt/KMail), and 
they talk to my provider’s SMTP server directly.

This setup has limited flexibility in that you need to sync manually. A 
local imap server would allow for many clients to talk to it at the same 
time in real-time. But I don’t see this as a requirement for you since you 
only have one client, basically.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

Feed your children garlic, then you will find them in the dark.




Re: [gentoo-user] New monitor, new problem. Everything LARGE O_O

2024-07-21 Thread Frank Steinmetzger
Oopsie, I found this mail in my drafts folder just now, where it’s been 
sitting since the ninth. Perhaps I had to pause writing, but now I can’t 
remember anymore. So I’ll just send it off. ;-)


Am Tue, Jul 09, 2024 at 12:02:47AM +0100 schrieb Michael:

> On Monday, 8 July 2024 21:21:19 BST Frank Steinmetzger wrote:
> > Am Mon, Jul 08, 2024 at 06:26:26PM +0100 schrieb Michael:
> > > Back to the previous topic, I have not yet found a case where changing the
> > > scale by means of the desktop settings, arrives at non-blurred fonts.  The
> > > clearest sharpest fonts are always rendered at the native monitor
> > > resolution, at a 100% scale setting.  Am I missing a trick, or is this to
> > > be expected?
> > That doesn’t really make sense. Fonts are always rendered natively, no
> > matter what size. Except if they are really rendered at 100 % and then the
> > rendered bitmap is scaled by the GPU or somesuch.
> > 
> > Or because their hinting information is limited to a certain size range.
> > This info gives the renderer special knowledge on how to render the glyphs.
> > 
> > Do you have screenshots?
> 
> I attach two screenshots one at 100% and one at 90%.  When viewed on the 
> 1366x768 actual monitor they are worse than what the screenshots have 
> captured.  Perhaps I need to take a photo of the monitor.  Anyway, if you 
> view 
> it on a 1920x1080 monitor you should hopefully see the difference.  The font 
> DPI is 96.

I can see it. I use 2560×1440, but viewing an image pixel-perfect is not 
dependent on the screen’s resolution per se, but on it being run at its 
native resolution. So that one pixel in the image is actually displayed by 
one pixel on the screen without any scaling-induced blurring.

I have no real explanation for the fonts. Do they also get blurry at scales 
bigger than 100 %? The only thing I can say is that I use a font setting of 
slight hinting with no RGB subpixel rendering. The latter means that I don’t 
want the coloured fringes, but prefer greyscale aliasing instead. See my 
screenshot. 96 dpi (100 % scaling), main fonts set to 11 pt.

I used to use full hinting in my early (KDE 3) days, which gives me sharp 
1-pixel-lines, because I was used to the crisp look of non-aliased fonts on 
Windows. But for many years now I’ve been using only slight hinting, so the 
font looks more “real-worldy”, natural and not as computer-clean. I think 
that’s something I picked up during the few times I looked at a mac screen 
or screenshot (I’ve never sat at one for a longer time myself).


PS: Do you really still use KDE 4, or is it just Oxygen on Plasma 5? I kept 
using Oxygen icons in Plasma 5, but more and more icons were not updated, so 
I got wrong icons or placeholders, and I bit the bullet and switched to 
Breeze. :-/
On second thought, I think I can answer that myself, because the blurred 
icons give it away. With Plasma 6, the global scaling not only affects fonts 
but also the entire UI. I wish this could be disabled, because that is the 
actual reason why I can’t keep on using a custom DPI setting any longer. The 
UI just becomes ugly with far too much spacing and those blurry icons.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

Imagine it’s spring time and no tree plays along.




Re: [gentoo-user] New monitor, new problem. Everything LARGE O_O

2024-07-08 Thread Frank Steinmetzger
Am Mon, Jul 08, 2024 at 06:26:26PM +0100 schrieb Michael:

> Back to the previous topic, I have not yet found a case where changing the 
> scale by means of the desktop settings, arrives at non-blurred fonts.  The 
> clearest sharpest fonts are always rendered at the native monitor resolution, 
> at a 100% scale setting.  Am I missing a trick, or is this to be expected?

That doesn’t really make sense. Fonts are always rendered natively, no 
matter what size. Except if they are really rendered at 100 % and then the 
rendered bitmap is scaled by the GPU or somesuch.

Or because their hinting information is limited to a certain size range. 
This info gives the renderer special knowledge on how to render the glyphs.

Do you have screenshots?

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

One doesn’t eat salad, one feeds salat to one’s food.




Re: [gentoo-user] New monitor, new problem. Everything LARGE O_O

2024-07-07 Thread Frank Steinmetzger
Am Sun, Jul 07, 2024 at 05:10:18PM -0500 schrieb Dale:

>  It's hi res and a good deal.  :-D 
> >>> Please define hi res. Full HD at 32″ is definitely not hi res. ;-P
> >>> It’s about as much as CRTs back in the day, close to 1024×768 at 17″.
> >> Well, I still consider 1080P hi res.  That's what I get for any monitor
> >> or TV I buy.  The biggest thing I have is a 32" tho.  My rooms are kinda
> >> small.  No need for a 60" TV/monitor. 
> > Well my TV sits over 4 m (that’s 13 feet for the imperialists) away from 
> > the 
> > sofa. So I splurged and got myself a 65″ one.
> 
> Well, I saw on a website once where it gave info on distance, monitor
> size and what you are watching can factor in too.  It claimed that a 32"
> is the ideal size for my room.  Given my old eyes tho, a 42" might serve
> me better.  Thing is, I'm bad to watch old videos from the 80's, 70's
> and even 60's.  Most of those are 480P or if lucky, just a little higher
> resolution.  With those, monitor size can make videos worse.

This website’s goal was probably to cover your eyes’ natural field of 
view. Sitting at my desk, my 27 inch monitor appears only slightly smaller 
than my 65 inch TV 4 m away. Watching 50s TV shows will be the same 
experience on both in those situations.

If you want to fill that entire field of view with details, then naturally, 
a 50s TV show in 480p won’t suffice. The more of your viewing arc you want 
to cover, the more picture resolution you need. You basically want to map a 
certain number of pixels onto each degree of viewing arc. Physical units are great.

It also goes into the other direction: people these days™ watch 4K movies on 
their phones. Why, just why? Even if the screen can display it physically, 
their eyes cannot resolve that fine detail, because the pixels are too small.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

How do you recognise a male hedgehog?
It has one more spine.




Re: [gentoo-user] New monitor, new problem. Everything LARGE O_O

2024-07-07 Thread Frank Steinmetzger
Am Sun, Jul 07, 2024 at 02:06:04PM -0700 schrieb Mark Knecht:
> On Sun, Jul 7, 2024 at 1:09 PM Frank Steinmetzger  wrote:
> >
> > Am Sat, Jul 06, 2024 at 07:32:49PM -0500 schrieb Dale:
> 
> >
> > Well don’t mix up frame rate and scaling. 75 Hz vs. 60 is quite subtle,
> you
> > might not even notice 90 Hz. But changing DPI from 80 to 70 will mean an
> > increase in fonts by 14 %.
> 
> So I understand the 14% calculation, but help me understand the underlying
> technology. Is the DPI how a font file, which I presume is some fixed size,
> like 25x25, gets scaled onto the screen? I'm not clear about the conversion
> from the font to the number of dots used to draw the font on the screen.

Yeah. So, big convoluted topic. ^^

First, there is the physical pixel raster of the screen, which determines 
the PPI value. But here is what often confuses people (it certainly confused 
me in my early computing days on Windows): font sizes and their units. 
People usually think in pixels, but font sizes are given in 
point, especially on modern Linux desktops. Historically, points come from 
lead typesetting, where 1 pt = 1/72 inch. And monitors of early publishing 
machines (and I think at the time in general) all had 72 ppi, so if you have 
a font size of 12 pt == 1/6 in == 4.233 mm on your screen, it will be 
exactly the same size on the printed paper. No scaling necessary.

I forgot some of the minutiae over time; AFAIR Windows 9x+ assumed a standard 
density of 96 ppi and font sizes were set up in pixels in the control panel. 
The monitor market was very homogeneous, there was not much diversity, so no 
need for scaling factors. The default in Windows 2000 and XP was Tahoma at 8 
pixels. And it was the same on Pocket PCs (PDAs with 3″ touch screens of 
240×320). So if you took a screenshot on all of those screens, the font was 
identical to the pixel.

Now comes the clash between the logical and the physical world. Today we have
- high-density screens like tablets and laptops: 4K at 14″ equals 315 ppi
- the standard cheap office screen of 1920×1200 at 24″ equals 94 ppi
- my 8-year-old Thinkpad with FullHD at 12.5″ and 176 ppi

A text of size 12 pixels will always be 12 pixels high, so it will appear 
smaller to the eye when the pixels are small, and bigger when the pixels are 
big.

OTOH, a text at 12 pt should be displayed physically (in millimeters or 
inches on the screen) at the same size no matter how fine a screen resolves 
an image. So the computer needs to know how many pixels it needs to reach 
that size. That’s where the ppi come in:

                      font size in pt
   Number of pixels = --------------- × screen density in pixel/in
                         72 pt/in

The first factor gives you the font’s physical dimension in inch, the second 
factor converts that into pixel height. The units all cancel each other out 
with pixels remaining.

That’s why you can enter the screen’s ppi into the settings (or use it 
automatically, if possible). So the font size you set will be the same to 
your eye no matter what monitor you plug in. The scaling factor business 
hides that: 100 % means 96 ppi, 200 % means 192 ppi.
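The conversion can be sketched in a few lines of Python (a minimal illustration, not code from any actual toolkit):

```python
def pt_to_px(size_pt: float, ppi: float) -> float:
    """Convert a font size in points to a pixel height for a given screen.

    1 pt = 1/72 inch, so size_pt / 72 is the physical height in inches;
    multiplying by the screen density (pixels per inch) yields pixels.
    """
    return size_pt * ppi / 72

# The same 12 pt font needs more pixels on denser screens:
print(pt_to_px(12, 96))   # 100 % scaling: 16.0 px
print(pt_to_px(12, 192))  # 200 % scaling: 32.0 px
print(pt_to_px(12, 117))  # an in-between density: 19.5 px
```

The “scaling factor” UIs expose is just this density expressed relative to the 96 ppi baseline.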

This produces two “Unfortunately”s:

Unfortunately 1: people don’t know what the scaling means and how it works 
physically.

Unfortunately 2: UI developers stick to this scaling factor idea. Everything 
outside certain values (meaning integer multiples of 96) looks ugly. But 
none of my screens have a ppi of n * 96. They are all inbetween (117, 176, 
216) and when I set the correct scaling, the Plasma UI becomes ugly as hell 
because the previously nice-looking pixel-perfect lines become blurred or 
their thickness varies depending on where on the screen they are drawn.


> > I’m confused. I thought the new one has already arrived and is the one
> where everything was HUGE. %-)
> 
> Dale does this at times and I get confused also. He will (the way I read the
> messages) sometimes be talking about different machines or different
> monitors. His 'main rig", his "new rig", etc.

We could stick to hostnames. *ducksandruns*

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

It’s a pity that at the end of the money there’s so much month left.




Re: [gentoo-user] New monitor, new problem. Everything LARGE O_O

2024-07-07 Thread Frank Steinmetzger
Am Sun, Jul 07, 2024 at 04:12:11PM -0500 schrieb Dale:

> >> It's hi res and a good deal.  :-D 
> > Please define hi res. Full HD at 32″ is definitely not hi res. ;-P
> > It’s about as much as CRTs back in the day, close to 1024×768 at 17″.
> 
> Well, I still consider 1080P hi res.  That's what I get for any monitor
> or TV I buy.  The biggest thing I have is a 32" tho.  My rooms are kinda
> small.  No need for a 60" TV/monitor. 

Well my TV sits over 4 m (that’s 13 feet for the imperialists) away from the 
sofa. So I splurged and got myself a 65″ one.

> >> Now to go eat supper.  I hope the monitor comes in soon.
> > I’m confused. I thought the new one has already arrived and is the one 
> > where 
> > everything was HUGE. %-)
> 
> I ordered a second identical monitor.  I been wanting two monitors for a
> while.  On occasion when I have hundreds of files to process manually, I
> need a second monitor just to stick a file manager on and drag files
> from one directory to another but being able to see both at the same
> time.

I’ve never grown accustomed to multi-monitor setups. I’ve always used just 
one (a habit from my long laptop days). Instead I multitask with virtual 
desktops (as you do) and with terminal multiplexers.

At 70 DPI, I recommend the Terminus font, a bitmap font which is very 
readable at small sizes and allows you to get lots of information on the 
screen.

>   A second monitor will help with this.  Plus, I have a spare as
> well.  So, first monitor is here and fixed a lot of problems except it
> added a new one, being HUGE.  Now that is fixed as well.  When I connect
> the second monitor, I should be able to set it up the same way except
> connected to a different port.

Considering that you have a thread about GPU temps going, be warned: GPUs 
tend to suck a lot more power when running in multi-head setups.

> The biggest thing I dread right now, cleaning off my desk.  -_o

A clean desk is just a sign for a messy drawer.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

“A computer is like air conditioning:
it becomes useless when you open Windows.” – Linus Torvalds




Re: [gentoo-user] New monitor, new problem. Everything LARGE O_O

2024-07-07 Thread Frank Steinmetzger
Am Sat, Jul 06, 2024 at 07:32:49PM -0500 schrieb Dale:

> Michael wrote:
> > On Saturday, 6 July 2024 17:11:23 BST Dale wrote:
> >> Michael wrote:
> >>> On Saturday, 6 July 2024 10:59:30 BST Dale wrote:

>  Now the monitor on my main rig is a bit older too.  Maybe 6 or 7
>  years???  Should newer monitors be set to a higher number for DPI?

DPI does not depend on age, but only on physical characteristics, of course. 

> >>> Strictly speaking, the pixel density of an on-screen digital image is
> >>> referred to as Pixels Per Inch (PPI), but the term DPI which refers to a
> >>> printed image of ink Dots Per Inch has stuck.
> >>>
> >>> In addition, there is the physical pixel density of your monitor and the
> >>> rendered pixel density of the X11 image(s).  Tweaking the latter allows
> >>> you to scale the display and make images look larger than the native
> >>> monitor resolution.

> > Is this your monitor?
> >
> > https://www.samsung.com/us/business/computing/monitors/flat/32--s30b-fhd-75hz-amd-freesync-monitor-ls32b300nwnxgo/#specs
> >
> > If the screen is 27.5" wide and 15.47 high, then at a native 1,920 x 1,080 
> > pixel resolution the DPI would be approx. 70x70.  However, if you're happy 
> > with the way it looks @80x80, then that's a good setting.  After all, 
> > you're 
> > the one looking at it!  :-)
> 
> 
> Actually, mine is a LS32B304NWN.  I'm not sure what the difference is
> between 300 and 304.  There may just be a minor version change but
> display is the same.

If I look at the Samsung pages:
https://www.samsung.com/us/computing/monitors/flat/32--s30b-fhd-75hz-amd-freesync-monitor-ls32b300nwnxgo/
https://www.samsung.com/us/business/computing/monitors/flat/32--s30b-series-with-dp-cable-ls32b304nwnxgo/
then the difference is in the caption: the 304 comes with a DP cable.

> It's hi res and a good deal.  :-D 

Please define hi res. Full HD at 32″ is definitely not hi res. ;-P
It’s about as much as CRTs back in the day, close to 1024×768 at 17″.

> Compared to the HUGE display, yea, it looks good.  The reason I was
> asking if that is correct is this, maybe it should be set to, just
> guessing, 128 x 128 but some other setting makes the picture the right
> size, not HUGE.  If 70 x 70 or 80 x 80 is a setting that the monitor is
> designed for and ideal, then that is fine.

Well technically, a monitor is not designed *for* a specific DPI but *with* 
one: it is determined by the size of its physical pixels.

> Monitors, even the old CRTs, have resolutions and settings they work
> best at.

True, at bigger pictures (meaning more pixels), the frame rate went down and 
the CRT started to visibly flicker. So the sweet spot was at the highest 
resolution for which a comfortably high framerate could be maintained. I was 
too young in the CRT era to know the exact reason, but there are many to 
choose from:
- insufficient GPU power to deliver enough pixels per second
- limited bandwidth in the display cable
- the monitor couldn’t keep up
- the CRT’s pixel pitch in the phosphor screen

> I read once where a
> person had a monitor that had a great picture at 60Hz refresh.  Even tho
> it would work at 75Hz, the picture wasn't as good.  It seems that
> something didn't like that 75Hz setting.  That person used the 60Hz
> setting.  Some things are picky that way.  Higher isn't always better.

How long ago was that? If it was in the VGA era, maybe the analog circuits 
weren’t good enough and produced a bad signal.

> I may try that 70 setting.  Odds are, just like the difference between
> 60 and 75Hz refresh rate, I likely won't be able to tell the
> difference.  Time will tell tho. 

Well don’t mix up frame rate and scaling. 75 Hz vs. 60 is quite subtle; you 
might not even notice 90 Hz. But changing DPI from 80 to 70 will mean an 
increase in font size by 14 %.
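For the record, the 14 % comes from the ratio of the two density settings (simple arithmetic, shown here in Python):

```python
# At the same point size, rendered glyphs change in size by the
# ratio of the two DPI settings:
old_dpi, new_dpi = 80, 70
print(f"{old_dpi / new_dpi - 1:.1%}")  # prints "14.3%"
```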

> By the way, I booted the rig up when I went to heat up supper and was
> downloading new messages.  It booted to a normal screen.  I think it is
> at least being consistent now.  Before, it was hit or miss, mostly
> miss.  Given how good things are at just working, I'm surprised that the
> correct setting wasn't used automatically.  I'd think it should be. 
> Maybe that is a bug 
> 
> Now to go eat supper.  I hope the monitor comes in soon.

I’m confused. I thought the new one has already arrived and is the one where 
everything was HUGE. %-)

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

People who are not convex with foreign words should not renovate with them.




Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-16 Thread Frank Steinmetzger
Am Sat, Jun 15, 2024 at 04:07:28PM -0700 schrieb Mark Knecht:

>Now, the fun part. I wrote you a little Python program which on
> my system is called Dales_Loop.py. This program has 3
> parameters - a value to count to, the number of cores to be used,
> and a timeout value to stop the program. Using a program like
> this can give you repeatable results.

FYI, there is a problem with your approach: Python is not capable of true 
multithreading. While you can have multiple threads in your program, in 
the end they are executed by a single interpreter in a time-sharing manner.

This problem is known as the GIL—the Global Interpreter Lock. Unless you use 
an external program (or the multiprocessing module) to do the actual CPU 
work, i.e. let the Linux kernel do the actual parallelism and not Python, 
your program is not faster than doing everything in a single loop.

See this page for a nice example which does basically the same as your 
program (heading “The Impact on Multi-Threaded Python Programs”), including 
some comparative benchmarking between single loop and threaded loops:
https://realpython.com/python-gil/
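The difference is easy to demonstrate with the standard library alone (a rough benchmark sketch, assuming a multi-core machine; with a CPU-bound task, threads serialize on the GIL while processes run truly in parallel):

```python
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def countdown(n: int) -> None:
    # Pure-Python busy loop: CPU-bound, never releases the GIL voluntarily.
    while n > 0:
        n -= 1

def timed(executor_cls, n: int = 2_000_000, workers: int = 4) -> float:
    # Run `workers` identical CPU-bound tasks and time the whole batch.
    start = time.perf_counter()
    with executor_cls(max_workers=workers) as ex:
        list(ex.map(countdown, [n] * workers))
    return time.perf_counter() - start

if __name__ == "__main__":
    # Threads take roughly as long as one loop doing all the work;
    # processes divide it across cores.
    print("threads:  ", timed(ThreadPoolExecutor))
    print("processes:", timed(ProcessPoolExecutor))
```

(The `__main__` guard is required so that spawned worker processes can re-import the module safely.)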

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

At night, when everybody sleeps, usually there’s nobody awake.




Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-10 Thread Frank Steinmetzger
Am Sun, Jun 09, 2024 at 05:13:46PM -0500 schrieb Dale:

> Frank Steinmetzger wrote:
> > Am Samstag, 8. Juni 2024, 19:46:05 MESZ schrieb Dale:
> >
> >>>> Sadly, the CPU I got is for processing only, no video support it says.
> >>> So you got an F model?
> >> I got the X model.  It's supposed to be a wttle bit faster.  o_O
> > Well as we have been mentioning several times by now: starting with AM5 
> > CPUs, 
> > the X models all have integrated graphics. Where does what say “no video 
> > support” and in which context?
> >
> 
> 
> I found it on the AMD website.  The CPU I got is a AM4.  Linky.
> 
> https://www.amd.com/en/products/processors/desktops/ryzen/5000-series/amd-ryzen-7-5800x.html
> 
> From that:
> 
> Graphics Model        Discrete Graphics Card Required
> 
> I think the G model has graphics, but gives up a little speed.  Or as I
> put it above, a wiiile bit.  LOL 

Okay, are you sure? The 5800X is an AM4 CPU. Up until now in this thread, 
you were talking of AM5 (i.e. a 7000-series CPU with a 600-series mobo 
chipset and DDR5). The 5800X does not fit into this at all. Just sayin’.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

Sugar is what gives coffee a sour taste, if you forget to put it in.




Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-09 Thread Frank Steinmetzger
Am Samstag, 8. Juni 2024, 15:40:48 MESZ schrieb Meowie Gamer:

> vim has a WHAT?! You gotta tell me how to use that.

Digraphs are graphs (i.e. characters) that are entered using two other 
characters. Basically it’s the same principle as the X11 compose key, but 
specific to vim. If you enter :dig[raph], you get a list of all defined such 
digraphs. The output of the ga command (print ascii info) includes the digraph 
combo, if one exists for the highlighted character. The unicode plugin behaves 
similarly. You can also define your own.

The feature is used in insert mode and triggered with Ctrl-K, after which 
you press the two characters.

For example, there are predefined digraphs for Cyrillic, Greek and Japanese 
Hiragana and Katakana. And you can paint boxes easily thanks to the mnemonics 
involved. A lowercase character denotes a thin line, an uppercase a thick 
line. The characters themselves signify the “direction” of the line; u=up, 
r=right, l=left, d=down, v=vertical, h=horizontal. So to paint a thin vertical 
line with a thick line branching to the right, you press vR: ┝.

See :help dig for more.
-- 
“Privacy laws are our biggest impediment to us obtaining our
objectives.” — Michael Eisner, CEO of Disney, 2001





Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-09 Thread Frank Steinmetzger
Am Samstag, 8. Juni 2024, 19:46:05 MESZ schrieb Dale:

> >> Sadly, the CPU I got is for processing only, no video support it says.
> > 
> > So you got an F model?
> 
> I got the X model.  It's supposed to be a wttle bit faster.  o_O

Well as we have been mentioning several times by now: starting with AM5 CPUs, 
the X models all have integrated graphics. Where does what say “no video 
support” and in which context?

-- 
An empty head is easier to nod with.






Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-08 Thread Frank Steinmetzger
Am Samstag, 8. Juni 2024, 04:05:17 MESZ schrieb Dale:


> > DisplayPort supports daisy-chaining. So if you do get another monitor some
> > day, look for one that has this feature and you can drive two monitors
> > with
> > one port on the PC.
> 
> That's something I didn't know.  I wondered why they had that when a
> HDMI port is about the same size and can handle about the same
> resolution.  It has abilities HDMI doesn't.  Neat.  :-D

Polemically speaking, HDMI is designed for the concerns of the MAFIA (Music 
and Film Industry of America) with stuff like DRM. DisplayPort is technically 
the better protocol, for example with more bandwidth and it is open. There was 
news recently that the HDMI forum would not allow AMD to implement HDMI 2.1 in 
its open source driver, which means no 4K 120 Hz for Linux users.

> Sadly, the CPU I got is for processing only, no video support it says.

So you got an F model?

> > But what I also just remembered: only the ×16 GPU slot and the primary M.2
> > slots (which are often one gen faster than the other M.2 slots) are
> > connected to the CPU via dedicated links. All other PCIe slots are behind
> > the chipset. And that in turn is connected to the CPU via a PCIe 4.0×4
> > link. This is probably the technical reason why there are so few boards
> > with slots wider than ×4 – there is just no way to make use of them,
> > because they all most go through that ×4 bottleneck to the CPU.
> > 
> > ┌───┐ 5.0×4 ┌───┐ 4.0×4 ┌─────────┐   ┌───┐
> > │M.2┝=======┥CPU┝━━━━━━━┥ Chipset ┝━━━┥M.2│
> > └───┘       └─┰─┘       └─┰─────┰─┘   └───┘
> >        5.0×16 ┃           ┃     ┃
> >             ┌─┸─┐     ┌───┸──┐ ┌┸─────┐
> >             │GPU│     │PCIe 1│ │PCIe 2│
> >             └───┘     └──────┘ └──────┘
> > […]
> 
> Nice block diagram.  You use software to make that?

Yes, vim’s builtin digraph feature. O:-)

-- 
Team work:
Everyone does what he wants, nobody does what he should, and all play along.


Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-06 Thread Frank Steinmetzger
Am Tue, Jun 04, 2024 at 05:49:31AM -0400 schrieb Rich Freeman:

> On Tue, Jun 4, 2024 at 2:44 AM Dale  wrote:
> >
> > I did some more digging.  It seems that all the LSI SAS cards I found
> > need a PCIe x8 slot.  The only slot available is the one intended for
> > video.
> 
> The board you linked has 2 4x slots that are physically 16x, so the
> card should work fine in those, just at 4x speed.

I can never remember the available throughput for each generation. So I 
think about my own board: it has a 2.0×2 NVMe slot that gives me 1 GB/s 
theoretical bandwidth. So if you have 3.0×4, that is twice the lanes and 
twice the BW/lane, which yields 4 GB/s gross throughput. If you attach 
spinning rust to that, you’d need around 15 to 20 HDDs to saturate that 
link. So I wouldn’t worry too much about underperformance.
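That back-of-the-envelope math can be written down once and for all (ballpark per-lane figures after encoding overhead; the helper function is just an illustration):

```python
# Approximate usable throughput per PCIe lane in GB/s, per generation
# (2.0 uses 8b/10b encoding, 3.0+ use 128b/130b — values are ballpark).
PER_LANE_GBS = {2.0: 0.5, 3.0: 0.985, 4.0: 1.97, 5.0: 3.94}

def link_throughput(gen: float, lanes: int) -> float:
    return PER_LANE_GBS[gen] * lanes

print(link_throughput(2.0, 2))   # the old 2.0×2 M.2 slot: 1.0 GB/s
print(link_throughput(3.0, 4))   # a 3.0×4 link: ~3.9 GB/s
# At ~250 MB/s per HDD, a 3.0×4 link takes roughly 16 drives to saturate:
print(round(link_throughput(3.0, 4) / 0.25))
```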

> > I'd rather not
> > use it on the new build because I've thought about having another
> > monitor added for desktop use so I would need three ports at least.

DisplayPort supports daisy-chaining. So if you do get another monitor some 
day, look for one that has this feature and you can drive two monitors with 
one port on the PC.

> > The little SATA controllers I currently use tend to only need PCIe x1.
> > That is slower but at least it works.

PCIe 3.0×1 is still fast enough for four HDDs at full speed. You may get 
saturation at the outermost tracks, but how often does that happen anyway? 
I can think of two cases that produce enough I/O for that:
- copy stuff from one internal RAID to another
  (you use LVM, does that support striping to distribute I/O?)
- a RAID scrub

Everything else involves two disks at most—when you copy stuff from one to 
another. Getting data into the system is limited by the network which is far 
slower than PCIe. And a full SMART test does not use the data bus at all.


But what I also just remembered: only the ×16 GPU slot and the primary M.2 
slots (which are often one gen faster than the other M.2 slots) are 
connected to the CPU via dedicated links. All other PCIe slots are behind 
the chipset. And that in turn is connected to the CPU via a PCIe 4.0×4 link. 
This is probably the technical reason why there are so few boards with slots 
wider than ×4 – there is just no way to make use of them, because they would 
all have to go through that ×4 bottleneck to the CPU.

┌───┐ 5.0×4 ┌───┐ 4.0×4 ┌─────────┐   ┌───┐
│M.2┝=======┥CPU┝━━━━━━━┥ Chipset ┝━━━┥M.2│
└───┘       └─┰─┘       └─┰─────┰─┘   └───┘
       5.0×16 ┃           ┃     ┃
            ┌─┸─┐     ┌───┸──┐ ┌┸─────┐
            │GPU│     │PCIe 1│ │PCIe 2│
            └───┘     └──────┘ └──────┘

Here are block diagrams of AM5 B- and X-chipsets and a more verbose 
explanation:
https://www.anandtech.com/show/17585/amd-zen-4-ryzen-9-7950x-and-ryzen-5-7600x-review-retaking-the-high-end/4

Theoretically, the PCIe controller in the CPU has the ability to split up 
the ×16 GPU link into 2×8 and other subdivisions, but that would cripple the 
GPU, which is the normal use case for such mobos, so the feature is very 
seldom found.

If I look at all available AM5 mobos that have at least two ×8 slots, there 
are just seven out of 126: https://skinflint.co.uk/?cat=mbam5&xf=19227_2
You can also use the filter to look for boards with 3 ×4 slots.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

“If I could explain it to the average person, I wouldn't have been worth
the Nobel Prize.” – Richard Feynman





Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-03 Thread Frank Steinmetzger
Am Sun, Jun 02, 2024 at 08:27:57AM -0500 schrieb Dale:

> I thought of something on the m.2 thing.  I plan to put my OS on it.  I
> usually use tmpfs and compile in memory anyway but do have some set to
> use spinning rust. Once I get 128GB installed, I should be able to do
> that with all packages anyway but still, I had a question.  Should I put
> the portage work directory on a spinning rust drive to save wear and
> tear on the SSD or have they got to the point now that doesn't matter
> anymore?  I know all the SSD devices have improved a lot since the first
> ones came out. 

We’ve had this topic before. You can do some archaeology with dumpe2fs and 
extrapolate:

$ dumpe2fs -h /dev/mapper/vg-root
...
Filesystem created:   Sun Apr 17 16:47:03 2022
...
Lifetime writes:  877 GB


So that’s around 900 GB in 2 years. This is an Arch system, so it may not 
experience quite as many writes from updates (especially not from any 
on-disk emerging), but Arch does have its own share of volume-heavy 
upgrades. Just today, after being away on travel for 11 days, I had to 
download 2.5 GB and unpack over 8 GB of files.
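The extrapolation can be scripted; a rough sketch using the two values from the dumpe2fs output above (GNU date assumed):

```shell
# Estimate the yearly write volume from dumpe2fs' "Filesystem created"
# and "Lifetime writes" fields (values copied from the output above).
created=$(date -d 'Sun Apr 17 16:47:03 2022' +%s)
now=$(date +%s)
days=$(( (now - created) / 86400 ))
writes_gb=877
echo "~$(( writes_gb * 365 / days )) GB written per year"
```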


My home partition has accumulated 2600 GB in the same time. Firstly, it’s 
200 GB in size vs. 45 GB for the root system. And secondly, sometimes the 
baloo file extractor runs amok and keeps writing gigabytes of index files. 
It’s an Evo 970 Plus 2 TB, so I just scratched its guaranteed lifetime write 
amount.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

We don’t put a single drop of alcohol on the table...
we pour very cautiously.


signature.asc
Description: PGP signature


Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-03 Thread Frank Steinmetzger
Am Sun, Jun 02, 2024 at 12:38:13AM -0500 schrieb Dale:

> I'll also max the memory out too.  I'm
> unclear on the max memory tho.  One place shows 128GB, hence two 32GB
> sticks.  The out of stock Newegg one claims 256GB, which would be nice. 
> I'm not sure what to think on memory.  Anyway.  If the thing is fast
> enough, I may do the memory first then CPU later.  If I need a faster
> CPU, I may do it first then the memory.

One interesting fact: four sticks run slower than two sticks. I don’t 
remember the exact technical reason, but it is so. Two sticks can run at the 
maximum stock speed (i.e. without overclocking profiles, which is 5200 MT/s 
for the 7600X’s memory controller). But four sticks are clocked lower.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

It’s the same inside as it is outside, just different.


signature.asc
Description: PGP signature


Re: [gentoo-user] Graphics configuration for a Ryzen 7 7700X chip and water cooling.

2024-05-16 Thread Frank Steinmetzger
Am Wed, May 15, 2024 at 07:08:11PM +0100 schrieb Michael:
> Hi Alan,
> 
> On Wednesday, 15 May 2024 15:23:47 BST Alan Mackenzie wrote:
> > Hello, Gentoo.
> > […]
> > So I'm looking at getting an AMD Ryzen 7 7700X processor, and using its
> > inbuilt graphics rather than buying a distinct graphics card.
> > 
> […]
> > As a somewhat tangential question, would it be worthwhile getting water
> > cooling in my new machine?  In particular, to reduce the noise it gives
> > off while building large packages such as clang and rust?  Or is water
> > cooling only sensible for really heavy users such as gamers?
> > 
> > Thanks for the upcoming answers!
> 
> WC will be quieter and more expensive than an after market air cooler.

Are you sure about the noise? First, there is the water pump, and second, 
the heat from the water loop needs to get somewhere, which is done with fans.
So unless you get a big radiator with several fans, you just relocate the 
fan noise inside the case.

I have a 10-year-old i5 with a TDP of, I think, 84 W. On it sits a normal 
(not even high-performance) tower cooler with a single 120 mm fan. At full 
load the CPU draws around 50 W, maybe even less unless you run prime95. So my 
cooler is basically overkill. But this allows the fan to never leave its 
minimum RPM range of ~500–600 rpm, staying inaudible even at full load.
However …

> You could invest the money toward more RAM, (more/bigger) case fans, a 
> better PSU, monitor, speakers, a new car, etc.  :-)
> 
> https://www.techreviewer.com/tech-specs/amd-7700x-tdp/
> 
> Cranking up 16 threads to 5.4 GHz will produce some heat, but compiles will 
> complete sooner too.

… the 7000X chips are hotheads, because they operate way above the efficiency 
sweet spot just to get the longest bar in benchmark diagrams. If you reduce 
the power target¹ in the BIOS, you lose a few percent in performance, but 
get a disproportionately bigger reduction in energy consumption.

¹ The TDP of a 7700X is 105 W. The maximum permanent power draw is TDP * 1.4 
(ish, can’t remember the exact details right now). So if you reduce the 
target to 84 W, you draw a little over 100 W. That’s easy-peasy for a normal 
120 mm tower cooler. One additional advantage of an air cooler is that it 
also blows air over your mainboard and its power stages. That’s something 
you don’t get with a water loop and need an extra case fan for—IF you keep 
the CPU on high load all the time which causes more heat buildup in the VRMs.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

The perfect diet: no breakfast in the morning,
in return forego pudding at lunch and then go to bed without dinner.


signature.asc
Description: PGP signature


Re: [gentoo-user] Encrypted drives, password generation and management howto, guide.

2024-05-14 Thread Frank Steinmetzger
Am Tue, May 14, 2024 at 06:28:17AM -0500 schrieb Dale:
> Howdy,
> […]
> remember either, or write notes to remember them.  I also wanted to
> avoid the desktop copy and paste, or clipboard, mechanism.  I'm not sure
> how that data is stored in the clipboard and how good it is at erasing
> it when I clear it.

The mark-and-middleclick you describe further down is the very same as the 
“normal” clipboard. It is just accessed differently.

> First, I needed to generate a password.  I googled, a lot.  I had
> trouble finding a way to generate the type of passwords I wanted but I
> finally found one.

Care to elaborate regarding the “password you wanted”? There is the obvious 
pwgen, which can generate passwords with given character sets and length. 
Keepass can do this, too, so I assume Bitwarden (which you use) has a 
similar function.

And if you don’t like parts of the generated PW, keep the part you like, 
generate a new one and pick again. Or just let pwgen generate a big bunch 
and pick what you like best from the output.

> […]
> Now that I have a password, how do I keep track of them?  I did some
> more searching.  I wanted something that was command line not GUI. 
> After all, I have BitWarden for websites and such already.  Thing is,
> it's GUI since it is a Firefox add-on.  I'd need to use the clipboard to
> copy and paste.  I want to avoid that remember?  I also wanted something
> that is on its own, separate from my main password tool BitWarden.  I
> found kpcli in the tree.

I didn’t know about kpcli and it is not available in Arch. So I looked it 
up. Turns out it is a non-graphical Keepass client (that’s what the kp 
stands for, after all).

Interestingly, there is also a bitwarden CLI client.

Did you know Keepass (the graphical one) has an autotype feature? This means 
that it simulates the pressing of keys, so it bypasses the clipboard 
entirely. Another advantage of that is that you can set up custom key 
sequences in the autotype field, so you can for example say “first enter the 
username, then press enter, then wait for a second, then enter the password 
and press enter again.” Useful for sites that use a dynamic login screen 
with animations or non-standard input fields.

> Then I needed some way to handle if the password file kpcli uses got
> lost or damaged.  If I were to lose that file, all drives and the data
> on them is lost.  I'd lose everything because there is no way to
> remember the password.

The obvious answer is: backup – encrypted or not. ;-)
My Keepass database is a simple file in my home that is backed up together 
with all the other home files by Borg. Meaning I even have a versioned 
backup of my passwords. Needless to say my backup drives are LUKSed with a 
long passphrase that I have never ever once written down anywhere on paper. 
I’ve been using it for so long now and on several drives, that it is 
ingrained in my brain.

> The kpcli file itself appears to be encrypted. 
> So, it protects itself.  That's good.  I don't need to put the file on
> something that is also encrypted, just copy it to a plain file system as
> it is.  I have a USB stick that I store things on.  Things like drive
> info, what drives go to what volume group, what drive has the OS on it
> etc and the portage world file on it.  I also have some scripts in /root
> that I don't want to lose either so I copy them to the stick as well. 

Be mindful that USB sticks aren’t very reliable. The flash chips in them are 
what is left after quality control deemed them unfit for duty in SSDs (first 
tier) and memory cards (second tier). So always keep several copies, 
possibly on different types of storage media (HDDs, SSDs, optical, whatever).

> Then one important file, my file that contains frequently used
> commands.  It is rather lengthy and is 15 years or more of additions.  I
> copied all that info to a USB stick.  It lives in the fire safe.

TBH, I wouldn’t put all my horses on one USB stick in a fire safe. (Or 
however the saying goes) After a flimsy USB stick with questionable flash 
chips has been subjected to high temperatures for a longer time, chances are 
you may not be able to access its data ever again.

> How I use all this.  I do this in a Konsole, within KDE, which has
> tabs.  Might work on a plain console to tho.  If I need to open a
> encrypted drive, or set of drives, I open kpcli and get it to show the
> password for that drive in one tab.  I then run the little script to
> open and mount that drive in another tab.  When it asks for the
> password, I highlight the password from kpcli tab and then switch tabs
> and middle click to paste the password in.

Since you’ve already scripted most of it, you could possibly go the whole 
way. Use the HDD’s UUID as a key and either store the password in a file 
that is named after the UUID, or in Keepass with the UUID as the entry 
title. Then the script can retrieve the password all by itself without any 
need for copy-pasting – 
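A minimal sketch of that idea, assuming the passwords live in plain files named after the partition UUID (the key directory, device name, and mount point are invented examples; a Keepass lookup via kpcli could replace the cat):

```shell
#!/bin/sh
# Hypothetical sketch: open and mount a LUKS drive with a password that
# is looked up by the partition's UUID. /root/keys is an invented path.
open_by_uuid() {
    dev=$1
    uuid=$(blkid -s UUID -o value "$dev") || return 1
    passfile="/root/keys/$uuid.pass"
    # --key-file=- reads the passphrase from stdin; printf '%s' avoids
    # feeding cryptsetup a trailing newline that is not part of the password
    printf '%s' "$(cat "$passfile")" | \
        cryptsetup open --key-file=- "$dev" "crypt-$uuid" &&
    mount "/dev/mapper/crypt-$uuid" "/mnt/crypt-$uuid"
}
# Usage (as root): open_by_uuid /dev/sdb1
```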

Re: [gentoo-user] cross-compiling environment for Rapberry

2024-05-01 Thread Frank Steinmetzger
Am Mon, Apr 29, 2024 at 11:26:49PM +0200 schrieb ralfconn:
> Hello,
> 
> I recently got me a Raspberry Pi4b to use as a PiHole [1]. As a first step I
> put user-space Gentoo (i.e. aarch64 stage3) on it and now I am trying to set
> up my desktop to cross-compile binary packages for the PI, to keep the Pi
> up-to-date in reasonable computing time.
> [...]
> /usr/aarch64-unknown-linux-gnu/var/db/pkg on the desktop. I ran emerge
> --sync on the Pi and on the desktop approximatively at the same time.

You could simply rsync the portage tree from one host to the other to get an 
identical state on both.

> [different emerge -u on both hosts]
>
>  What am I missing to make the desktop use the same list of packages to
> be upgraded as the Pi? 

Do you have the same USE flags? I’ve never really dealt with cross-compiling. 
Maybe somewhere deep in the system the USE flags are changed. Try comparing 
the output of emerge --info from both hosts.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

Development aid is to give money from the poor of rich countries
to the rich of poor countries.


signature.asc
Description: PGP signature


Re: [gentoo-user] NAS box and switching from Phenom II X6 1090T to FX-6300

2024-04-18 Thread Frank Steinmetzger
Am Thu, Apr 18, 2024 at 07:26:30AM -0500 schrieb Dale:

>  If you don't play games, then definitely get integrated graphics.
> > I'd add to this, you could still play many games, especially older games 
> > using 
> > a modern APU.  The integrated graphics capability is broadly comparable 
> > with 
> > the entry level discrete GPUs.  For driving a couple of monitors and 
> > watching 
> > videos an APU is more than adequate, saves money on a graphics card and 
> > consumes less power.
> >
> 
> The biggest reason I like a separate video card, I can upgrade if
> needed.

If you don’t play (big) games, then there is no reason to upgrade (except 
if you plan on working with AI stuff).

> Built in video means a new mobo.

No, a new CPU. The mobo only provides the lanes from the iGPU to the 
connectors on the back. The only constraint imposed by the motherboard may 
be an older version of the display link, like DisplayPort 1.2 instead of 
1.4. Only the latter supports 4K @ 120 Hz, the former tops out at 60 Hz.

> I'd suspect even the wimpiest video card would do what I need.

In that case, every iGPU would do what you need. 🤡 The only exception may 
be support for some hot new video codec. RDNA2, as found in the Ryzen 
7000s, supports AV1 decoding, which was still lacking in the 5000s.

-- 
Grüße | Greetings | Salut | Qapla’
“Meow”   “Woof”Jeez, it’s really raining today!


signature.asc
Description: PGP signature


Re: [gentoo-user] NAS box and switching from Phenom II X6 1090T to FX-6300

2024-04-17 Thread Frank Steinmetzger
Am Wed, Apr 17, 2024 at 01:18:39PM -0400 schrieb Rich Freeman:
> On Wed, Apr 17, 2024 at 9:33 AM Dale  wrote:
> >
> > Rich Freeman wrote:
> >
> > > All AM5 CPUs have GPUs, but in general motherboards with video outputs
> > > do not require the CPU to have a GPU built in.  The ports just don't
> > > do anything if this is lacking, and you would need a dedicated GPU.
> > >
> >
> > OK.  I read that a few times.  If I want to use the onboard video I have
> > to have a certain CPU that supports it?  Do those have something so I
> > know which is which?  Or do I read that as all the CPUs support onboard
> > video but if one plugs in a video card, that part of the CPU isn't
> > used?  The last one makes more sense but asking to be sure.
> 
> To use onboard graphics, you need a motherboard that supports it, and
> a CPU that supports it.  I believe that internal graphics and an
> external GPU card can both be used at the same time.  Note that
> internal graphics solutions typically steal some RAM from other system
> use, while an external GPU will have its own dedicated RAM (and those
> can also make use of internal RAM too).

You can usually set the amount of graphics memory in the BIOS, depending on 
your need and RAM budget.

> The 7600X has a built-in RDNA2 GPU.   All the original Ryzen zen4 CPUs
> had GPU support, but it looks like they JUST announced a new line of
> consumer zen4 CPUs that don't have it - they all end in an F right
> now.

Yup.
G-series: big graphics for games and stuff, over 3 TFLOPS
F-series: no graphics at all
rest: small graphics (around 0.8 TFLOPS max), ample for desktops and media

X-Series: high performance
non-X: same as X, but with lower frequencies

The X series are boosted to higher frequencies which give you a bit more 
performance, but at the cost of disproportionally increased power 
consumption and thus heat. They are simply run above the sweet spot in order 
to get the longest bargraph in benchmarks. You can “simulate” a non-X by 
running an X at a lower power target, which can be set in the BIOS. In fact, 
once I have a Ryzen, I think I might limit its frequency to a bit below 
maximum just to avoid this inefficient region.

But I’ll be buying a G anyways. Its architecture is different, as it is 
basically a mobile chip in a desktop package.

As to the question about 5/7/9 in the other mail: it’s just a tier number. 
The more interesting part is the 4-digit number. 600s and below are 6-core 
chips, 700s and 800s have 8 cores, 900s have 12 cores or more.

The thousands give away the generation. AM5 is denoted by 7xxx. (Though 
there is another numbering scheme that does it quite differently, like 
7845H.)

> In any case, if you google the CPU you're looking at it will tell you
> if it supports integrated graphics.

I also recommend Wikipedia. It has tables of all kinds of stuff. Including 
all processors and their core features.
https://en.wikipedia.org/wiki/Zen_4
https://en.wikipedia.org/wiki/List_of_AMD_Ryzen_processors

> If you don't play games, then definitely get integrated graphics.
> Even if the CPU costs a tiny bit more, it will give you a free empty
> 16x PCIe slot at whatever speed the CPU supports (v5 in this case -
> which is as good as you can get right now).

Not to mention a cut in power draw.

> > I might add, simply right clicking on the desktop can take sometimes 20
> > or 30 seconds for the menu to pop up.  Switching from one desktop to
> > another can take several seconds, sometimes 8 or 10.  This rig is
> > getting slower.

Wut. I am running Plasma 6 on a Surface Go 1 whose Pentium Gold was slow 
even when it came out. It is half as fast as your 8350 and does not have 
such problems.
Benchmark FX 8350: https://www.cpubenchmark.net/cpu.php?id=1780
Benchmark Pentium Gold: https://www.cpubenchmark.net/cpu.php?id=3300

You have NVidia, right? Did you try the other graphics driver (i.e. 
proprietary ←→ foss)? Do those delays disappear if you disable 3D effects 
with Shift+Alt+F12?


> That sounds like RAM but I couldn't say for sure.  In any case a
> modern system will definitely help.

Well, is the RAM full? My 10-year-old PC has 32 gigs and still runs very 
smoothly (with Intel integrated graphics).

> > Given the new rig can have 128GBs, I assume it comes in 32GB sticks.
> 
> Consumer DDR5 seems to come as large as 48GB, though that seems like
> an odd size.

Actually, my product search page finds sticks with up to 96 GB. I believe 
the 48 size was introduced because for those to whom 32 was too small, 64 
was too expensive. DDR5 still is relatively pricey due to its higher 
electrical requirements.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

It’s quiet in the shadow, because you can’t hear the light.


signature.asc
Description: PGP signature


Re: [gentoo-user] NAS box and switching from Phenom II X6 1090T to FX-6300

2024-04-15 Thread Frank Steinmetzger
Am Mon, Apr 15, 2024 at 08:04:15AM -0500 schrieb Dale:

> > The physical connector is called M.2. The dimensions of the “sticks” are 
> > given in a number such as 2280, meaning 22 mm wide and 80 mm long. There 
> > are 
> > different lengths available from 30 to 110 mm. M.2 has different “keys”, 
> > meaning there are several variants of electrical hookup. Depending on that, 
> > it can support SATA, PCIe, or both. NVMe is a protocol that usually runs 
> > via 
> > PCIe. So for a modern setup, one usually buys NVMe drives, meaning they are 
> > connected via PCIe either directly to the CPU or over the chipset.
> >
> 
> 
> Ahh, that's why some of them look a little different.  I was wondering
> about that.  Keep in mind, I've never seen one in real life.  Just
> pictures or videos, or people talking about them on this list. 

I use one in my 10-year-old PC. The board only provides PCIe 2.0×2 to the 
slot, so I only get around 1 GB/s instead of the 3 GB/s the SSD can reach. 
But I bought the SSD with the intention of keeping it in the next build and 
I don’t notice the difference anyways.

> > There is also the other way around that: an adapter card for the M.2 slot 
> > that gives you SATA ports.
> >
> 
> I didn't know that.

I actually thought we mentioned it already in an earlier “NAS thingy” 
thread. :)

https://www.reddit.com/r/selfhosted/comments/s0bf1d/m2_sata_expansion_anyone_use_something_like_this/
https://www.amazon.de/dp/B09FZDQ6ZB
Maybe you’ll find something if you search for the controller chip (PCIe to 
SATA): JMB585. From what I’ve just read though, the cheap Chinese adapters 
don’t seem to be very sturdy. One person advised to put an M.2 → 
normal-PCIe adapter into the M.2 slot and then use a normal-formfactor 
controller card. After all, an M.2 slot is just a PCIe ×4 slot with a 
different connector.

BTW: there are also NVMe SSDs in the old 2.5″ format. This formfactor is 
called U.2, but beware the enterprise-level prices.

> I've seen some server type mobos that have SAS connectors which gives
> several options.  Some of them tend to have more PCIe slots which some
> regular mobos don't anymore.  Then there is that ECC memory as well.  If
> the memory doesn't cost to much more, I could go that route.  I'm not
> sure how much I would benefit from it but data corruption is a thing to
> be concerned about. 
> […]
> The problem with those cards, some of the newer mobos don't have as many
> PCIe slots to put those cards into anymore.  I think I currently have
> two such cards in my current rig.  The new rig would hold almost twice
> the number of drives.  Obviously, I'd need cards with more SATA ports. 

Indeed, consumer boards tend to get fewer normal PCIe slots. For AM4 boards, 
the search filter lets me filter for up to 6 slots, whereas for AM5 boards 
it stops at 4 slots.
AM4: https://skinflint.co.uk/?cat=mbam4&xf=18869_5%7E20502_UECCDIMM%7E4400_ATX
AM5: https://skinflint.co.uk/?cat=mbam5&xf=18869_4%7E20502_UECCDIMM%7E4400_ATX

> One reason I'm trying not to move to fast right now, besides trying to
> save up money, I'm trying to find the right CPU, mobo and memory combo. 
> None of them are cheap anymore.  Just the CPU is going to be around
> $400.  The mobo isn't to far behind if I go with a non server one. 

One popular choice for home servers is AM4’s Ryzen Pro 4650G. That’s an APU 
(so with powerful internal graphics), but also with ECC support (hence the 
Pro moniker). The APU is popular because 1) on AM4 only APUs have graphics 
at all, 2) it allows for use as a compact media server, as no bulky GPU is 
needed.

Speaking of GPU: We’ve had the topic before, but keep in mind that if you go 
with AM5, you don’t need a dGPU. Unless you go with one of those F 
processors. So there is one more slot available.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

Order is one half of your life, but the other half is nicer.


signature.asc
Description: PGP signature


Re: [gentoo-user] Re: [gentoo-dev] Current unavoidable use of xz utils in Gentoo

2024-04-15 Thread Frank Steinmetzger
Am Sun, Mar 31, 2024 at 08:33:20AM -0400 schrieb Rich Freeman:
> (moving this to gentoo-user as this is really getting off-topic for -dev)
> […]
> We're going on almost 20 years since the Snowden revelations, and back
> then the NSA was basically doing intrusion on an industrial scale.

Weeaalll, it’s been 11 years in fact. Considering that is more than 10 
years, one could argue it is approaching 20. ;-)

I can remember the year well because Snowden is the same vintage as I am and 
he became 30 about when this all came out.

-- 
Grüße | Greetings | Salut | Qapla’
Others make mistakes, too -- but we have the most experience in it.


signature.asc
Description: PGP signature


Re: [gentoo-user] NAS box and switching from Phenom II X6 1090T to FX-6300

2024-04-15 Thread Frank Steinmetzger
Am Sat, Apr 13, 2024 at 08:23:27AM -0500 schrieb Dale:
> Rich Freeman wrote:
> > On Sat, Apr 13, 2024 at 8:11 AM Dale  wrote:
> >> My biggest thing right now, finding a mobo with plenty of PCIe slots.
> >> They put all this new stuff, wifi and such, but remove things I do need,
> >> PCIe slots.
> > PCIe and memory capacity seem to have become the way the
> > server/workstation and consumer markets are segmented.
> >
> > AM5 gets you 28x v5 lanes.  SP5 gets you 128x v5 lanes.  The server
> > socket also has way more memory capacity, though I couldn't quickly
> > identify exactly how much more due to the ambiguous way in which DDR5
> > memory channels are referenced all over the place.  Suffice it to say
> > you can put several times as many DIMMs into a typical server
> > motherboard, especially if you have two CPUs on it (two CPUs likewise
> > increases the PCIe capacity).
> 
> I see lots of mobos with those little hard drives on a stick.  I think
> they called NVME or something, may have spelling wrong.

The physical connector is called M.2. The dimensions of the “sticks” are 
given in a number such as 2280, meaning 22 mm wide and 80 mm long. There are 
different lengths available from 30 to 110 mm. M.2 has different “keys”, 
meaning there are several variants of electrical hookup. Depending on that, 
it can support SATA, PCIe, or both. NVMe is a protocol that usually runs via 
PCIe. So for a modern setup, one usually buys NVMe drives, meaning they are 
connected via PCIe either directly to the CPU or over the chipset.

> For most
> people, that is likely awesome.  For me, I think I'd be happy with a
> regular SSD.  Given that, I'd like them to make a mobo where one can say
> cut off/disable that NVME thing and make use of that "lane" as a PCIe
> slot(s).  Even if that means having a cable that hooks to the mobo and
> runs elsewhere to connect PCIe cards. In other words, have one slot
> that is expandable to say three or four slots with what I think is
> called a back-plane.

There is also the other way around that: an adapter card for the M.2 slot 
that gives you SATA ports.

> I have considered getting a server type mobo and CPU for my new build. 

The only reason I got a server board for my little 4-slot NAS is to get ECC 
support. (Plus you don’t get non-server Mini-ITX with more than four SATAs). 
But it runs the smallest i3 I could get. It’s a NAS, not a workstation. It 
serves files, nothing more. I don’t mind if updates take longer than on a 
Desktop, which is why I don’t see a point in speccing it out to the top 
CPU-wise. This only adds cost to acquisition and upkeep.

I just did the profile switch to 23, and it rebuilt 685 packages in a little 
over six hours, plus 1½ hours for gcc beforehand.

> As you point out, they are packed with features I could likely use. 

“Could likely”? Which features exactly? As you say yourself:

> Thing is, the price tag makes me faint and fall out of my chair.  Even
> used ones that are a couple years old, in the floor I go.  -_-  I looked
> up a SP5 AMD CPU, pushing $800 just for the CPU on Ebay, used.  The mobo
> isn't cheap either.  I don't know if that would even serve my purpose. 

Exactly. Those boards and CPUs are made to run servers that serve entire 
SMBs so that the employees can work on stuff at the same time. As a one-man 
entity, I don’t expect you’ll ever really need that raw power. If it’s just 
for SATA ports, you can get controller cards for those.

> The biggest thing I need PCIe slots for, drive controllers.  I thought
> about buying a SAS card and having it branch out into a LOT of drives. 
> Still, I might need two cards even then. 

But it would be the most logical choice.

> It's like looking at the cereal isle in a store.  All those choices and
> most of them . . . . are corn.  ROFL 

Nice one.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

I’ve been using vi for 15 years, because I don’t know with which command
to close it.


signature.asc
Description: PGP signature


Re: [gentoo-user] How to synchronise between 2 locations

2024-03-29 Thread Frank Steinmetzger
Am Thu, Mar 28, 2024 at 09:26:33PM -0500 schrieb Grant Taylor:
> On 3/27/24 13:58, J. Roeleveld wrote:
> > Hi all,
> 
> Hi,
> 
> > I am looking for a way to synchronise a filesystem between 2 servers. 
> > Changes
> > can occur on both sides which means I need to have it synchronise in both
> > directions.
> 
> What sort of turn around time are you looking for?  seconds, minus, hours,
> longer?
> 
> > Does anyone have any thoughts on this?
> 
> I would wonder about using rsync.

Rsync can’t handle file moves. Given:

> > Also, both servers are connected using a slow VPN link, which is why I can't
> > simply access files on the remote server.

it would be beneficial to conserve traffic as much as possible.

While you can use the -u flag to only overwrite if the source is newer than 
the destination, AFAIK rsync can’t detect if the destination has also been 
altered since the last sync, so it might clobber important changes. That’s 
why sync tools use a metadata cache to remember last edit timestamps.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

In the 60's people took acid to make the world weird.
Now the world is weird and people take Prozac to make it normal.


signature.asc
Description: PGP signature


Re: [gentoo-user] How to synchronise between 2 locations

2024-03-29 Thread Frank Steinmetzger
Am Thu, Mar 28, 2024 at 05:33:43PM +0100 schrieb ralfconn:
> Il 28/03/24 07:30, J. Roeleveld ha scritto:
> > > Unison creates a local index of all files it syncronised. So when you 
> > > move a
> > > file around on one end, Unison will notice that because the file at the 
> > > new
> > > location has the same hash as the file at the old location. As a result, 
> > > it
> > > does not transmit the file anew to the remote host, but instead copies it
> > > locally on the remote host.
> > > 
> > > Since Unison uses ssh underneath, you can use ssh’s transparent 
> > > compression
> > > to speed up the transfer.
> > Unison sounds interesting. How does it handle conflicts (eg, file is 
> > changed on
> > both sides?)
> > 
> I use Unison GUI on one of the two machines (on the other peer it's just a
> program invoked from the ssh). When the analysis is complete, the GUI shows
> what it would do to sync the machines, indicating the conflicts and giving
> you the chance to choose what to do.
> 
> I believe it can be used from the command line or maybe even in batch mode
> instead of GUI but I never did it that way.

You can set up a merge command to solve conflicts on the cmdline, such as 
vimdiff. But when I set that, it blocks the GUI. Maybe I did something wrong 
with the setup. Anyways, when I get a conflict, I make a backup of the file 
locally, overwrite it with the remote and then do a conflict resolution with 
vim.

In my every-day workflow, I usually only get conflicts in text files (logs, 
notes, and so on). Binary conflicts are rare and usually due to recent 
actions, such as editing an image or music file. In that case I can decide 
on a per-case-basis which version to keep.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

How does the Heisenberg compensator work? – Thank you, fine.


signature.asc
Description: PGP signature


Re: [gentoo-user] How to synchronise between 2 locations

2024-03-29 Thread Frank Steinmetzger
Am Thu, Mar 28, 2024 at 01:08:03AM +0100 schrieb Alarig Le Lay:
> On Wed 27 Mar 2024 20:37:27 GMT, Frank Steinmetzger wrote:
> > +1 for Unison. I’ve been using it for many years now to synchronise between 
> > the four PC systems in my household.
> > 
> > Unison creates a local index of all files it synchronised. So when you move 
> > a 
> > file around on one end, Unison will notice that because the file at the new 
> > location has the same hash as the file at the old location. As a result, it 
> > does not transmit the file anew to the remote host, but instead copies it 
> > locally on the remote host.
> > 
> > Since Unison uses ssh underneath, you can use ssh’s transparent compression 
> > to speed up the transfer.
> 
> I’ve been thinking about using it to synchronise dovecot maildir folders,
> since dsync is now deprecated. But I’m not sure about it as I never used
> it under “heavy” loads. Do you have any thoughts about it?

Among the many files I sync is my Mail folder. It currently contains around
106k files. My music collection is another 50k files. If there are no 
changes since a recent run, another scan for changes takes only a few 
seconds.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

A computer only crashes if you haven’t saved your text for a long time.


signature.asc
Description: PGP signature


Re: [gentoo-user] How to synchronise between 2 locations

2024-03-27 Thread Frank Steinmetzger
Am Wed, Mar 27, 2024 at 03:59:18PM -0400 schrieb Matt Connell:

> > Syncthing is also a good idea. The major difference: syncthing is a 
> > permanently running daemon, so changes are synced very fast (the
> > interval is configurable, IIRC). OTOH, Unison is run individually by
> > you. That’s why I prefer the latter: in case I broke some file on my
> > machine, I can get it back from another machine without having to
> > break out the backup disk (which may not even have what I need
> > because my backup interval is too big).
> 
> Good point.  I mainly use syncthing as a "stuff I need on multiple
> machines" bucket, rather than a big directory of active working files.

I sync most of my files on the home and media partitions: all my documents, 
photos and music library. I do use syncthing, though -- between my android 
phone and PC, because I don’t like to fiddle with the filesystem on the very 
constrained UI of touch devices.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

Can you give me a cigarette?  Mine are still in the vending machine.


signature.asc
Description: PGP signature


Re: [gentoo-user] How to synchronise between 2 locations

2024-03-27 Thread Frank Steinmetzger
Am Wed, Mar 27, 2024 at 03:42:07PM -0400 schrieb Matt Connell:
> On Wed, 2024-03-27 at 19:58 +0100, J. Roeleveld wrote:
> > Hi all,
> > 
> > I am looking for a way to synchronise a filesystem between 2 servers.
> > Changes can occur on both sides which means I need to have it
> > synchronise in both directions.
> > 
> > Does anyone have any thoughts on this?
> > 
> > Also, both servers are connected using a slow VPN link, which is why
> > I can't simply access files on the remote server.
> 
> I've been using syncthing for years and am extremely pleased with it. 
> It works so well that I sometimes forget that it's there, truly in the
> It Just Works category of software.

Syncthing is also a good idea. The major difference: syncthing is a 
permanently running daemon, so changes are synced very fast (the interval is 
configurable, IIRC). OTOH, Unison is run individually by you. That’s why I 
prefer the latter: in case I broke some file on my machine, I can get it 
back from another machine without having to break out the backup disk (which 
may not even have what I need because my backup interval is too big).

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

I have seen things you lusers would not believe.
I've seen Sun monitors on fire off the side of the multimedia lab.
I've seen NTU lights glitter in the dark near the Mail Gate.
All these things will be lost in time, like the root partition last week.
Time to die.


signature.asc
Description: PGP signature


Re: [gentoo-user] How to synchronise between 2 locations

2024-03-27 Thread Frank Steinmetzger
Am Wed, Mar 27, 2024 at 08:18:14PM +0100 schrieb ralfconn:
> Il 27/03/24 19:58, J. Roeleveld ha scritto:
> > Hi all,
> > 
> > I am looking for a way to synchronise a filesystem between 2 servers. 
> > Changes
> > can occur on both sides which means I need to have it synchronise in both
> > directions.
> > 
> > Does anyone have any thoughts on this?
> > 
> > Also, both servers are connected using a slow VPN link, which is why I can't
> > simply access files on the remote server.


+1 for Unison. I’ve been using it for many years now to synchronise between 
the four PC systems in my household.

> I use it just for that but can't say anything about the VPN bit, my
> servers are on local network.

Unison creates a local index of all files it synchronised. So when you move a 
file around on one end, Unison will notice that because the file at the new 
location has the same hash as the file at the old location. As a result, it 
does not transmit the file anew to the remote host, but instead copies it 
locally on the remote host.

Since Unison uses ssh underneath, you can use ssh’s transparent compression 
to speed up the transfer.
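
The move detection can be illustrated with a toy sketch in Python (my own 
illustration, not Unison’s actual algorithm): a “new” file whose hash is 
already in the index under another path is treated as a local copy rather 
than a transfer.

```python
import hashlib

def detect_moves(old_index, new_files):
    """old_index: path -> hash from the last sync run.
    new_files: path -> file content on the changed replica.
    Returns path -> planned action."""
    hash_to_path = {h: p for p, h in old_index.items()}
    actions = {}
    for path, data in new_files.items():
        h = hashlib.sha256(data).hexdigest()
        if old_index.get(path) == h:
            continue  # unchanged file, nothing to do
        if h in hash_to_path:
            # same content already known under another name: local copy
            actions[path] = f"copy locally from {hash_to_path[h]}"
        else:
            actions[path] = "transfer"
    return actions

old = {"music/song.mp3": hashlib.sha256(b"tune").hexdigest()}
new = {"albums/song.mp3": b"tune", "notes.txt": b"hello"}
actions = detect_moves(old, new)
# the moved mp3 is copied locally; only the genuinely new file is transferred
```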

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

If all people were the same, one would basically suffice.


signature.asc
Description: PGP signature


Re: [gentoo-user] How do I zap a specific area of a gnumeric spreadsheet page?

2024-03-21 Thread Frank Steinmetzger
Am Thu, Mar 21, 2024 at 05:46:31PM -0400 schrieb Walter Dnes:

>   The province of Ontario does weekly Covid data updates which I
> summarize and post on the DSLReports Canchat subforum, e.g.
> https://www.dslreports.com/forum/r33854514-#google_vignette  Note the
> data gap in the pink and brown lines on the 3rd and 4th graphs.  That's
> actual missing data.  In the underlying spreadsheet page those gaps
> initially show up as zeros.  I manually blank out region B1258:C1299
> (i.e. 2023/09/09 to 2023/10/20) every week when I update so that it
> doesn't show up as zero hospitalizations.  How do I set up and execute a
> macro to zap the contents of region B1258:C1299 on a page?

Why not make the alteration one step earlier -- in the CSV?
There are CSV abstraction tools like `q`, which gives you a SQL-like 
interface to a csv file. Or you could write a quick transformer in Python, 
if you know the language a bit.

A sketch in Python (I haven’t worked with csv in Python in a looong time, so 
treat it as a starting point):

import csv
with open("input file", newline='') as A, \
     open("output file", 'w', newline='') as B:
    reader = csv.reader(A)
    writer = csv.writer(B)
    for rownum, row in enumerate(reader, start=1):
        if 1258 <= rownum <= 1299:
            # write a modified row which has columns B and C blanked
            writer.writerow(row[:1] + ['', ''] + row[3:])
        else:
            writer.writerow(row)

>   Note that I have to first remove the previous week's file, because wget
> won't overwrite it, and skips the download altogether.

Maybe remove the -r from rm, just for peace of mind. Also, such minimalist 
scripts that don’t use bash features can be sh scripts instead. This 
speeds things up, as sh loads faster than bash. ;-)

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

It is not enough to have no opinion, you must also be unable to express it.


signature.asc
Description: PGP signature


Re: [gentoo-user] Suggestions for backup scheme?

2024-02-07 Thread Frank Steinmetzger
Am Tue, Jan 30, 2024 at 06:15:09PM - schrieb Grant Edwards:
> I need to set up some sort of automated backup on a couple Gentoo
> machines (typical desktop software development and home use). One of
> them used rsnapshot in the past but the crontab entries that drove
> that have vanished :/ (presumably during a reinstall or upgrade --
> IIRC, it took a fair bit of trial and error to get the crontab entries
> figured out).
> 
> I believe rsnapshot ran nightly and kept daily snapshots for a week,
> weekly snapshots for a month, and monthly snapshots for a couple
> years.
> 
> Are there other backup solutions that people would like to suggest I
> look at to replace rsnapshot?  I was happy enough with rsnapshot (when
> it was running), but perhaps there's something else I should consider?

In my early backup times I, too, used rsnapshot to back up my ~ and rsync 
for my big media files. But that only included my PC. My laptop was wholly 
un-backed-up. I only synchronised much of my home and my audio collection 
between the two with unison. At some point my external 3 TB drive became 
free and then I started using borg to finally do proper backups.

Borg is very similar to restic; I actually used the two in parallel for a 
while to compare them, but stayed with borg. One pain point was that I 
couldn’t switch off restic’s own password protection. Since all my backup 
disks are LUKSed anyway, I don’t need that.

Since borg works block-based, it does deduplication without extra cost and 
it is suitable for big image files which don’t change much. I do full 
filesystem backups of /, ~ and my media partition of my main PC and my 
laptop. I have one repository for each of those three filesystems, and each 
repo receives the data from both machines, so they are deduped. Since both 
machines run Arch, their roots are binary identical. The same goes for my 
unison-synced homes.
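
The dedup idea can be shown with a toy sketch (fixed-size chunks for 
simplicity; borg actually uses content-defined chunking and an encrypted 
chunk store):

```python
import hashlib

def store_chunks(data, store, chunk_size=4):
    """Split data into chunks and store each unique chunk once, keyed by
    its hash. Returns the list of chunk references for this file."""
    refs = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        key = hashlib.sha256(chunk).hexdigest()
        store.setdefault(key, chunk)  # identical chunks are stored only once
        refs.append(key)
    return refs

store = {}
refs_a = store_chunks(b"AAAABBBBCCCC", store)  # first backup
refs_b = store_chunks(b"AAAABBBBDDDD", store)  # second, mostly identical
# the store now holds 4 unique chunks instead of 6
```

Two backups of nearly identical data share most chunks, which is why two 
machines with binary-identical roots cost almost nothing extra.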

Borg has retention logic built in. You can say you want to keep the latest 
archive of each of the last 6 days/weeks/months/years (it even goes down to 
seconds), and of course you can combine those rules. The only thing is that 
the rules don’t overlap: if you want to keep the last 14 days and the last 
four weeks, the weekly retention starts after the last daily snapshot.
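
Such a policy is applied with borg prune; a sketch (repository path and the 
counts are placeholders, check `borg prune --help` for your version):

```
# keep the newest archive per day for 14 days, per week for 4 weeks,
# per month for 6 months; everything older becomes eligible for deletion
borg prune --list --keep-daily 14 --keep-weekly 4 --keep-monthly 6 /path/to/repo
```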

In summary, advantages:
+ fast dedup, built-in compression (different algos and levels configurable)
+ big data files allow for quick mirroring of repositories.
  I simply rsync my primary backup disk to two other external HDDs.
+ Incremental backups are quite fast because borg uses a cache to detect
  changed files quickly.
Disadvantages:
- you need borg itself to mount or extract the backups
- it is not as fast as native disk access, especially during restore and 
  when getting a full file listing, due to lots of random I/O on the HDD.


As an example, I currently have 63 snapshots in my data partition repository:

# borg list data/
tp_2021-06-07   Mon, 2021-06-07 16:27:44 
[5f9ebd9f24353c340691b2a71f5228985a41699d2e23473ae4e9e795669c8440]
kern_2021-06-07 Mon, 2021-06-07 23:58:56 
[19c76211a9c35432e6a66ac1892ee19a08368af28d2d621f509af3d45f203d43]
[... 55 more lines ...]
kern_2024-01-14 Sun, 2024-01-14 20:53:23 
[499ce7629e64cffb7ec6ec9ffbf0c595e4ede3d93f131a9a4b424b165647f645]
tp_2024-01-14   Sun, 2024-01-14 20:57:42 
[ea2baef3e4bb49c5aec7cf8536f7b00b55fb27ecae3a80ef9f5a5686a1da30d5]
kern_2024-01-21 Sun, 2024-01-21 23:42:46 
[71aa2ce6cf4021712f949af068498bfda7797b5d1c5ddc0f0ce8862b89e48961]
tp_2024-01-21   Sun, 2024-01-21 23:48:24 
[45e35ed9206078667fa62d0e4a1ac213e77f52415f196101d14ee21e79fc393d]
kern_2024-02-04 Sun, 2024-02-04 23:16:43 
[e1b015117143fad6b89cea66329faa888cffc990644e157b1d25846220c62448]
tp_2024-02-04   Sun, 2024-02-04 23:23:15 
[e9b167ceec1ab9a80cbdb1acf4ff31cd3935fc23e81674cad1b8694d98547aeb]

The last “tp” (Thinkpad) snapshot contains 1 TB, “kern” (my PC) 809 GB.
And here you see how much space this actually takes on disk:

# borg info data/
[ ... ]
 Original size   Compressed sizeDeduplicated size
All archives: 56.16 TB  54.69 TB  1.35 TB

Obviously, compression doesn’t do much for media files. But it is very 
effective in the repository for the root partitions:

# borg info arch-root/
[ ... ]
 Original size   Compressed sizeDeduplicated size
All archives:  1.38 TB 577.58 GB 79.41 GB

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

“She understands.  She doesn’t comprehend.” – River Tam, Firefly


signature.asc
Description: PGP signature


Re: [gentoo-user] rsync options after backup restore. Transfer speed again.

2023-10-23 Thread Frank Steinmetzger
Am Mon, Oct 23, 2023 at 02:29:26AM -0500 schrieb Dale:
> Dale wrote:
> >
> > Second problem.  The transfer speed is back to the old slower speed. 
> > I'm pretty sure I am using the same old options on both ends.  Still,
> > it's back to being slow again.  Some info:
> >
> >
> > <<< SNIP >>>
> > Did I miss something?  Typo maybe?  I'm pretty sure I used copy and
> > paste but still. 
> >
> > Thanks.
> >
> > Dale
> >
> > :-)  :-) 
> 
> 
> I been working on the speed problem again.  I rebuilt the kernel on
> fireball and I think some changes made a huge change.  This is the
> results from fireball now:
> 
> 
> root@fireball / # cryptsetup benchmark
> # Tests are approximate using memory only (no storage IO).
> PBKDF2-sha1   931239 iterations per second for 256-bit key
> PBKDF2-sha256    1356501 iterations per second for 256-bit key
> PBKDF2-sha512 972705 iterations per second for 256-bit key
> PBKDF2-ripemd160  648871 iterations per second for 256-bit key
> PBKDF2-whirlpool  362077 iterations per second for 256-bit key
> argon2i   5 iterations, 1048576 memory, 4 parallel threads (CPUs)
> for 256-bit key (requested 2000 ms time)
> argon2id  4 iterations, 1048576 memory, 4 parallel threads (CPUs)
> for 256-bit key (requested 2000 ms time)
> # Algorithm |   Key |  Encryption |  Decryption
>     aes-cbc    128b   570.8 MiB/s  2045.6 MiB/s
>     serpent-cbc    128b    91.1 MiB/s   310.0 MiB/s
>     twofish-cbc    128b   198.7 MiB/s   218.9 MiB/s
>     aes-cbc    256b   428.8 MiB/s  1670.4 MiB/s
>     serpent-cbc    256b    91.6 MiB/s   309.5 MiB/s
>     twofish-cbc    256b   199.8 MiB/s   219.2 MiB/s
>     aes-xts    256b  1821.2 MiB/s  1767.1 MiB/s
>     serpent-xts    256b   265.9 MiB/s   270.2 MiB/s
>     twofish-xts    256b   201.0 MiB/s   204.2 MiB/s
>     aes-xts    512b  1440.0 MiB/s  1445.9 MiB/s
>     serpent-xts    512b   265.0 MiB/s   257.2 MiB/s
>     twofish-xts    512b   198.2 MiB/s   201.6 MiB/s
> root@fireball / #

There you go. Told ya. :)

> As you can see, aes-cbc is fast now and I think that is what cryptsetup
> uses.  It used to be really slow I think. 

Cryptsetup uses aes-xts these days, I think it’s been mentioned in this 
thread somewhere.

> Now on to the nas box.  I've recompiled the kernel with some added
> options.  Still, it refuses to speed up.  I kinda think it is the CPU
> lacking support for encryption.  I'm asking others just in case I'm
> missing something.  Also, fireball uses a older kernel, 5.14 or so.  The
> nas box uses 6.1 or so.  The menus are different and that is why it is
> hard to get them to match up.  I may have missed something.

Everything you need for that is in the crypto menu at the bottom.

> This is the bench mark from nas box. 
> 
> nas ~ # cryptsetup benchmark
> # Tests are approximate using memory only (no storage IO).
> PBKDF2-sha1   700919 iterations per second for 256-bit key
> PBKDF2-sha256 924670 iterations per second for 256-bit key
> PBKDF2-sha512 729190 iterations per second for 256-bit key
> PBKDF2-ripemd160  517559 iterations per second for 256-bit key
> PBKDF2-whirlpool  359593 iterations per second for 256-bit key
> argon2i   4 iterations, 1048576 memory, 4 parallel threads (CPUs)
> for 256-bit key (requested 2000 ms time)
> argon2id  4 iterations, 1048576 memory, 4 parallel threads (CPUs)
> for 256-bit key (requested 2000 ms time)
> # Algorithm |   Key |  Encryption |  Decryption
>     aes-cbc    128b    63.6 MiB/s    41.6 MiB/s
>     serpent-cbc    128b    81.0 MiB/s   212.4 MiB/s
>     twofish-cbc    128b   192.5 MiB/s   222.1 MiB/s
>     aes-cbc    256b    47.5 MiB/s    30.0 MiB/s
>     serpent-cbc    256b    81.2 MiB/s   212.7 MiB/s
>     twofish-cbc    256b   192.3 MiB/s   221.9 MiB/s
>     aes-xts    256b    65.9 MiB/s    41.6 MiB/s
>     serpent-xts    256b   201.7 MiB/s   205.7 MiB/s
>     twofish-xts    256b   216.2 MiB/s   214.5 MiB/s
>     aes-xts    512b    48.8 MiB/s    30.0 MiB/s
>     serpent-xts    512b   202.7 MiB/s   205.6 MiB/s
>     twofish-xts    512b   216.4 MiB/s   214.0 MiB/s
> nas ~ #
> […]
> The aes shows up on fireball.  It does not on the nas box.  Is the speed
> above as good as I can expect with this older CPU?

If not done yet, you can check whether you enabled the 64 bit versions of 
the crypto modules. They could push performance by a few more percent.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

The last chance is often the second-last, if you look close enough.


signature.asc
Description: PGP signature


Re: [gentoo-user] rsync options after backup restore. Transfer speed again.

2023-10-21 Thread Frank Steinmetzger
Am Fri, Oct 20, 2023 at 09:20:45PM -0500 schrieb Dale:
> Howdy,
> 
> As most know, I had to restore from backups recently.  I also reworked
> my NAS box.  I'm doing my first backup given that I have more files that
> need to be added to the backups.  When I started the rsync, it's
> starting from the first file and updating each file as it goes as if all
> of them changed.  Given that likely 95% of the files hasn't changed, I
> figure this is being done because of a time stamp or something.  Is
> there a way to tell rsync to ignore the time stamp or something or if
> the files are the same size, just update the time stamp?  Is there a way
> to just update the time stamps on the NAS box?  Is there a option I
> haven't thought of to work around this? 
> 
> This is the old command I was using to create the backups.
> 
> time rsync -uivr --progress --delete /home/dale/Desktop/Crypt/TV_Series
> /mnt/TV_Backup/

This didn’t preserve timestamps. Hence one piece of information from which 
rsync can tell whether two files are identical got lost. So now your 
restore has more recent timestamps than the backup. If you use -u, rsync 
should skip all files.

My perfectionist self doesn’t like discarding timestamp information, because 
then my system can’t tell me how old some file is, and how old (or young) I 
was when I created it and so on. I once didn’t pay enough attention when 
restoring a backup back when I was still on Windows, which is why I don’t 
have many files left that are dated before April 2007, even though they are 
from 2000 ± x.

BTW: using -i together with -v is redundant. -v only prints the file path, 
whereas -i does the same and prepends a column of reasons.

> I tried these to try to get around it.
> 
> time rsync -ar --progress --delete /home/dale/Desktop/Crypt/TV_Series 
> /mnt/TV_Backup/

-a and -r are also redundant, as -a includes -r.

> I looked at the man page and the options there.  I don't see anything
> that I think will help.  Is there a way around this? 

My muscle memory uses `rsync -ai` for almost everything. And when I do full 
root file systems or stuff where I know I will need them, I use -axAHX 
instead. Since this preserves all the usual data, I’ve never really had 
rsync wanting to do everything all over.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

Arrogance is the art of being proud of one’s own stupidity.


signature.asc
Description: PGP signature


Re: [gentoo-user] Getting output of a program running in background after a crash

2023-10-13 Thread Frank Steinmetzger
Am Thu, Oct 12, 2023 at 08:35:21PM -0500 schrieb Dale:

> Frank Steinmetzger wrote:
> > Am Thu, Oct 12, 2023 at 10:44:39PM +0100 schrieb Michael:
> >
> >> Why don't you test throughput without encryption to confirm your 
> >> assumption?
> > What does `cryptsetup benchmark` say? I used to use a Celeron G1840 in my 
> > NAS, which is Intel Haswell without AES_NI. It was able to do ~ 150 MB/s 
> > raw 
> > encryption throughput when transferring to or from a LUKS’ed image in a 
> > ramdisk, so almost 150 % of gigabit ethernet speed.
> […]
> I've never used that benchmark.  Didn't know it exists.  This is the
> results.  Keep in mind, fireball is my main rig.  The FX-8350 thingy. 
> The NAS is currently the old 770T system.  Sometimes it is a old Dell
> Inspiron but not this time.  ;-)
> 
> root@fireball / # cryptsetup benchmark
> […]
> # Algorithm |   Key |  Encryption |  Decryption
>     aes-cbc    128b    63.8 MiB/s    51.4 MiB/s
>     serpent-cbc    128b    90.9 MiB/s   307.6 MiB/s
>     twofish-cbc    128b   200.4 MiB/s   218.4 MiB/s
>     aes-cbc    256b    54.6 MiB/s    37.5 MiB/s
>     serpent-cbc    256b    90.4 MiB/s   302.6 MiB/s
>     twofish-cbc    256b   198.2 MiB/s   216.7 MiB/s
>     aes-xts    256b    68.0 MiB/s    45.0 MiB/s
>     serpent-xts    256b   231.9 MiB/s   227.6 MiB/s
>     twofish-xts    256b   191.8 MiB/s   163.1 MiB/s
>     aes-xts    512b    42.4 MiB/s    18.9 MiB/s
>     serpent-xts    512b   100.9 MiB/s   124.6 MiB/s
>     twofish-xts    512b   154.8 MiB/s   173.3 MiB/s
> root@fireball / #

Phew, this looks very slow. As you can clearly see, this is not even enough 
to saturate Gbit ethernet. Unfortunately, I don’t have any benchmark data 
left over from the mentioned Celeron.
(Perhaps that’s why the industry chose to implement AES in hardware: because 
it was the slowest of the bunch.)

It looks like there is no hardware acceleration involved. But according to 
https://en.wikipedia.org/wiki/List_of_AMD_FX_processors#Piledriver-based and 
https://www.cpu-world.com/CPUs/Bulldozer/AMD-FX-Series%20FX-8350.html it has 
the extension. I’d say something is amiss in your kernel.

Heck, even my ultra-low-end eeepc with its no-AES Atom processor N450 from 
2009 is less than 50 % slower, and for aes-xts 512b it is actually faster! 
And that was a snail even in its day. It is so low-end that its in-order 
architecture is not vulnerable to spectre and meltdown. :D It just spent 
several minutes crunching on an update of the GPG keyring of its Arch Linux 
installation.

eeePC # LC_ALL=C cryptsetup benchmark
# Tests are approximate using memory only (no storage IO).
PBKDF2-sha1   228348 iterations per second for 256-bit key
PBKDF2-sha256 335222 iterations per second for 256-bit key
PBKDF2-sha512 253034 iterations per second for 256-bit key
PBKDF2-ripemd160  172690 iterations per second for 256-bit key
PBKDF2-whirlpool   94705 iterations per second for 256-bit key
argon2i   4 iterations, 71003 memory, 4 parallel threads (CPUs) for 256-bit 
key (requested 2000 ms time)
argon2id  4 iterations, 71506 memory, 4 parallel threads (CPUs) for 256-bit 
key (requested 2000 ms time)
# Algorithm |   Key |  Encryption |  Decryption
aes-cbc        128b    31.0 MiB/s    33.6 MiB/s
serpent-cbc    128b    28.1 MiB/s    62.9 MiB/s
twofish-cbc    128b    28.6 MiB/s    31.0 MiB/s
aes-cbc        256b    24.0 MiB/s    25.6 MiB/s
serpent-cbc    256b    28.3 MiB/s    62.7 MiB/s
twofish-cbc    256b    28.6 MiB/s    31.0 MiB/s
aes-xts        256b    32.5 MiB/s    33.4 MiB/s
serpent-xts    256b    50.5 MiB/s    60.5 MiB/s
twofish-xts    256b    25.6 MiB/s    30.7 MiB/s
aes-xts        512b    25.0 MiB/s    25.6 MiB/s
serpent-xts    512b    60.2 MiB/s    60.4 MiB/s
twofish-xts    512b    30.2 MiB/s    30.7 MiB/s

> root@nas:~# cryptsetup benchmark
> […]
> # Algorithm |   Key |  Encryption |  Decryption
>     aes-cbc    128b   130.6 MiB/s   128.0 MiB/s
>     serpent-cbc    128b    64.7 MiB/s   161.8 MiB/s
>     twofish-cbc    128b   175.4 MiB/s   218.8 MiB/s
>     aes-cbc    256b   120.1 MiB/s   122.2 MiB/s
>     serpent-cbc    256b    84.5 MiB/s   210.8 MiB/s
>     twofish-cbc    256b   189.5 MiB/s   218.6 MiB/s
>     aes-xts    256b   167.0 MiB/s   162.1 MiB/s
>     serpent-xts    256b   173.9 MiB/s   204.5 MiB/s
>     twofish-xts

Re: [gentoo-user] Getting output of a program running in background after a crash

2023-10-12 Thread Frank Steinmetzger
Am Thu, Oct 12, 2023 at 10:44:39PM +0100 schrieb Michael:

> >  It only does this when I'm copying files over.  Right now I'm copying
> >  about 26TBs of data over ethernet and it is taking a while.  Once I
> >  stop it or it finishes the copy, the CPU goes to about nothing,
> >  unless I'm doing something else.  So it has something to do with the
> >  copy process.
> > >>> 
> > >>> Or the network. What are you using to copy? If you use rsync, you can
> > >>> make use the the --bwlimit option to reduce the speed and network
> > >>> load.
> > >> 
> > >> Reduce?  I wouldn't complain if it went faster.  I think it is about as
> > >> fast as it is going to get tho.
> > > 
> > > And that may be contributing to the CPU usage. Slowing down the flow may
> > > make the comouter more usable, and you're never going to copy 26TB
> > > quickly, especially over ethernet.
> > > 
> > >> While I'm not sure what is keeping me from copying as fast as the drives
> > >> themselves can go, I suspect it is the encryption.
> 
> Why don't you test throughput without encryption to confirm your assumption?

What does `cryptsetup benchmark` say? I used to use a Celeron G1840 in my 
NAS, which is Intel Haswell without AES_NI. It was able to do ~ 150 MB/s raw 
encryption throughput when transferring to or from a LUKS’ed image in a 
ramdisk, so almost 150 % of gigabit ethernet speed.

> > > If you're copying over the network, that will be the limiting factor.
> > 
> > Someone posted some extra options to mount with and add to exports
> > file.

Ah right, you use NFS. If not, I’d have suggested not to use rsync over ssh, 
because that would indeed introduce a lot of encryption overhead.

> > I still think encryption is slowing it down some.  As you say tho,
> > ethernet isn't helping which is why I may look into other options later,
> > faster ethernet or fiber if I can find something cheap enough. 
> 
> There are a lot of hypotheses in your statements, but not much testing to 
> prove or disprove any of them.
> 
> Why don't you try to isolate the cause by testing one system element at a 
> time 
> and see what results you get.
> […]
> Unless you're running Pentium 4 or some other old CPU, it is almost certain 
> your CPU is capable of using AES-NI to offload to hardware some/all of the 
> encryption/decryption load - as long as you have the crypto module built in 
> your kernel.

The FX-8350 may be old, but it actually does have AES instructions.

Here is my Haswell i5 (only two years younger than the FX) with AES_NI:

~ LC_ALL=C cryptsetup benchmark
# Tests are approximate using memory only (no storage IO).
PBKDF2-sha1  1323959 iterations per second for 256-bit key
PBKDF2-sha256    1724631 iterations per second for 256-bit key
PBKDF2-sha512    1137284 iterations per second for 256-bit key
PBKDF2-ripemd160  706587 iterations per second for 256-bit key
PBKDF2-whirlpool  510007 iterations per second for 256-bit key
argon2i   7 iterations, 1048576 memory, 4 parallel threads (CPUs) for 
256-bit key (requested 2000 ms time)
argon2id  7 iterations, 1048576 memory, 4 parallel threads (CPUs) for 
256-bit key (requested 2000 ms time)
# Algorithm |   Key |  Encryption |  Decryption
aes-cbc        128b   679.8 MiB/s  2787.0 MiB/s
serpent-cbc    128b    91.4 MiB/s   582.1 MiB/s
twofish-cbc    128b   194.9 MiB/s   368.3 MiB/s
aes-cbc        256b   502.3 MiB/s  2155.4 MiB/s
serpent-cbc    256b    90.3 MiB/s   582.5 MiB/s
twofish-cbc    256b   194.0 MiB/s   368.6 MiB/s
aes-xts        256b  2470.8 MiB/s  2478.7 MiB/s
serpent-xts    256b   537.4 MiB/s   526.1 MiB/s
twofish-xts    256b   347.3 MiB/s   347.3 MiB/s
aes-xts        512b  1932.6 MiB/s  1958.0 MiB/s
serpent-xts    512b   532.9 MiB/s   522.9 MiB/s
twofish-xts    512b   348.4 MiB/s   348.9 MiB/s

The 6 Watts processor in my Surface Go yields:
aes-xts        512b  1122.2 MiB/s  1123.7 MiB/s

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

The severity of the itch is inversely proportional to the reach.


signature.asc
Description: PGP signature


Re: [gentoo-user] Getting output of a program running in background after a crash

2023-10-09 Thread Frank Steinmetzger
Am Sun, Oct 08, 2023 at 07:44:06PM -0500 schrieb Dale:

> Just as a update.  The file system I was trying to do a file system
> check on was my large one, about 40TBs worth.  While running the file
> system check, it started using HUGE amounts of memory.  It used almost
> all my 32GBs and most of swap as well.  It couldn't finish due to not
> enough memory, it literally crashed itself.  So, I don't know if this is
> because of some huge problem or what but if this is expected behavior,
> don't try to do a file system check on devices that large unless you
> have a LOT of memory. 

Or use a different filesystem. O:-)

> I ended up recreating the LVM devices from scratch and redoing the
> encryption as well.  I have backups tho.  This all started when using
> pvmove to replace a hard drive with a larger drive.  I guess pvmove
> isn't always safe.

I think that may be a far-fetched conclusion. If it weren’t safe, it 
wouldn’t be in the software – or at least not advertised as safe.

> P. S.  I currently have my backup system on my old Gigabyte 770T mobo
> and friends.  It is still a bit slower than copying when no encryption
> is used so I guess encryption does slow things down a bit.  That said,
> the CPU does hang around 50% most of the time.  htop doesn't show what
> is using that so it must be IO or encryption.

You can add more widgets (“meters”) to htop; one of them shows disk 
throughput. But there is none for I/O wait. One tool that does show it is 
glances, as does dstat, which I mentioned a few days ago. Not only can dstat 
tell you the total percentage, but also which process is the most expensive 
one.

I set up bash aliases for different use cases of dstat:
alias ,d='dstat --time --cpu --disk -D $(ls /dev/sd? /dev/nvme?n? /dev/mmcblk? 2>/dev/null | tr "\n" ,) --net --mem --swap'
alias ,dd='dstat --time --cpu --disk --disk-util -D $(ls /dev/sd? /dev/nvme?n? /dev/mmcblk? 2>/dev/null | tr "\n" ,) --mem-adv'
alias ,dm='dstat --time --cpu --disk -D $(ls /dev/sd? /dev/nvme?n? /dev/mmcblk? 2>/dev/null | tr "\n" ,) --net --mem-adv --swap'
alias ,dt='dstat --time --cpu --disk -D $(ls /dev/sd? /dev/nvme?n? /dev/mmcblk? 2>/dev/null | tr "\n" ,) --net --mem --swap --top-cpu --top-bio --top-io --top-mem'

Because I attach external storage once in a while, I use a dynamic list of 
devices to watch that is passed to the -D argument. If I don’t use -D, dstat 
will only show a total for all drives.

The first is a simple overview (d = dstat).

The second is the same but only for disk statistics (dd = dstat disks). I 
use it mostly on my NAS (five SATA drives in total, which creates a very 
wide table).

The third shows more memory details like dirty cache (dm = dstat memory), 
which is interesting when copying large files.

And the last one shows the top “pigs”, i.e. expensive processes in terms of 
CPU, IO and memory (dt = dstat top).

> Or something kernel
> related that htop doesn't show.  No idea. 

Perhaps my tool tips give you ideas. :)

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

What is the difference between two flutes? – A semitone.


signature.asc
Description: PGP signature


Re: [gentoo-user] Switching from desktop to desktop without function keys.

2023-10-02 Thread Frank Steinmetzger
Am Sun, Oct 01, 2023 at 06:44:09PM -0500 schrieb Dale:
> Frank Steinmetzger wrote:
> > Am Sat, Sep 30, 2023 at 10:47:31PM +0100 schrieb Peter Humphrey:
> >> On Saturday, 30 September 2023 20:59:04 BST Dale wrote:
> >>
> >>> Is there a way with the keyboard to switch to a desktop above 10?  Even
> >>> if it just switches up one at a time, that would work.  Say switch to 10
> >>> and then keep hitting a set of keys to go to 11, then 12, then 13 etc
> >>> etc.  Eventually, I get to the one I want. 
> >> You can set up a key combination to switch one desktop to the right and 
> >> another to switch one to the left. It's under Shortcuts, where you select 
> >> KWin, then scroll the right-hand panel down to Walk through Desktop List, 
> >> where I've put what used to be the default value: CTRL-ALT-right. Then 
> >> down 
> >> one to the next entry and enter CTRL-ALT-left. The defaults are left blank 
> >> nowadays.
> > Also note that a good while ago Plasma switched from using the Alt key to 
> > the Super key for everything Window-managerial. So these days, if you 
> > create 
> > a new user from scratch, it’s Ctrl+Super instead of Ctrl+Alt.
> >
> 
> 
> Looking at my keyboard, I have a key that looks like a Microsoft thing. 
> It looks like a window that is moving and I think is sometimes called a
> Microsoft key and might be called the meta key in Linux.

Well, you see a window on it. It is the Windows key. ;-)
The Linux world uses the more general term Super key. I think some also use 
Meta, but AFAIR there was also a Meta key on Solaris keyboards (labeled with 
a diamond shape).

> I have another key only on
> the right side between the Ctrl and the flying window key that looks
> like a document with a mouse pointer on it.

That’s the menu key. It calls up the context menu as if you clicked the 
right mouse button. It seems to have gone out of fashion with some 
manufacturers, especially on laptops and keyboards with a Fn key. Which is a 
bummer, because it forced me to assign a key combo in X11 to emulate it 
(Shift+Ctrl in my case). And this doesn’t work in Wayland.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

I used to avoid work, today I could just watch for hours.




Re: [gentoo-user] 6.1.53-gentoo-r1 kernel not booting

2023-10-01 Thread Frank Steinmetzger
Am Sat, Sep 30, 2023 at 11:25:46PM +0200 schrieb Håkon Alstadheim:
> 
> Den 30.09.2023 22:57, skrev Valmor F. de Almeida:
> > 
> > Hello,
> > 
> > For a while now (3 weeks or so) I have been upgrading the linux kernel
> > on a Dell XPS laptop starting from 6.1.41-gentoo (which is my current
> > working kernel) to 6.1.53-gentoo-r1. No kernel I have built since is
> > able to boot. I have been following the same method for many years: make
> > oldconfig, etc...
> > 
> > The booting error starts at:
> > 
> > [snip]
> > 
> > * INIT: Entering runlevel: 3
> > [snip]
> > * Starting cronie ...
> > * Starting DHCP Client Daemon ...
> > * Starting laptop_mode ...
> > * Mounting network filesystems ...
> > /etc/init.d/netmount: line 45 /lib/rc/bin/ewend: Input/output error
> > /lib/rc/sh/rc-cgroup.sh: line 184: rmdir: command not found
> > INIT:
> > INIT: cannot execute "/sbin/agetty"
> > INIT: cannot execute "/sbin/agetty"
> > INIT: cannot execute "/sbin/agetty"
> > 
> > 
> Can you show /etc/fstab and the console-log for the entire boot? Seems /sbin
> is not readable. You sure you have the kernel modules loaded? Are you using
> an initramfs? If so, does that build without errors ?

The input/output error – to me – indicates a hardware problem. When you 
mounted the FS by hand, can you read ewend? For instance with md5sum.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

“If wishes were horses we’d all be eating steak.” – Jayne, Firefly




Re: [gentoo-user] Switching from desktop to desktop without function keys.

2023-10-01 Thread Frank Steinmetzger
Am Sat, Sep 30, 2023 at 10:47:31PM +0100 schrieb Peter Humphrey:
> On Saturday, 30 September 2023 20:59:04 BST Dale wrote:
> 
> > Is there a way with the keyboard to switch to a desktop above 10?  Even
> > if it just switches up one at a time, that would work.  Say switch to 10
> > and then keep hitting a set of keys to go to 11, then 12, then 13 etc
> > etc.  Eventually, I get to the one I want. 
> 
> You can set up a key combination to switch one desktop to the right and 
> another to switch one to the left. It's under Shortcuts, where you select 
> KWin, then scroll the right-hand panel down to Walk through Desktop List, 
> where I've put what used to be the default value: CTRL-ALT-right. Then down 
> one to the next entry and enter CTRL-ALT-left. The defaults are left blank 
> nowadays.

Also note that a good while ago Plasma switched from using the Alt key to 
the Super key for everything Window-managerial. So these days, if you create 
a new user from scratch, it’s Ctrl+Super instead of Ctrl+Alt.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

Detours widen your local knowledge.




Re: [gentoo-user] Network throughput from main Gentoo rig to NAS box.

2023-09-24 Thread Frank Steinmetzger
Am Sat, Sep 23, 2023 at 05:54:21PM +0200 schrieb ralfconn:

> On 9/23/23 14:04, Dale wrote:
> > Howdy,
> > 
> > As most everyone knows, I redone my NAS box.  Before I had Truenas on it
> > but switched to Ubuntu server thingy called Jimmy.  Kinda like the
> > name.  lol  Anyway, Ubuntu has the same odd transfer pattern as the
> > Truenas box had.  I'm not sure if the problem is on the Gentoo end or
> > the Ubuntu end or something else.  I'm attaching a picture of Gkrellm so
> > you can see what I'm talking about.  It transfers a bit, then seems to
> > stop for some reason, then start up again and this repeats over and
> > over.  I'm expecting more of a consistent throughput instead of all the
> > idle time.  The final throughput is only around 29.32MB/s according to
> > info from rsync.  If it was not stopping all the time and passing data
> > through all the time, I think that would improve.  Might even double.
> > 
> > ...
> > Has anyone ever seen something like this and know why it is idle for so
> > much of the time?  Anyone know if this can be fixed so that it is more
> > consistent, and hopefully faster?
> > 
> I found a similar pattern when I checked some time ago, while transferring
> big (several Gb) files from one desktop to the other. I concluded the cause
> of the gaps was the destination PC's SATA spinning disk that needed to empty
> its cache before accepting more data. In theory the network is 1Gb/s
> (measured with iperf, it is really close to that) and the SATA is 6Gb/s so
> it should not be the limit, but I have strong doubts as how this speed is
> measured by the manufacturer.

Please be aware there is a difference between Gb and GB: one is gigabit, the 
other gigabyte. 1 Gb/s is theoretically 125 MB/s, and after deducting 
network overhead you get around 117 MB/s net bandwidth. Modern 3.5″ HDDs 
read more than 200 MB/s in their fastest areas, 2.5″ not so much. In their 
slowest region, that can go down to 50..70 MB/s.
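The back-of-the-envelope conversion can be written out; the ~6 % overhead figure is an assumption chosen to match the ~117 MB/s quoted above:

```shell
# Convert a 1 Gb/s link speed into usable MB/s (integer arithmetic).
gbit=1000000000                    # 1 gigabit per second, in bits
raw=$((gbit / 8 / 1000000))        # raw payload rate in MB/s
net=$((raw * 94 / 100))            # minus roughly 6 % protocol overhead
echo "$raw $net"
```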

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

A peach is like an apple covered by a carpet.




Re: [gentoo-user] Network throughput from main Gentoo rig to NAS box.

2023-09-24 Thread Frank Steinmetzger
Am Sat, Sep 23, 2023 at 02:30:32PM -0500 schrieb Dale:

> I read the other replies and I think it is caching the data, the drives
> writes and catches up and then it asks for more data again.

Tool tip: dstat

It puts out one line of values every x seconds (x == 1 by default).
With arguments you can tell it what to show. To see disks in action, I like 
to run the following during upgrades that involve voluminous packages:

dstat --time --cpu --disk -D <disks> --net --mem-adv --swap

The cpu column includes IO wait.

The disk columns show read and write volume. If you omit the -D option, you 
will only see a total over all disks, which might still be enough for your 
use case.

The --mem-adv option shows how much data is in the file system write cache 
(the plain --mem option does not).

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

Unburden your teacher; skip classes once in a while!




Re: [gentoo-user] Re: How to move ext4 partition

2023-09-20 Thread Frank Steinmetzger
Am Wed, Sep 20, 2023 at 10:57:00PM +0100 schrieb Victor Ivanov:

> On Wed, 20 Sept 2023 at 22:29, Grant Edwards  
> wrote:
> >
> > That depends on how long it takes me to decide on tar vs. rsync and
> > what the appropriate options are.
> 
> I've done this a number of times for various reasons over the last 1-2
> years, most recently a few months ago due to hard drive swap, and I
> find tar works just fine:
> 
> $ tar -cpf /path/to/backup.tar --xattrs --xattrs-include='*.*' -C / .

Does that stop at file system boundaries (because you tar up '/')? I think 
it must, otherwise you wouldn’t use it that way.
But when copying a root file system, out of habit I first bind-mount it in a 
subdirectory and tar/rsync from there instead. This will also make files 
visible which might be hidden under an active mount.

This is not necessary if you do it from a live system, but then you wouldn’t 
tar up / in the first place.
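Worth noting: GNU tar crosses mount points by default and only stops at filesystem boundaries when given --one-file-system. A minimal sketch with throwaway temporary paths, just to show the flags together:

```shell
# Archive a small source tree with extended attributes preserved.
# --one-file-system makes the boundary behaviour explicit instead of
# relying on how '.' happens to be traversed.
src=$(mktemp -d)
dst=$(mktemp -d)
echo hello > "$src/file.txt"
tar -cpf "$dst/backup.tar" --xattrs --xattrs-include='*.*' --one-file-system -C "$src" .
count=$(tar -tf "$dst/backup.tar" | grep -c 'file\.txt')
echo "$count"
```

Extraction then needs --xattrs as well, as mentioned below.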

> Likewise to extract, but make sure "--xattrs" is present
> 
> Provided backup space isn't an issue, I wouldn't bother with
> compression. It could be a lot quicker too depending on the size of
> your root partition.

Or not, depending on the speed of the backup device. ;-)
LZO compression (or zstd with a low setting) has negligible CPU cost, but 
can lower the file size quite nicely, especially with large binaries or debug 
files.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

Do you steel taglines, too?




Re: [gentoo-user] Re: Password questions, looking for opinions. cryptsetup question too.

2023-09-20 Thread Frank Steinmetzger
Am Wed, Sep 20, 2023 at 04:51:36PM -0400 schrieb Rich Freeman:

> > > Anyway, when I do that and use the new passwords successfully, I make a
> > > backup copy and on my rig, I can encrypt it with a right click.  I then
> > > shred the original.
> >
> > Just on a sidenote, once you’re on an SSD, shredding has no use and is
> > actually detrimental.
> >
> 
> I'm not sure I'd go quite that far, but it certainly isn't as effective.
> 
> No way to be certain how well it works, but it is certainly worth
> doing an ATA Secure Erase command on the drive.  A good SSD should
> implement that in a way that ensures all the data is actually
> unretrievable (probably by implementing full disk encryption and
> erasing the key).  Of course, there is no way to tell if the drive was
> implemented well.

Uhm, Dale was talking of a single file, not an entire disk. ;-)

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

do something for your image -- let yourself be seen with me.




Re: [gentoo-user] Re: Password questions, looking for opinions. cryptsetup question too.

2023-09-20 Thread Frank Steinmetzger
Am Tue, Sep 19, 2023 at 11:49:24PM -0500 schrieb Dale:

> Anyway, when I do that and use the new passwords successfully, I make a
> backup copy and on my rig, I can encrypt it with a right click.  I then
> shred the original.

Just on a sidenote, once you’re on an SSD, shredding has no use and is 
actually detrimental.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

On Mondays I feel like Robinson -- I’m waiting for Friday.




Re: [gentoo-user] Password questions, looking for opinions. cryptsetup question too.

2023-09-20 Thread Frank Steinmetzger
Am Wed, Sep 20, 2023 at 01:28:09PM +0100 schrieb Michael:

> > I have a question tho.  Can a person use a password/pass phrase that is
> > like this:  'This is a stupid pass phrase.'   Does it accept that even
> > with spaces? I know file names can have spaces for a long while now but
> > way back, you couldn't do that easily.  One had to use dashes or
> > underscores.

Sure, why not? It’s a string like any other. No spaces in filenames were a 
restriction of (now outdated) file systems. And I guess developers didn’t 
account for them back in those days (and later out of habit). When I used 
DOS, of course I adhered to the 8.3 rule. But ever since I started using 
Windows, XP at the latest (2001), I wholly started using spaces everywhere 
and never looked back. The programs that had problems with spaces were few, 
and script authors should just adhere to best practices and put filename 
variables in quotes so they can work with spaces. The only nuisance they 
pose for me is that they may make tab completion cumbersome sometimes.

PS.: I find underscores ugly. :D

> Generally speaking space characters are a poor choice for randomness.  I 
> recall seeing some documentary about the Enigma machine used by the German 
> military during the 2nd WW.  To minimise attempts to brute force the 
> ciphertext, they started by identifying which letter(s) were most frequently 
> used in the German language - e.g. the letter "e", then the second most 
> frequent letter and so on.  This statistical analysis approach in combination 
> with likely message content reduced the number of guesses.

Here you speak of the payload, not the passphrase, which is the encryption 
key. The key was rotated after each character, and the initial key setting 
(the tumbler position) was distributed in secret code books.

> In principle, a repeated space character in your passphrase could help 
> reduce the computational burden of an offline brute force attack, by e.g. 
> helping an attacker to identify the number of individual words in a 
> passphrase.

Due to the rotation, the Enigma encoded each subsequent letter differently, 
even if the same one repeated, which was (one of) the big strengths of the 
Enigma cipher. The flaws were elsewhere, for example that a character could 
never be encrypted onto itself due to the internal wiring and certain 
message parts were always the same, like message headers and greetings.

For LUKS, having spaces in your passphrase (or their frequency) has no 
influence on the ciphertext, since the passphrase itself is not used for 
encryption. The passphrase only unlocks the actual key, which is then used 
for encryption. It comes down to whether the passphrase can easily be 
guessed by dictionary attacks. So if you write normal sentences with 
correctly written words, they might be easy to crack. I don’t expect it 
makes a big difference to the brute force software whether you use spaces or 
not.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

Suicide is the most honest form of self-criticism.




Re: [gentoo-user] Computer case for new build

2023-09-19 Thread Frank Steinmetzger
Am Tue, Sep 19, 2023 at 11:01:48AM -0400 schrieb Rich Freeman:

> > > The higher-end motherboards have switches, and not all
> > > the lanes may be the highest supported generation, but I don't think
> > > any modern AMD motherboards have any kind of PCIe controller on them.
> >
> > Here are the I/O capabilities of the socket:
> > https://www.reddit.com/r/Amd/comments/bus60i/amd_x570_detailed_block_diagram_pcie_lanes_and_io/
> 
> So, that is AM4, not AM5

Yup. I kept on rambling about AM4, because that’s what I laid my eyes on 
(and so did Dale a few posts up).

> > A slight problem is that it is connected to the CPU by only 4.0×4. So tough
> > luck if you want to do parallel high-speed stuff with two PCIe×4 M.2 drives.
> 
> So, that block diagram is a bit weak.  If you look on the left side it
> clearly shows 20 PCIe lanes, and the GPU only needs 16.  So there are
> 8 lanes for the MB chipset to use.

No, the chipset downlink is always four lanes wide. PCIe 4.0 for most 
AM4 CPUs, but PCIe 3.0 for the monolithic APUs (because they don’t have 
4.0 at all, as their I/O die is different). The remaining four lanes are 
reserved for an NVMe slot.

> The 4 on the left aren't the same as the 4 on the right I think.

The diagram is indeed a bit confused in that part.

> Again, that is AM4 which I haven't looked into as much.  AM5 increases
> the v5 lanes and still has some v4 lanes.

AFAIR, PCIe 5 is only guaranteed for the NVMe slot. The rest is optional or 
subject to the chipset. As in the A series doesn’t have it, stuff like that. 
But it’s been a while since I read about that, so my memory is hazy.

> All the same desktop CPUs are a bit starved for lanes.

Hey we did get four more now with AM5 vs. AM4.

> I'm sure PCIe v5 switching is hard/expensive, but they definitely
> could mix things up however they want.  The reality is that most IO
> devices aren't going to be busy all the time, so you definitely could
> split 8 lanes up 64 ways, especially if you drop a generation or two
> along the way.

Unfortunately you can’t put low-speed connectors on a marketing sheet, when 
competitors have teh shizz.

> Server hardware definitely avoids many of the limitations, but it just
> tends to be super-expensive.

Which is funny because with the global cloud trend, you would think that its 
supply increases and prices go down.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

If you were born feet-first, then, for a short moment,
you wore your mother as a hat.




Re: [gentoo-user] Computer case for new build

2023-09-19 Thread Frank Steinmetzger
Am Tue, Sep 19, 2023 at 09:17:45AM -0400 schrieb Rich Freeman:

> > Well they allow you to put larger cards in, but they don’t have the lanes
> > for it. Somewhere else in the thread was mentioned that the number of lanes
> > is very limited. Only the main slot (the big one for the GPU) is directly
> > connected to the CPU. The rest is hooked up to the chipset which itself is
> > connected to the CPU either via PCIe×4 (AMD) or whatchacallit (DMI?) for
> > Intel.
> 
> So, on most AMD boards these days all the PCIe lanes are wired to the
> CPU I believe.

Not all. Only the main slot. The rest is routed through the chipset. I’m 
only speaking of expansion slots here. But for NVMe it is similar: the 
primary one is attached to the CPU, any other is connected via the chipset. 
This is for AM4. AM5 provides two NVMes.

> The higher-end motherboards have switches, and not all
> the lanes may be the highest supported generation, but I don't think
> any modern AMD motherboards have any kind of PCIe controller on them.

Here are the I/O capabilities of the socket:
https://www.reddit.com/r/Amd/comments/bus60i/amd_x570_detailed_block_diagram_pcie_lanes_and_io/
A slight problem is that it is connected to the CPU by only 4.0×4. So tough 
luck if you want to do parallel high-speed stuff with two PCIe×4 M.2 drives.

> Basically memory, USB, and PCIe are all getting so fast that trying to
> implement a whole bunch of separate controller chips just doesn't make
> sense.

However, the CPU has a limited number of them, hence there are more in the 
chipset. Most notably SATA.

> > Look for yourself and filter what you need, like 1 or 2 HDMI, DP and PCIe:
> > AM4: https://skinflint.co.uk/?cat=mbam4&xf=18869_4%7E4400_ATX
> > AM5: https://skinflint.co.uk/?cat=mbam5&xf=18869_4%7E4400_ATX
> > Interestingly: the filter goes up to 6 PCIe slots for the former, but only 
> > to
> > 4 for the latter.
> 
> You can definitely get more PCIe slots on AM5, but the trend is to
> have less in general.

Those look really weird. “Huge” ATX boards, but all covered up with fancy 
gamer-style plastic lids and only two slots poking out.

> Look at the X670 chipset boards as those tend to have PCIe switches which 
> give them more lanes.  The switched interfaces will generally not support 
> PCIe v5.

The X series are two “B-chipset chips” daisychained together to double the 
downstream connections. Meaning one sits behind the other from the POV of 
the CPU and they share their uplink.

Here are some nice block diagrams of the different AM5 chipset families:
https://www.hwcooling.net/en/amd-am5-platform-b650-x670-x670e-chipsets-and-how-they-differ/

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

Greet every douche, for he may be your superior tomorrow.




Re: [gentoo-user] Computer case for new build

2023-09-19 Thread Frank Steinmetzger
Am Tue, Sep 19, 2023 at 04:43:02AM -0500 schrieb Dale:
> Wols Lists wrote:

> > Oh, and to the best of my knowledge, you can combine a video card and
> > an AGPU.

BTW: it’s APU, without the G. Because it is an Accelerated Processing Unit 
(i.e. a processor), not a GPU.

> I been on Newegg using their rig builder feature.  Just to get rough
> ideas, I picked a AMD Ryzen 9 5900X 12-Core 3.7 GHz Socket AM4.  Yea, I
> did a copy and paste.  lol  It's a bit pricey but compared to my current
> rig, I think it will run circles around it.  My current rig has a AMD FX
> -8350 Eight-Core Processor running at 4GHz or so.  You think I'll see
> some speed improvement or am I on the wrong track?

Twice the single-thread performance and 7 times multi-core:
https://www.cpubenchmark.net/cpu.php?cpu=AMD+FX-8350+Eight-Core&id=1780
https://www.cpubenchmark.net/cpu.php?cpu=AMD+Ryzen+9+5900X&id=3870
Naturally at lower power consumption as well.

> My problem is the mobo.  I need a few PCIe slots.  Most just don't have
> enough.  Most have a slot for a video card.  Then maybe 2 other slightly
> slower ones and maybe one slow one.  I can't recall what the names are
> at the moment. I know the length of the connector tends to tell what
> speed it is, tho some cheat and put long connectors but most of the
> faster pins aren't used.  That confuses things.

Well they allow you to put larger cards in, but they don’t have the lanes 
for it. Somewhere else in the thread was mentioned that the number of lanes 
is very limited. Only the main slot (the big one for the GPU) is directly 
connected to the CPU. The rest is hooked up to the chipset which itself is 
connected to the CPU either via PCIe×4 (AMD) or whatchacallit (DMI?) for 
Intel.

> Anyway, mobo, which I
> will likely change, CPU and memory is already adding up to about $600. 
> I don't need much of a video card tho.  The built in thing may be
> enough, as long as I can connect my monitor and TV.

The 5900X has no built-in. For the Ryzen 5000 series, only those with -G 
have graphics. The 7000 ones all have a basic GPU (except maybe for some with 
another suffix).

> If someone knows of a good mobo, Gigabyte, ASUS preferred, that has
> several PCIe slots, I'd like to know the model so I can check into it. 
> It's doesn't have to be the latest thing either.  I tend to drop down
> several notches from the top to save money.  I still end up with a
> pretty nice rig and save some money.

Look for yourself and filter what you need, like 1 or 2 HDMI, DP and PCIe:
AM4: https://skinflint.co.uk/?cat=mbam4&xf=18869_4%7E4400_ATX
AM5: https://skinflint.co.uk/?cat=mbam5&xf=18869_4%7E4400_ATX
Interestingly: the filter goes up to 6 PCIe slots for the former, but only to 
4 for the latter.

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

Wires are either too short, not available or don’t work.




Re: [gentoo-user] Computer case for new build

2023-09-19 Thread Frank Steinmetzger
Am Tue, Sep 19, 2023 at 01:01:42AM -0500 schrieb Dale:

> They have added a lot of stuff to mobos since I bought one about a
> decade ago.  Maybe things have improved.  I just like PCIe slots and
> cards.  Gives me more options.

I definitely know the feeling. That is why I went with µATX instead of ITX 
nine years ago. I thought “now that I have a beefy machine, I could get a 
sound card and start music production” and stuff like that. It never 
happened. Aside from an entry-level GPU for some gaming (which broke two 
years ago, so I am back on Intel since then) I never used any of my slots. 
But in the end, they are — as you say yourself — options, not necessities.

> Given how things have changed tho, I may
> have to give in on some things.  I just like my mobos to be like Linux. 
> Have something do one thing and do it well.  When needed, change that
> thing.  ;-) 

Over the past years, boards tend to do less and less by themselves. It’s all 
been migrated into the CPU; voltage regulation, basic graphics, memory 
controller, lots of I/O. The chipset (at least in AMD land, I’ve been out of 
touch with Intel for a while now) basically determines the amount of 
*additional* I/O. The Deskmini X300 mini-PC that I mentioned earlier 
actually has no chipset on its board, everything is done by the CPU.

What irks me is again market segmentation. Even though Ryzen CPUs have the 
capability of 10 Gbps USB 3.1 Gen 2 built-in, the low-end boards do not 
route it out, not even one port.

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

The only thing that makes some people bearable is their absence.




Re: [gentoo-user] Computer case for new build

2023-09-18 Thread Frank Steinmetzger
Am Mon, Sep 18, 2023 at 06:40:52PM -0500 schrieb Dale:

> >> I tend to need quite a few PCIe slots.  I like to have my own video
> >> card.  I never liked the built in ones.
> > You’re just asking to be asked. ;-) Why don’t you like them? (I fear I may 
> > have asked that before).
> >
> > I get it when you wanna do it your way because it always worked™ (which is 
> > not wrong — don’t misunderstand me) and perhaps you had some bad experience 
> > in the past. OTOH it’s a pricey component usually only needed by gamers and 
> > number crunchers. On-board graphics are just fine for Desktop and even 
> > (very) light gaming and they lower power draw considerably. Give it a 
> > swirl, 
> > maybe you like it. :) Both Intel and AMD work just fine with the kernel 
> > drivers.
> 
> Well, for one, I usually upgrade the video card several times before I
> upgrade the mobo.  When it is built in, not a option.  I think I'm on my
> third in this rig.
>
> I also need multiple outputs, two at least.

That is not a problem with iGPUs. The only thing to consider is the type of 
video connectors on the board. Most have two classical ones, some three, 
divided among HDMI and DP. And the fancy ones use USB-C with DisplayPort 
alternative mode. Also, dGPUs draw a lot more when using two displays.

> One for
> monitor and one for TV.  My little NAS box I'm currently using is a Dell
> something.  The video works but it has no GUI.  At times during the boot
> up process, things don't scroll up the screen.  I may be missing a
> setting somewhere but when it blanks out, it comes back with a different
> resolution and font size.

In case you use Grub, it has an option to keep the UEFI video mode.
So there would be no switching if UEFI already starts with the proper 
resolution.

> My Gentoo box doesn't do that.  I can see the screen from BIOS all the
> way to when it finishes booting and the GUI comes up.  I'm one of those
> who watches.  ;-)

Yeah, and it’s neat if there is no flickering or blanking. So modern and 
clean.

> >> Figure the case is a
> >> good place to start.  Mobo, CPU and such next.  Figure mobo will pick
> >> memory for me since usually only one or two will work anyway. 
> > One or two what?
> 
> One or two types of memory.  Usually, plain or ECC.  Mobos usually are
> usually pretty picky on their memory. 

Hm… while I haven’t used that many different components in my life, so far 
I have not had a system not accept any RAM. Just stick to the big names, I 
guess.

> >> Since no one mentioned a better case, that Define thing may end up being
> >> it.  That Gamemax is cheaper but a lot less drive capacity.  Heck, when
> >> I bought my current case, which has space for five 3.5" and six 5 1/4"
> >> drives, I thought I'd never fill up just the 3.5" ones.  Now, the 3.5"
> >> ones have been full for a while and the 5 1/4" are about full too.
> > Full with ODDs? Or drive cages? You can get 3×3.5″ cages which install into 
^

That should have been 5×3.5″. Too many threes and fives floatin’ around in 
my head and it’s getting late.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

The majority of people have an above-average number of legs.




Re: [gentoo-user] Computer case for new build

2023-09-18 Thread Frank Steinmetzger
Am Mon, Sep 18, 2023 at 02:20:56PM -0500 schrieb Dale:

> >> […]
> >> The downside, only micro ATX and
> >> mini ITX mobo.  This is a serious down vote here.
> > Why is that bad? µATX comes with up to four PCIe slots. Even for ten 
> > drives, 
> > you only need one SATA expander (with four or six on-board). Perhaps a fast 
> > network card if one is needed, that makes two slots. You don’t get more RAM 
> > slots with ATX, either. And, if not anything else, a smaller board means 
> > (or can mean) lower power consumption and thus less heat.
> >
> > Speaking of RAM; might I interest you in server-grade hardware? The reason 
> > being that you can then use ECC memory, which is a nice perk for storage.¹ 
> > Also, the chance is higher to get sufficient SATA connectors on-board 
> > (maybe 
> > in the form of an SFF connector, which is actually good, since it means 
> > reduced “cable salad”).
> > AFAIK if you have a Ryzen PRO, then you can also use a consumer-grade 
> > board, 
> > because they too support ECC. And DDR5 has basic (meaning 1 bit and 
> > transparent to the OS) ECC built-in from the start.
> 
> I tend to need quite a few PCIe slots.  I like to have my own video
> card.  I never liked the built in ones.

You’re just asking to be asked. ;-) Why don’t you like them? (I fear I may 
have asked that before).

I get it when you wanna do it your way because it always worked™ (which is 
not wrong — don’t misunderstand me) and perhaps you had some bad experience 
in the past. OTOH it’s a pricey component usually only needed by gamers and 
number crunchers. On-board graphics are just fine for Desktop and even 
(very) light gaming and they lower power draw considerably. Give it a swirl, 
maybe you like it. :) Both Intel and AMD work just fine with the kernel 
drivers.

> I also have never had a good built in network port to work right either.  
> Every one of them always had problems if they worked at all.

I faintly remember a thread about that from long ago. But the same thought 
applies: in case you buy a new board, give it a try. Keep away from Intel 
I225-V though, that 2.5 GbE chip has a design flaw but manufacturers still 
use it.

> I also need PCIe slots for SATA expander cards.

That’s the use case I mostly thought of. Irritatingly, I just looked at my 
price comparison site for SATA expansion cards and all 8×SATA cards are PCIe 
2.0 with either two or even just one lane. -_- So not even PCIe 3.0×1, which 
is the same speed as 2.0×2 but would fit in a ×1 slot which many boards 
have in abundance.

2.0×2 is about 1 GB/s. Divided by 8 drives gives you 125 MB/s/drive.
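The arithmetic behind that figure, spelled out (PCIe 2.0 signals at 5 GT/s per lane, which after 8b/10b encoding leaves roughly 500 MB/s of usable bandwidth per lane):

```shell
# Per-drive bandwidth share on a PCIe 2.0 x2 card feeding 8 SATA drives.
per_lane=500                  # MB/s usable per PCIe 2.0 lane (after 8b/10b)
lanes=2
drives=8
total=$((per_lane * lanes))   # roughly 1 GB/s for the whole card
echo "$total $((total / drives))"
```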

> If I use
> the Define case, I'd like to spread that across at least two cards,
> maybe three.  So, network, video and at least a couple SATA cards,
> adding up fast.  Sometimes, I wouldn't mind having the larger ATX with
> extra PCIe slots.  Thought about having SAS cards and cables that
> convert to SATA.  I think they do that.  That may make it just one
> card.  I dunno.  I haven't dug deep into that yet.

After the disappointment with the SATA expanders I looked at SAS cards.
They are well connected on the PCIe side (2.0×8 or 3.0×8) and they are 
compatible with SATA drives. I found an Intel SAS card with four SFF 
connectors (meaning 16 drives!) for a little over 100 €. It’s called 
RMSP3JD160J. I don’t know why it is so cheap, though. Because the 
second-cheapest competitor is already at 190 €.

> Figure the case is a
> good place to start.  Mobo, CPU and such next.  Figure mobo will pick
> memory for me since usually only one or two will work anyway. 

One or two what?

> > I was going to upgrade my 9 years old Haswell system at some point to a new 
> > Ryzen build. Have been looking around for parts and configs for perhaps two 
> > years now but I can’t decide (perhaps some remember previous ramblings 
> > about 
> > that). Now I actually consider buying a tiny Deskmini X300 after I found out 
> > that it does support ACPI S3, but only with a specific UEFI version. No 
> > 10-gig USB and only 1-gig ethernet though. But it’s cute and small. :)
> 
> I thought about using a Raspberry Pi for a NAS box.  Just build more
> than one of them.  Thing is, finding the parts for it is almost
> impossible right now.  They kinda went away a couple years ago when
> things got crazy. 

I was talking main PC use case, not NAS. :)
The minimalist form factor doesn’t really impede me. I don’t have any HDDs 
in my PC anymore (too noisy), so why keep space for them. And while I do like 
to game a little bit, I find a full GPU too expensive and hungry, because it 
will be bored most of the time.

The rest can be done with USB, which is the only thing a compact case often 
lacks in numbers.

> Since no one mentioned a better case, that Define thing may end up being
> it.  That Gamemax is cheaper but a lot less drive capacity.  Heck, when
> I bought my current case, which has space for five 3.5" and six 5 1/4"
> drives, I thought I'd never fill up just

Re: [gentoo-user] Computer case for new build

2023-09-18 Thread Frank Steinmetzger
Am Mon, Sep 18, 2023 at 02:59:22PM -0400 schrieb Rich Freeman:

> > I have a four-bay NAS with server board (ASRock Rack E3C224D2I), actually my
> > last surviving Gentoo system. ;-) With IPMI-Chip (which alone takes several
> > watts), 16 GiB DDR3-ECC, an i3-4170 and 4×6 TB, it draws around 33..35 W
> > from the plug at idle — that is after I enabled all powersaving items in
> > powertop. Without them, it is around 10 W more. It has two gigabit ports
> > (plus IPMI port) and a 300 W 80+ gold PSU.
> 
> That's an ITX system though, and a very old one at that.

Well, you asked for entry-point server hardware with low idle consumption. 
;-)

I built it in November 2016. Even then it was old componentry, but I wanted 
to save €€€ and it was enough for my needs. I installed a Celeron G1840 for 
33 € because I thought it would be enough. I tested its AES performance 
beforehand (because it didn’t have AES-NI) and with 155 MB/s it was enough 
to saturate GbE. But since I ran ZFS on LUKS at the time (still do, until I 
change the setup for more capacity), I ran into a bottleneck during scrubs. 
So after a year, I paid over 100 € for the i3 which I should have bought 
from the get-go. :-/

> Not sure how
> useful more PCIe lanes are in a form factor like that.

Modern boards might come with NVMe slots that can be re-purposed for 
external cards.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

The more cheese, the more holes.  The more holes, the less cheese.
Ergo: the more cheese, the less cheese!


signature.asc
Description: PGP signature


Re: [gentoo-user] Computer case for new build

2023-09-18 Thread Frank Steinmetzger
Am Mon, Sep 18, 2023 at 07:16:17AM -0400 schrieb Rich Freeman:

> On Mon, Sep 18, 2023 at 6:13 AM Frank Steinmetzger  wrote:
> >
> > Am Mon, Sep 18, 2023 at 12:17:20AM -0500 schrieb Dale:
> > > […]
> > > The downside, only micro ATX and
> > > mini ITX mobo.  This is a serious down vote here.
> >
> > Why is that bad? µATX comes with up to four PCIe slots. Even for ten drives,
> > you only need one SATA expander (with four or six on-board). Perhaps a fast
> > network card if one is needed, that makes two slots.
> 
> Tend to agree.  The other factor here is that desktop-oriented CPUs
> tend to not have a large number of PCIe lanes free for expansion
> slots, especially if you want 1-2 NVMe slots.  (You also have to watch
> out as the lanes for those can be shared with some of the expansion
> slots so you can't use both.)
> 
> If you want to consider a 10GbE+ card I'd definitely get something
> with integrated graphics,

That is a recommendation in any case. If you are a gamer, you have a 
fallback in case the GPU kicks the bucket. And if not, your power bill goes 
way down.

> because a NIC is going to need a 4-8x port
> most likely 

Really? PCIe 3.0 has 1 GB/s/lane, that is 8 Gbps/lane, so almost as much as 
10 GbE. OTOH, 10 GbE is a major power sink. Granted, 1 GbE is not much when 
you’re dealing with numerous TB. And then there is network over thunderbolt, 
of which I only recently learned. But this is probably very restricted in 
length. Which will also be the case for 10 GbE, so probably no options for 
the outhouse. :D

> > Speaking of RAM; might I interest you in server-grade hardware? The reason
> > being that you can then use ECC memory, which is a nice perk for storage.
> 
> That and way more PCIe lanes.  That said, it seems super-expensive,
> both in terms of dollars, and power use.  Is there any entry point
> into server-grade hardware that is reasonably priced, and which can
> idle at something reasonable (certainly under 50W)?

I have a four-bay NAS with server board (ASRock Rack E3C224D2I), actually my 
last surviving Gentoo system. ;-) With IPMI-Chip (which alone takes several 
watts), 16 GiB DDR3-ECC, an i3-4170 and 4×6 TB, it draws around 33..35 W 
from the plug at idle — that is after I enabled all powersaving items in 
powertop. Without them, it is around 10 W more. It has two gigabit ports 
(plus IPMI port) and a 300 W 80+ gold PSU.

> > I was going to upgrade my 9 years old Haswell system at some point to a new
> > Ryzen build. Have been looking around for parts and configs for perhaps two
> > years now but I can’t decide (perhaps some remember previous ramblings about
> > that).
> 
> The latest zen generation is VERY nice, but also pretty darn
> expensive.  Going back to zen3 might get you more for the money,
> depending on how big you're scaling up.

I’ve been looking at Zen 3 the whole time, namely the 5700G APU. 5 times the 
performance of my i5, for less power, and good graphics performance for the 
occasional game. I’m a bit paranoid re. Zen 4’s inclusion of Microsoft 
Pluton (“Chip-to-Cloud security”), and Zen 4 in general has higher idle 
consumption. But now that Phoenix, the Zen 4 successor to the 5700G, is 
about to become available, I am again hesitant to pull the trigger, waiting 
for the price tag.

> A big part of the cost of
> zen4 is the motherboard, so if you're building something very high end
> where the CPU+RAM dominates, then zen4 may be a better buy.

I’m fine with middle-class. In fact I always thought i7s to be overpriced 
compared to i5s. The plus in performance of top-tier parts is usually bought 
with disproportionately high power consumption (meaning heat and noise).

> If you just want a low-core system then you're paying a lot just to get
> started.

I want to get the best bang within my constraints, meaning the 5700G (
8 cores). The 5600G (6 cores) is much cheaper, but I want to get the best 
graphics I can get in an APU. And I am always irked by having 6 cores (12 
threads), because it’s not a power of 2, so percentages in load graphs will 
look skewed. :D

> The advantage of
> distributed filesystems is that you can build them out of a bunch of
> cheap boxes […]
> When you start getting up to a dozen drives the cost of getting them
> to all work on a single host starts going up.  You need big cases,
> expansion cards, etc.  Then when something breaks you need to find a
> replacement quickly from a limited pool of options.  If I lose a node
> on my Rook cluster I can just go to newegg and look at $150 used SFF
> PCs, then install the OS and join the cluster and edit a few lines of
> YAML and the disks are getting formatted...

For a simple media storage, I personally would find this too cumbersome to 
manage. Especially if yo

Re: [gentoo-user] Computer case for new build

2023-09-18 Thread Frank Steinmetzger
Am Mon, Sep 18, 2023 at 12:17:20AM -0500 schrieb Dale:
> Howdy,
> […]
> I've found a few cases that peak my interest depending on which way I go
> with this.  One I found that has a lot of hard drive space and would
> make a descent NAS box, the Fractal Design Node 804.  It's a cube shaped
> thing but can hold a LOT of spinning rust. 10 drives plus I think space
> for a SSD for the OS as well.

These days you can always put your OS on an NVMe; faster access and two 
fewer cables in the case (or one more slot for a data drive).

> […]
> The downside, only micro ATX and
> mini ITX mobo.  This is a serious down vote here.

Why is that bad? µATX comes with up to four PCIe slots. Even for ten drives, 
you only need one SATA expander (with four or six on-board). Perhaps a fast 
network card if one is needed, that makes two slots. You don’t get more RAM 
slots with ATX, either. And, if not anything else, a smaller board means 
(or can mean) lower power consumption and thus less heat.

Speaking of RAM; might I interest you in server-grade hardware? The reason 
being that you can then use ECC memory, which is a nice perk for storage.¹ 
Also, the chance is higher to get sufficient SATA connectors on-board (maybe 
in the form of an SFF connector, which is actually good, since it means 
reduced “cable salad”).
AFAIK if you have a Ryzen PRO, then you can also use a consumer-grade board, 
because they too support ECC. And DDR5 has basic (meaning 1 bit and 
transparent to the OS) ECC built-in from the start.

> I was hoping to turn
> my current rig into a NAS.  The mobo and such parts.  This won't be a
> option with this case.  Otherwise, it gives ideas on what I'm looking
> for.  And not.  ;-)

I was going to upgrade my 9 years old Haswell system at some point to a new 
Ryzen build. Have been looking around for parts and configs for perhaps two 
years now but I can’t decide (perhaps some remember previous ramblings about 
that). Now I actually consider buying a tiny Deskmini X300 after I found out 
that it does support ACPI S3, but only with a specific UEFI version. No 
10-gig USB and only 1-gig ethernet though. But it’s cute and small. :)

> Another find.  The Fractal Design Define 7 XL.  This is more of a tower
> type shape like my current rig.  I think I read with extra trays, it can
> hold up to 18 drives.  One could have a fancy RAID setup and still have
> huge storage space with that.  I think it also has SSD spots for drives
> that could hold the OS itself.  This one is quite pricey tho.

With so many drives, you should also include a pricey power supply. And/or a 
server board which supports staggered spin-up. Also, drives of the home NAS 
category (and consumer drives anyways) are only certified for operation in 
groups of up to 8-ish. Anything above and you sail in grey warranty waters. 
Higher-tier drives are specced for the vibrations of so many drives (at 
least I hope, because that’s what they™ tell us).

> To be honest, I kinda like the Fractal Design Define 7
> XL right now despite the higher cost.  I could make a NAS/backup box
> with it and I doubt I'd run out of drive space even if I started using
> RAID and mirrored everything, at a minimum.

With 12 drives, I would go for parity RAID with two parity drives per six 
drives, not for a mirror. That way you get 2/3 storage efficiency vs. 1/2 
and more robustness; in parity, any two drives may fail, but in a cluster of 
mirrors, only specific drives may fail (not two of the same mirror). If the 
drives are huge, nine drives with three parity drives may be even better 
(because rebuilds get scarier the bigger the drives get).

> 9 pairs of say 18TB drives
> would give around 145TBs of storage with a file system on it.

If you mirrored them all, you’d get 147 TiB. But as I said, use nine drives 
with a 3-drive parity and you get 98 TiB per group. With two groups 
(totalling 18 drives), you get 196 TiB. Wh!
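
The conversion behind those numbers can be checked in the shell (a drive's 
marketing TB is 10^12 bytes, a TiB is 2^40 bytes):

```shell
# Convert marketing terabytes to TiB; shell integer arithmetic truncates
tb_to_tib() { echo $(( $1 * 1000000000000 / (1 << 40) )); }

tb_to_tib $(( 9 * 18 ))    # 9 mirrored pairs of 18 TB drives  -> 147
tb_to_tib $(( 6 * 18 ))    # one 9-drive group with 3 parity   -> 98
tb_to_tib $(( 12 * 18 ))   # two such groups (18 drives total) -> 196
```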


¹ There was once a time when ECC was supported by all boards and CPUs. But 
then someone invented market segmentation to increase profits through 
upselling.

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

Skype jokes are oftentimes not understood, even when they’re repeated.


signature.asc
Description: PGP signature


Re: [gentoo-user] Certain packages refuse to use binary from build save, the -k thing.

2023-09-18 Thread Frank Steinmetzger
Am Fri, Sep 15, 2023 at 11:44:03PM -0500 schrieb Dale:
> Howdy,

Hi

instead of going berserk mode and wasting kWh on rebuilding “just in case it 
might help”, why not try and dig a little deeper.

> A couple of my video players are not playing videos correctly.

Which players?

> I've
> rebuilt a few things but no change.  Some work fine, some give a error
> about a bad index or something, some just don't even try at all.

What is the exact error message?
Can you remux the file with ffmpeg?
`ffmpeg -i Inputfile.ext -c copy -map 0 Outputfile.ext`
This will take all streams of the input file and put them into a new file 
without re-encoding. Does ffmpeg give any errors about bad data? Can you 
play the produced file?

> The videos come from various sources and are of different file extensions. 

Can we have a look at some of those videos? The least I can try is to see 
whether they work here or show any sign of corruption.

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

Polymorphism is a multiform topic.  (SelfHTML forum)


signature.asc
Description: PGP signature


Re: [gentoo-user] long compiles

2023-09-13 Thread Frank Steinmetzger
Am Wed, Sep 13, 2023 at 12:50:20PM +0100 schrieb Wols Lists:

> Bear in mind a lot of systems are thermally limited and can't run at full
> pelt anyway ...

Usually those are space-constrained systems like mini PCs or laptops. 
Typical Desktops shouldn’t be limited; even the stock CPU coolers should be 
capable of dissipating all heat, as long as the case has enough air flow.

> You might find it's actually better (and more efficient) to run at lower
> loading. Certainly following the kernel lists you get the impression that
> the CPU regularly goes into thermal throttling under heavy load, and also
> that using a couple of cores lightly is more efficient than using one core
> heavily.

Current CPUs especially tend to boost to such high clock speeds that they 
become quite inefficient. If you set a 105 W Ryzen 7700X to 65 W eco mode in 
the BIOS (which means that the actual maximum intake goes down from 144 W to 
84 W), you reduce consumption by a third, but only lose ~15 % in performance.

At very low figures (15 W), Ryzen 5000 and 7000 CPUs are almost as efficient 
as Apple M1.

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

Einstein is dead.  Newton is dead.  I’m feeling sick, too.


signature.asc
Description: PGP signature


Re: [gentoo-user] TrueNAS not helping me now.

2023-09-06 Thread Frank Steinmetzger
Am Wed, Sep 06, 2023 at 02:45:11PM -0500 schrieb Dale:

> Oh, creating a
> vdev was the trick.  Once that is done, expand the pool.  It's one of
> those, once it is done, it seems easy.  ROFL

Note that people used to shoot themselves in the foot when lazily (or by 
accident) adding a single disk to an existing pool. If that pool was 
composed of RAID vdevs, then now they had a non-redundant single disk in 
that pool and it was not possible to remove a vdev from a pool! That 
single-disk vdev could only be converted to a mirror to at least get 
redundancy back.

The only proper solution was to destroy the pool and start from scratch. By 
now there is a partial remedy, in that it is possible to remove mirror vdevs 
from a pool, but not RAIDZ vdevs:
https://forum.level1techs.com/t/solved-how-to-remove-vdev-from-zpool/192044/5
https://arstechnica.com/civis/threads/performance-when-removing-zfs-vdevs-with-zpool-remove.1481148/post-40491873
And you get some left-over metadata about the removed vdev.

> I guess vdev is like LVMs pv, physical volume I think it is.

Haven’t we had this topic before? At least twice? Including the comparison 
between 
the three layers of LVM with their equivalent in ZFS land. ;-)

ZFS is more meant for static setups, not constantly changing disk loadouts 
of varying disk sizes.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

The boss is a human just like everyone else, he just doesn’t know.


signature.asc
Description: PGP signature


Re: [gentoo-user] Email clients

2023-07-30 Thread Frank Steinmetzger
Am Sat, Jul 29, 2023 at 01:53:21AM -0400 schrieb Philip Webb:
> 230729 Peter Humphrey wrote:
> > I've been a loyal user of KMail for many years.
> > Claws mail is often mentioned hereabouts and I'd like to try it,
> > but first I'd need to export KMail's 20-odd-year maildir history
> > to mbox format.
> 
> I recommend a look at Mutt, which I've used very happily since  c 1998 ,
> well before Gentoo existed.  I've also always used Mbox, not Maildir.
> Powerful, configurable, but also simple : the UNIX approach.

When I had kmail issues back in the day of early akonadi times (remember 
Alan’s thread about data loss from then?), I tried out mutt and I’ve been 
using it ever since. I configured it to my liking re. list layout, sidebar, 
shortcuts, editing and so on.

I still use KMail these days, quite often too. But it has a few drawbacks 
and annoying little bugs that I encounter regularly, which is one reason for 
staying with mutt. Another is that mutt is much much faster when dealing 
with big directories such as lists. Still, there is no better graphical 
alternative in KDE land. Thunderbird & Co don’t fit in optically, Trojita is 
too limited.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

When things went bonkers for the captain, he had the entire ship jettisoned.


signature.asc
Description: PGP signature


Re: [gentoo-user] net-misc/x2goserver-4.1.0.3-r2

2023-07-16 Thread Frank Steinmetzger
Am Sat, Jul 15, 2023 at 08:50:47AM -0600 schrieb the...@sys-concept.com:
> x2goserver emerged with notice:
> 
> ERROR: preinst
> 
> Installation of a symlink is blocked by a directory:
>   '/etc/x2go/xinitrc.d'
> This symlink will be merged with a different name:
>   '/etc/x2go/xinitrc.d.backup.'
> 
> What does it mean?

Probably that it wants to install a symlink called /etc/x2go/xinitrc.d, but 
you already have a directory at this location. And now you have this symlink 
with the stated different name.

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

“If you enjoy wasting time, is that time really wasted?” – Philosoraptor


signature.asc
Description: PGP signature


Re: [gentoo-user] Instrumenting emerges

2023-07-13 Thread Frank Steinmetzger
Am Sat, Jul 08, 2023 at 12:42:13PM -0300 schrieb David M. Fellows:

> while [ true ] ; do cat /proc/loadavg |logger; sleep 60; done

A tad more elegant (`[ true ]` is always true anyway, as would be `[ false ]`, 
because the test only checks for a non-empty string):

while sleep 60; do ... ; done
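
Spelled out with the original payload (sketched with a short interval and a 
counter so the demo terminates; the real loop would use `sleep 60` and no 
counter):

```shell
# sleep is both the delay and the loop condition, so if sleep fails or is
# interrupted, the loop ends cleanly
count=0
while sleep 0.2; do
    cat /proc/loadavg | logger 2>/dev/null || true  # ignore if syslog is absent
    count=$(( count + 1 ))
    if [ "$count" -ge 2 ]; then break; fi
done
echo "ran $count iterations"
```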

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

Taglines are like cars - You get a good one, then someone nicks it.


signature.asc
Description: PGP signature


Re: [gentoo-user] Update requires restarting init process

2023-07-02 Thread Frank Steinmetzger
Am Sun, Jul 02, 2023 at 11:51:52AM -0500 schrieb Dale:
> Howdy,
> 
> Just a heads up.  I almost missed this in all the other normal messages
> emerge spits out.  I think this was because of a glibc update.  Anyway,
> if you update your system, look for a message about restarting the init
> process to avoid problems.  Rebooting is one way to do this of course
> BUT if you can't or like me, don't like, to reboot, you can login as
> root and run this:  telinit u   Note, there is no dash in front of the
> u.  It doesn't work with a dash there either.  To make sure all
> processes restarted, I went to boot runlevel, then single and then
> reversed that to get back to default runlevel. 

Out of curiosity: going to boot and single runlevel requires you to log out 
and basically close all running programs anyway, right? So why not simply 
reboot then (unless you have something lying around in ramdisks)? Do you not 
like the uptime resetting? :D

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

If your laundry just wouldn’t become white,
may be that’s because it is coloured.


signature.asc
Description: PGP signature


Re: [gentoo-user] Invalid atom in /etc/portage/package.unmask: =app-text/evince-44.1 ~amd64

2023-05-02 Thread Frank Steinmetzger
Am Tue, May 02, 2023 at 01:37:50PM -0600 schrieb the...@sys-concept.com:
> Trying to emerge evince-44.1 but I get:
> 
> Invalid atom in /etc/portage/package.unmask: =app-text/evince-44.1 ~amd64
> 
> What is it looking for?

It is not looking for the ~amd64 at the end of the line.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

If two processes are running concurrently,
the less important will take processor time away from the more important one.


signature.asc
Description: PGP signature


Re: [gentoo-user] file system for new machine

2023-04-30 Thread Frank Steinmetzger
Am Sun, Apr 30, 2023 at 10:03:01AM +0800 schrieb William Kenworthy:

> > > That
> > > said btrfs has its less than stellar moments.  I still have systems that 
> > > use
> > > ext4 and they "seem" reliable for light duty but I make sure I have 
> > > backups
> > > and do not trust them with anything important - been bitten too many 
> > > times!
> > In what kind of situations did you encounter these problems?
> 
> Also a fixed number of inodes on creation (annoying and sometimes 
> disastrous when it runs out - think lots of small files like mail 
> storage),

That would be my biggest concern, especially back in the day when I had 
rather limited hardware resources. I was “haggling” with myself as to how 
many inodes I would really need. These days I’m more generous, but still 
modify the inodes count when formatting a partition. See Dale’s recent SSD 
thread.

> power outages cause what seems like silent corruption that builds up.  I 
> will admit ext4 does seem better these days but I am not a fan.

OK, that I’ve never had. Maybe a few forced shutdowns because the machine 
hung up (e.g. memory full or a botched wake from suspend).

> How do you find f2fs? - I lose (wear out I guess) SD cards on raspberry pi
> and Odroid systems on a regular basis with any of the mainstream filesystems
> - using them as a boot drive only extends their life, but that's not always
> possible.

Well, no problems so far. But I’m not stress-testing it, it just runs™. The 
Pi is just a simple pihole/radicale/nextcloud server with not much traffic 
and the data card in my surface just holds my music collection. The only 
“issue” I currently encounter is some warning messages on Arch when I do a 
system update. I can’t remember the exact error, but it’s just a warning 
about some feature.

However:
The Arch wiki says: “F2FS has a weak fsck that can lead to data loss in case 
of a sudden power loss [3][4]. If power losses are frequent, consider an 
alternative file system.”

OTOH, Google is now using f2fs in Android data partitions. Before that, it 
was ext4. :-)

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

After humans ceased to be apes, they became Egyptians.


signature.asc
Description: PGP signature


Re: [gentoo-user] file system for new machine

2023-04-29 Thread Frank Steinmetzger
Am Sat, Apr 29, 2023 at 02:04:52PM +0100 schrieb Michael:
> On Saturday, 29 April 2023 12:45:31 BST Frank Steinmetzger wrote:
> > Am Sat, Apr 29, 2023 at 01:20:52PM +0800 schrieb William Kenworthy:
> 
> > > That
> > > said btrfs has its less than stellar moments.  I still have systems that
> > > use ext4 and they "seem" reliable for light duty but I make sure I have
> > > backups and do not trust them with anything important - been bitten too
> > > many times!
> > In what kind of situations did you encounter these problems?
> 
> Can't speak for William, but it was a case where using older/early versions 
> of 
> btrfs tools from some live-USB you found at the bottom of your bin of spares 
> could cause worse damage and data loss on btrfs.  I recall the devs 
> recommending to always use the latest version if you were attempting a 
> recovery of a damaged fs and seek advice if in doubt.

I was asking about his data loss with ext4. ;-)

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

ATARI! We make top-notch toasters affordable!


signature.asc
Description: PGP signature


Re: [gentoo-user] file system for new machine

2023-04-29 Thread Frank Steinmetzger
Am Sat, Apr 29, 2023 at 01:20:52PM +0800 schrieb William Kenworthy:

> Filesystem choice is very much to do with your particular use case.
> 
> I am not a fan of ext4 - lost too much data too many times.  I've found
> btrfs and xfs much tougher, and the online tools much more convenient.

I’ve been using ext4 possibly (don’t know for sure) since it was available 
in standard Gentoo land. I cannot remember ever having suffered data loss.

These days I like to experiment with more flash-friendly systems like f2fs, 
which I use on the MicroSD card of my raspberry and the 400 GB data MicroSD 
in my Surface Go tablet. I also test-drive it on my mini desktop PC (all 
Arch linux) because, like all my machines, it has an SSD.

> That
> said btrfs has its less than stellar moments.  I still have systems that use
> ext4 and they "seem" reliable for light duty but I make sure I have backups
> and do not trust them with anything important - been bitten too many times!

In what kind of situations did you encounter these problems?

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

Eating is the most important meal of the day.


signature.asc
Description: PGP signature


Re: [gentoo-user] file system for new machine

2023-04-29 Thread Frank Steinmetzger
Am Thu, Apr 27, 2023 at 04:58:02PM +0200 schrieb tastytea:

> > Does the transparent compression incur an overhead cost in processing,
> > memory use, or disk writes?  I feel like it certainly has to at least
> > use more memory.  Sorry if that's an RTFM question.
> 
> it'll use more cpu and memory, but disk writes and reads will be lower,
> because it compresses it on the fly.

The lzo algorithm which is used by default incurs a negligible performance 
penalty. Give it a try: take some big file, e.g. a video and then:
(with $FILE being the name of the file to compress)

Compression-optimised algorithms:
time gzip -k $FILE  # will take long with medium benefit
time xz -k $FILE# will take super long
time bzip2 -k $FILE # will take also long-ish

Runtime-optimised algorithms:
time lz4 -k $FILE   # will go very very fast, but compression is relat. low
time zstd $FILE     # will go fast with better compression (comp. effort 3)
time zstd -6 $FILE  # will go fast-ish with more compression

> it should detect early if a file is not compressible and stop.

AFAIK, zfs compresses the beginning of a file and only if that yields a 
certain benefit, the entire file will be compressed.

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

The realist knows what he wants; the idealist wants what he knows.


signature.asc
Description: PGP signature


Re: [gentoo-user] Nvidia-drivers fails to patch

2023-04-21 Thread Frank Steinmetzger
Am Thu, Apr 20, 2023 at 08:33:22PM -0500 schrieb Dale:

> I cleared the tmp files to give it a fresh start.  It still failed.  The
> directory and files it complains about being missing, they are.  I went
> to the ebuild to see what patches are supposed to be installed.  This is
> the part of the ebuild. 
> 
> 
> PATCHES=(
>     "${FILESDIR}"/nvidia-drivers-470.141.03-clang15.patch
>     "${FILESDIR}"/nvidia-modprobe-390.141-uvm-perms.patch
>     "${FILESDIR}"/nvidia-settings-390.144-desktop.patch
>     "${FILESDIR}"/nvidia-settings-390.144-no-gtk2.patch
>     "${FILESDIR}"/nvidia-settings-390.144-raw-ldflags.patch
> )
> 
> 
> As you can see, it wants to apply patches from several versions so while
> odd, I guess it really does it that way.  I suspect given the age of the
> drivers that the patches no longer exist or something.  I'd think it
> would report it couldn't download the files but maybe not.  I may be
> running out of luck here.  Odd thing is, it compiled a while back. 

If I read your error output correctly, it’s not that the patch file is 
missing, but that a file that is mentioned inside the patch is.

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

Error: this virus requires DirectX and 64 MB of RAM!


signature.asc
Description: PGP signature


Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on

2023-04-20 Thread Frank Steinmetzger
Am Thu, Apr 20, 2023 at 04:29:59AM -0500 schrieb Dale:

> >> I wonder.  Is there a way to find out the smallest size file in a
> >> directory or sub directory, largest files, then maybe a average file
> >> size???
> > The 20 smallest:
> > `find -type f -print0 | xargs -0 stat -c '%s %n' | sort -n | head -n 20`
> >
> > The 20 largest: either use tail instead of head or reverse sorting with -r.
> > You can also first pipe the output of stat into a file so you can sort and 
> > analyse the list more efficiently, including calculating averages.
> 
> When I first run this while in / itself, it occurred to me that it
> doesn't specify what directory.  I thought maybe changing to the
> directory I want it to look at would work but get this: 

Yeah, either cd into the directory first, or pass it to find. But it’s like 
tar: I can never remember in which order I need to feed stuff to find. One 
relevant addition could be -xdev, to have find halt at file system 
boundaries. So:

find /path/to/dir -xdev -type f ! -type l …

> root@fireball /home/dale/Desktop/Crypt # `find -type f -print0 | xargs
> -0 stat -c '%s %n' | sort -n | head -n 20`
> -bash: 2: command not found
> root@fireball /home/dale/Desktop/Crypt #

I used the `` in the mail text as a kind of hint: “everything between is a 
command”. So when you paste that into the terminal, it is executed, and the 
result of it is substituted. Meaning: the command’s output is taken as the 
new input and executed. And since the first word of the output was “2”, you 
get that error message. Sorry about the confusion.

> >> I thought about du but given the number of files I have here,
> >> it would be a really HUGE list of files.  Could take hours or more too. 
> > I use a “cache” of text files with file listings of all my external drives. 
> > This allows me to glance over my entire data storage without having to plug 
> > in any drive. It uses tree underneath to get the list:
> >
> > `tree -afx -DFins --dirsfirst --du --timefmt "%Y-%m-%d %T"`
> >
> > This gives me a list of all directories and files, with their full path, 
> > date and size information and accumulated directory size in a concise 
> > format. Add -pug to also include permissions.
> >
> 
> Save this for later use.  ;-)

I built a wrapper script around it, to which I pass the directory I want to 
read (usually the root of a removable media). The script creates a new text 
file, with the current date and the directory in its name, and compresses it 
at the end. This allows me to diff those files in vim and see what changed 
over time. It also updates a symlink to the current version for quick access 
via bash alias.
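A minimal sketch of what such a wrapper can look like (all names here are 
invented, not my actual script; assumes tree and xz are installed):

```shell
# Hypothetical wrapper: dump a directory tree into a dated text file,
# compress it, and point a "latest" symlink at the newest listing.
treelist() {
    dir=${1:?usage: treelist DIR}
    name=$(basename "$dir")
    out="$HOME/disk-lists/$(date +%F)_$name.txt"
    mkdir -p "$HOME/disk-lists"
    tree -afx -DFins --dirsfirst --du --timefmt "%Y-%m-%d %T" "$dir" > "$out"
    xz -f "$out"                                          # becomes $out.xz
    ln -sfn "$out.xz" "$HOME/disk-lists/latest_$name.xz"  # quick access
}
```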

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

...llaw eht no rorrim ,rorriM


signature.asc
Description: PGP signature


Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on

2023-04-20 Thread Frank Steinmetzger
Am Wed, Apr 19, 2023 at 06:09:15PM -0700 schrieb Mark Knecht:
> > I wonder.  Is there a way to find out the smallest size file in a
> directory or sub directory, largest files, then maybe a average file
> size???  I thought about du but given the number of files I have here, it
> would be a really HUGE list of files.  Could take hours or more too.  This
> is what KDE properties shows.
> 
> I'm sure there are more accurate ways but
> 
> sudo ls -R / | wc

Number of directories (not accounting for symlinks):
find -type d | wc -l

Number of files (not accounting for symlinks):
find -type f | wc -l

> give you the number of lines returned from the ls command. It's not perfect
> as there are blank lines in the ls but it's a start.
> 
> My desktop machine has about 2.2M files.
> 
> Again, there are going to be folks who can tell you how to remove blank
> lines and other cruft but it's a start.

Or not produce them in the first place. ;-)
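Both counts in a single traversal, if you like (assumes GNU find for -printf):

```shell
# Emit one letter per entry and tally them; no blank lines to strip.
find . -mindepth 1 \( -type d -printf 'd\n' -o -type f -printf 'f\n' \) \
    | sort | uniq -c
```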

> Only takes a minute to run on my Ryzen 9 5950X. YMMV.

It’s not a question of the processor, but of the storage device. And of your 
cache, because the second run will probably not use the device at all.

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

Bosses are like timpani: the more hollow they are, the louder they sound.




Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on

2023-04-20 Thread Frank Steinmetzger
Am Wed, Apr 19, 2023 at 06:32:45PM -0500 schrieb Dale:
> Frank Steinmetzger wrote:
> > <<>>
> >
> > When formatting file systems, I usually lower the number of inodes from the 
> > default value to gain storage space. The default is one inode per 16 kB of 
> > FS size, which gives you 60 million inodes per TB. In practice, even one 
> > million per TB would be overkill in a use case like Dale’s media storage.¹ 
> > Removing 59 million inodes × 256 bytes ≈ 15 GB of net space for each TB, 
> > not 
> > counting extra control metadata and ext4 redundancies.
> 
> If I ever rearrange my
> drives again and can change the file system, I may reduce the inodes at
> least on the ones I only have large files on.  Still tho, given I use
> LVM and all, maybe that isn't a great idea.  As I add drives with LVM, I
> assume it increases the inodes as well.

I remember from yesterday that the manpage says that inodes are added 
according to the bytes-per-inode value.

> I wonder.  Is there a way to find out the smallest size file in a
> directory or sub directory, largest files, then maybe a average file
> size???

The 20 smallest:
`find -type f -print0 | xargs -0 stat -c '%s %n' | sort -n | head -n 20`

The 20 largest: either use tail instead of head or reverse sorting with -r.
You can also first pipe the output of stat into a file so you can sort and 
analyse the list more efficiently, including calculating averages.
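For completeness, the largest 20 and a quick average (same GNU stat format 
string as above):

```shell
# 20 largest files: just reverse the numeric sort.
find . -type f -print0 | xargs -0 stat -c '%s %n' | sort -rn | head -n 20

# Average file size over the same size column.
find . -type f -print0 | xargs -0 stat -c '%s' \
    | awk '{ sum += $1; n++ } END { if (n) printf "%d files, avg %.0f bytes\n", n, sum / n }'
```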

> I thought about du but given the number of files I have here,
> it would be a really HUGE list of files.  Could take hours or more too. 

I use a “cache” of text files with file listings of all my external drives. 
This allows me to glance over my entire data storage without having to plug 
in any drive. It uses tree underneath to get the list:

`tree -afx -DFins --dirsfirst --du --timefmt "%Y-%m-%d %T"`

This gives me a list of all directories and files, with their full path, 
date and size information and accumulated directory size in a concise 
format. Add -pug to also include permissions.

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

Computers are the most congenial product of human laziness to-date.




Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on

2023-04-19 Thread Frank Steinmetzger
Am Wed, Apr 19, 2023 at 01:00:33PM -0700 schrieb Mark Knecht:


> I think technically they default to the physical block size internally
> and the earlier ones, attempting to be more compatible with HDDs,
> had 4K blocks. Some of the newer chips now have 16K blocks but
> still support 512B Logical Block Addressing.
> 
> All of these devices are essentially small computers. They have internal
> controllers, DRAM caches usually in the 1-2GB sort of range but getting
> larger.

Actually, cheap(er) SSDs don’t have their own DRAM, but rely on the host for 
this. There is an ongoing debate in tech forums whether that is a bad thing 
or not. A RAM cache can help optimise writes by caching many small writes 
and aggregating them into larger blocks.

> The bus speeds they quote is because data is moving for the most
> part in and out of cache in the drive.

Are you talking about the pseudo SLC cache? Because AFAIK the DRAM cache has 
no influence on read performance.

> What I know I'm not sure about is how inodes factor into this.
> 
> For instance:
> 
> mark@science2:~$ ls -i
> 35790149  000_NOT_BACKED_UP
> 33320794  All_Files.txt
> 7840  All_Sizes_2.txt
> 7952  All_Sizes.txt
> 33329818  All_Sorted.txt
> 33306743  ardour_deps_install.sh
> 33309917  ardour_deps_remove.sh
> 33557560  Arena_Chess
> 33423859  Astro_Data
> 33560973  Astronomy
> 33423886  Astro_science
> 33307443 'Backup codes - Login.gov.pdf'
> 33329080  basic-install.sh
> 33558634  bin
> 33561132  biosim4_functions.txt
> 33316157  Boot_Config.txt
> 33560975  Builder
> 8822  CFL_88_F_Bright_Syn.xsc
> 
> If the inodes are on the disk then how are they
> stored? Does a single inode occupy a physical
> block? A 512 byte LBA? Something else?

man mkfs.ext4 says:
[…] the default inode size is 256 bytes for most file systems, except for 
small file systems where the inode size will be 128 bytes. […]

And if a file is small enough, it can actually fit inside the inode itself, 
saving the expense of another FS sector.


When formatting file systems, I usually lower the number of inodes from the 
default value to gain storage space. The default is one inode per 16 kB of 
FS size, which gives you 60 million inodes per TB. In practice, even one 
million per TB would be overkill in a use case like Dale’s media storage.¹ 
Removing 59 million inodes × 256 bytes ≈ 15 GB of net space for each TB, not 
counting extra control metadata and ext4 redundancies.

The defaults are set in /etc/mke2fs.conf. It also contains some alternative 
values of bytes-per-inode for certain usage types. The type largefile 
allocates one inode per 1 MB, giving you 1 million inodes per TB of space. 
Since ext4 is much more efficient with inodes than ext3, it is even content 
with 4 MB per inode (type largefile4), giving you 250 k inodes per TB.
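As a throw-away demonstration on a file-backed image, so no real disk is 
touched (assumes e2fsprogs):

```shell
# -T largefile4 allocates one inode per 4 MiB instead of one per 16 kB.
truncate -s 64M /tmp/inode-demo.img
mkfs.ext4 -q -F -T largefile4 /tmp/inode-demo.img
dumpe2fs -h /tmp/inode-demo.img 2>/dev/null | grep -E '^Inode (count|size)'
rm /tmp/inode-demo.img
```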

For root partitions, I tend to allocate 1 million inodes, maybe some more 
for a full Gentoo-based desktop due to the portage tree’s sheer number of 
small files. My Surface Go’s root (Arch linux, KDE and some texlive) uses 
500 k right now.


¹ Assuming one inode equals one directory or unfragmented file on ext4.
I’m not sure what the allocation size limit for one inode is, but it is 
*very* large. Ext3 had a rather low limit, which is why it was so slow with 
big files. But that was one of the big improvements in ext4’s extended 
inodes, at the cost of double inode size to house the required metadata.

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

FINE: Tax for doing wrong.  TAX: Fine for doing fine.




Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on

2023-04-18 Thread Frank Steinmetzger
Am Wed, Apr 19, 2023 at 12:18:14AM +0200 schrieb Frank Steinmetzger:

> If you use ext4, run `dumpe2fs -h /dev/your-root-partition | grep Lifetime` 
> to see how much data has been written to that partition since you formatted 
> it. Just to get an idea of what you are looking at on your setup.

For comparison:

I’m writing from my Surface Go 1 right now. It’s running Arch linux with KDE 
and I don’t use it very often (meaning, I don’t update it as often as my 
main rig). But updates in Arch linux can be volume-intensive, especially 
because there are frequent kernel updates (I’ve had over 50 since June 2020, 
each accounting for over 300 MB of writes), and other updates of big 
packages if a dependency like python changes. Where Gentoo has you run 
revdep-rebuild, binary distros ship new versions of all affected packages, 
like libreoffice, or Qt, or texlive.

Anyways, the root partition measures 22 G and has a lifetime write of 571 GB 
in almost three years. The home partition (97 GB in size) is at 877 GB. That 
seems actually a lot, because I don’t really do that much high-volume stuff 
there. My media archive with all the photos and music and such sits on a 
separate data partition, which is not synced to the Surface due to its small 
SSD of only 128 GB.

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

We shall be landing shortly.
Please return your stewardess to the upright position.




Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on

2023-04-18 Thread Frank Steinmetzger
Am Tue, Apr 18, 2023 at 10:05:27AM -0500 schrieb Dale:

> Given how I plan to use this drive, that should last a long time.  I'm
> just putting the OS stuff on the drive and I compile on a spinning rust
> drive and use -k to install the built packages on the live system.  That
> should help minimize the writes.

Well, 300 TB over 5 years is 60 TB per year, or 165 GB per day. Every day. 
I’d say don’t worry. Besides: endurance tests showed that SSDs were able to 
withstand multiples of their guaranteed TBW until they actually failed (of 
course there are always exceptions to the rule).

> I read about that bytes written.  With the way you explained it, it
> confirms what I was thinking it meant.  That's a lot of data.  I
> currently have around 100TBs of drives lurking about, either in my rig
> or for backups.  I'd have to write three times that amount of data on
> that little drive.  That's a LOT of data for a 500GB drive. 

If you use ext4, run `dumpe2fs -h /dev/your-root-partition | grep Lifetime` 
to see how much data has been written to that partition since you formatted 
it. Just to get an idea of what you are looking at on your setup.

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

What woman is looking for a man who is looking for a woman looking for a man?




Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on

2023-04-18 Thread Frank Steinmetzger
Am Tue, Apr 18, 2023 at 09:53:18PM +0100 schrieb Wol:

> On 18/04/2023 21:01, Dale wrote:
> > > I just use tmpfs for /var/tmp/portage (16GB, I'm on 32GB RAM.)

Same.

> /var/tmp/portage on tmpfs. And on every disk I allocate a swap partition
> equal to twice the mobo's max memory. Three drives times 64GB times two is a
> helluva lot of swap.

Uhm … why? The old rule of thumb of swap = 2×RAM comes from times when RAM 
was scarce.
What do you need so much swap for, especially with 32 GB RAM to begin with?
And if you really do have use cases which cause regular swapping, it’d be 
less painful if you just added some more RAM.

I never used swap, even on my 3 GB laptop 15 years ago, except for extreme 
circumstances for which I specifically activated it (though I never compiled 
huge packages like Firefox or LO myself). These days I run a few zram 
devices, which act as swap, but technically are compressed RAM disks. So 
when RAM gets full, I get a visible spike in the taskbar’s swap meter before 
the system grinds to a halt.

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

Night is so dark only so one can see it better.




Re: [gentoo-user] Finally got a SSD drive to put my OS on

2023-04-18 Thread Frank Steinmetzger
Am Mon, Apr 17, 2023 at 10:45:46AM -0700 schrieb Mark Knecht:

> And I don't know that formatting ext4 or some other FS to 16K
> really helps the write amplification issue but it makes sense to
> me to match the file system blocks to the underlying flash
> block size.

The problem is finding out the write block size. This 7-year-old post says 
it’s reached 16 K: https://superuser.com/questions/976257/page-sizes-ssd

So I would say don’t bother. If everything is trimmed, there is no 
amplification. And if the disk becomes full and you get WA when writing 
itsy-bitsy 4 K files, you probably still won’t notice much difference, as 
random 4 K writes are slow anyways and how often do you write thousands of
4 K files outside of portage?

Erase block sizes probably go into the megabytes these days:
https://news.ycombinator.com/item?id=29165202

Some more detailed explanation:
https://spdk.io/doc/ssd_internals.html
  “For each erase block, each bit may be written to (i.e. have its bit 
  flipped from 0 to 1) with bit-granularity once. In order to write to the 
  erase block a second time, the entire block must be erased (i.e. all bits 
  in the block are flipped back to 0).”

This sounds like my initial statement was partially wrong – trimming does 
cause writing zeroes, because that’s what an erase does. But it still 
prevents write amplification (and one extra erase cycle) because 
neighbouring blocks don’t need to be read and written back.

> Real speed testing would be required to ensure reading
> 16K blocks doesn't slow him down though.

Here are some numbers and a conclusion gathered from a read test:
https://superuser.com/questions/728858/how-to-determine-ssds-nand-erase-block-size

Unless I positively need the speed for high-performance computing, I’d 
rather keep the smaller granularity for more capacity at low file sizes.

A problem is what some call “parts lottery” these days: manufacturers 
promise some performance on the data sheet (“up to xxx”), but not with which 
parts they want to achieve this (types of flash chips, TLC/QLC, controller, 
DRAM and so on). Meaning during the lifetime of a product, its internals may 
change and as a consequence those specs are not in the data sheet:

https://unix.stackexchange.com/questions/334804/is-there-a-way-to-find-out-ssd-page-size-on-linux-unix-what-is-physical-block
  “There is no standard way for a SSD to report its page size or erase block 
  size. Few if any manufacturers report them in the datasheets. (Because 
  they may change during the lifetime of a SKU, for example because of 
  changing suppliers.)
  For practical use just align all your data structures (partitions, payload 
  of LUKS containers, LVM logical volumes) to 1 or 2 MiB boundaries. It's an 
  SSD after all--it is designed to cope with usual filesystems, such as NTFS 
  (which uses 4 KiB allocation units).”
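The alignment check from that advice is trivial arithmetic; for a start 
sector on a 512-byte-sector drive (2048 is just the usual partition-tool 
default, not a measured value):

```shell
# 1 MiB = 2048 sectors of 512 bytes, so any start sector divisible
# by 2048 is 1 MiB-aligned.
start=2048
if [ $((start % 2048)) -eq 0 ]; then
    echo "sector $start: aligned"
else
    echo "sector $start: misaligned"
fi
```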

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

The worst disease is indifference. So what?




Re: [gentoo-user] Can some config files be automatically protected from etc-update?

2023-04-17 Thread Frank Steinmetzger
Am Mon, Apr 17, 2023 at 02:27:53PM -0700 schrieb Mark Knecht:

> ;-) (And shame on you for being 'a few months' behind on your updates) ;-)

It’s my NAS (basically my media library), which only runs every few months 
due to its server hardware’s high power draw.

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

My computer waits faster!




Re: [gentoo-user] Can some config files be automatically protected from etc-update?

2023-04-17 Thread Frank Steinmetzger
Am Mon, Apr 17, 2023 at 12:28:01PM -0700 schrieb Mark Knecht:
> On Mon, Apr 17, 2023 at 11:26 AM Walter Dnes  wrote:
> >
> >   Now that the (no)multilib problem in my latest update has been solved,
> > I have a somewhat minor complaint.  Can I get etc-update to skip certain
> > files?  My latest emerge world wanted to "update"...
> >
> > 1) /etc/hosts (1)
> > 2) /etc/inittab (1)
> > 3) /etc/mtab (1)
> > 4) /etc/conf.d/consolefont (1)
> > 5) /etc/conf.d/hwclock (1)
> > 6) /etc/default/grub (1)
> > 7) /etc/ssh/sshd_config (1)
> >
> > ...hosts is critical for networking.  consolefont allows me to use the
> > true text console with a readable font, etc, etc.  I have my reasons
> > for making certain settings, and keeping them that way.
> >
> In my experience with all distros I go outside the distro for this
> sort of issue. Put a copy somewhere, white a little script that
> does a diff on the files you feel are important enough and run
> a cron job hourly that looks for any differences.

Isn’t that exactly what etc-update does? IIRC (my last Gentoo update was a 
few months ago), I select one of the files, and it lets me view a diff in 
vim (configurable) of my old version and the new one from the update. Then I 
can either merge the two files right in vim, or elect to keep the new or old 
file entirely.

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

“I want to be free!” said the string puppet and cut its strings.




Re: [gentoo-user] Finally got a SSD drive to put my OS on

2023-04-16 Thread Frank Steinmetzger
Am Sun, Apr 16, 2023 at 05:26:15PM -0500 schrieb Dale:

> > > I'm wanting to be able to boot something from the hard drive in the
> > > event the OS itself won't boot.  The other day I had to dig around and
> > > find a bootable USB stick and also found a DVD.  Ended up with the DVD
> > > working best.  I already have memtest on /boot.  Thing is, I very rarely
> > > use it.  ;-)
> >
> > So in the scenario you are suggesting, is grub working, giving you a
> > boot choice screen, and your new Gentoo install is not working so
> > you want to choose Knoppix to repair whatever is wrong with 
> > Gentoo? 
> 
> Given I have a 500GB drive, I got plenty of space.  Heck, a 10GB
> partition each is more than enough for either Knoppix or LiveGUI.  I
> could even store info on there about drive partitions and scripts that I
> use a lot.  Jeez, that's a idea. 

Back in the day, I was annoyed that whenever I needed $LIVE_SYSTEM, I had to 
reformat an entire USB stick for that. In times when you don’t even get 
sticks below 8 GB anymore, I found it a waste of material and useful storage 
space.

And then I found ventoy: https://www.ventoy.net/

It is a mini-Bootloader which you install once to a USB device, kind-of a 
live system of its own. But when booting it, it dynamically scans the 
content of its device and creates a new boot menu from it. So you can put 
many ISOs on one device as simple files, delete them, upgrade them, 
whatever, and then you can select one to boot from. Plus, the rest of the 
stick remains usable as storage, unlike sticks that were dd’ed with an ISO.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

The four elements: earth, air and firewater.




Re: [gentoo-user] Finally got a SSD drive to put my OS on

2023-04-16 Thread Frank Steinmetzger
Am Sun, Apr 16, 2023 at 01:22:32PM -0700 schrieb Mark Knecht:
> Frank,
>Thank you for the in-depth explanation.
> 
>I need to do some study before commenting further other than to say
> so far I'm finding different comments depending on whether it's
> an SSD or an M.2 drive.

Uhm, I think you mix up some terms here. An M.2 drive *is* an SSD 
(literally, as the name says, a solid state drive). By “SSD”, did you mean 
the classic laptop form factor for SATA HDDs and SSDs?

Because M.2 is also only a physical form factor. It supports both NVMe and 
SATA. While NVMe is more modern and better suited for solid state media and 
their properties, in the end it is still only a data protocol to transfer 
data to and fro.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

„He who prefers security to freedom deserves to be a slave.“ – Aristotle




Re: [gentoo-user] Finally got a SSD drive to put my OS on

2023-04-16 Thread Frank Steinmetzger
Am Sun, Apr 16, 2023 at 08:08:59AM -0700 schrieb Mark Knecht:

> If you have an SSD or nvme drive installed then fstrim should be
> installed and run on a regular basis. However it's not 'required'.
> 
> Your system will still work, but after all blocks on the drive have
> been used for file storage and later deleted, if they are not
> written back to zeros then the next time you go to use that
> block the write will be slower as the write must first write
> zeros and then your data.
> 
> fstrim does the write to zeros so that during normal operation
> you don't wait.

That is not quite correct. Trimming is about the opposite of what you say, 
namely to *not* rewrite areas. Flash memory can only be written to in 
relatively large blocks. So if your file system wants to write 4 KiB, the 
drive needs to read all the many kB around it (several hundred at least, 
perhaps even MiBs, I’m not certain), change the small part in question and 
write the whole block back. This is called write amplification. It also 
occurs on hard drives, for example when you run a database which uses 4 KiB 
data file chunks on a file system with larger sectors. Then the file 
system is the cause of the write amplification.
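To put an illustrative number on it (the 1 MiB erase block here is just an 
assumed figure):

```shell
# Worst case: a lone 4 KiB write forces a 1 MiB read-modify-write.
awk 'BEGIN { printf "worst-case write amplification: %dx\n", (1024 * 1024) / 4096 }'
```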

If the SSD knew beforehand that the area is unused, it does not need to read 
it all in and then write it back. The SSD controller has no knowledge of 
file systems. And this is where trim comes in: it does know file systems, 
detects the unused areas and translates that info for the drive controller. 
Also, only trimmed areas (i.e. areas the controller knows are unused) can be 
used for wear leveling.

I even think that if you read from a trimmed area, the controller does not 
actually read the flash device, but simply returns zeroes. This is basically 
what a quick erase does; it trims the entire drive, which takes only a few 
seconds, and then all the data has become inaccessible (unless you address 
the memory chips directly). It is similar to deleting a file: you erase its 
entry in the directory, but not the actual payload bytes.

AFAIK, SMR HDDs also support trim these days, so they don’t need to do their 
SMR reshuffling. I have a WD Passport Ultra external 2.5″ HDD with 5 TB, and 
it supports trim. However, a WD Elements 2.5″ 4 TB does not. Perhaps because 
it is a cheaper series. Every laptop HDD of 2 (or even 1) TB is SMR.
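Whether a drive advertises discard (trim) support can be checked without 
writing anything (util-linux assumed):

```shell
# Non-zero DISC-GRAN / DISC-MAX columns mean the device accepts
# discard (trim) commands.
lsblk --discard
```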

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

“It is hard to be a conquering hero when it is not in your nature.”
  – Captain Hans Geering, ’Allo ’Allo




Re: [gentoo-user] Mouse and hibernate

2023-04-06 Thread Frank Steinmetzger
Am Wed, Apr 05, 2023 at 05:35:52PM +0800 schrieb William Kenworthy:

> > > I have suspend/hibernate set up on a desktop ... it's been working
> > > fine for years. But recently,  it's been occaisionally coming out of
> > > suspension some time after suspension without any intervention on my
> > > part.  I am suspecting the mouse - I would prefer not to disable the
> > > mouse ... Is there an alternative? BillK
> > Often there are options in the BIOS/UEFI to choose what can cause it to
> > come out of suspension.
> > 
> > 
> Unfortunately they are already off (the bios has PS2 settings) - the mouse
> is part of a keyboard/mouse set using a Logitech unifying USB dongle.  I can
> use a udev rule to turn off waking via the USB port, but I cant separate the
> mouse from the keyboard - and I need the keyboard enabled to wake the PC up.

Usually, Logitech mice have a switch on the bottom to physically turn them 
on or off. Usually I use that to circumvent wake-on-USB, rather than pulling 
out the USB wart.

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

Someone who asks is a fool for five minutes.
Someone who never asks is a fool his entire life.




Re: [gentoo-user] PCIe x1 or PCIe x4 SATA controller card

2023-03-27 Thread Frank Steinmetzger
Am Mon, Mar 27, 2023 at 07:24:47AM -0400 schrieb Rich Freeman:

> > Of course, only you can answer that in the end. Write down what you need and
> > what you care about. Weigh those factors. Then decide. Raw CPU power,
> > electricity bill, heat budget (cooling, noise, dust), the “new and shiny”
> > factor (like DDR5), and price. As I mentioned earlier, the 7xxx-X series are
> > hotheads. But when run with a lower power budget, they are very efficient
> > (which is basically what the non-X do).
> 
> Are they actually hotheads on an energy consumed per unit of work
> basis?  As you say, they're efficient.  If the CPU has 2x the power
> draw, but does 2.5x as much work in a unit of time than the "cooler"
> CPU you're comparing it to, then actually doing any job is going to
> consume less electricity and produce less heat - it is just doing it
> faster.
> […] 
> A recent trend is upping the power draw of CPUs/GPUs to increase their
> throughput, but as long as efficiency remains the same, it creates
> some thermal headaches, but doesn't actually make the systems use more
> energy for a given amount of work.  Of course if you throw more work
> at them then they use more energy.

Back in the day, CPUs were sold to run at an optimum work point, meaning a 
compromise between silicon wafer yield, power consumption and performance. 
Some of the chips were so good, they had the potential for overclocking, 
meaning they are stable enough to be clocked higher and to handle the heat. 
(But at no guarantee from the manufacturer, I presume. So if you grill it, 
it’s your loss.) And heat there was: you could increase a CPU from 4 GHz to 
4.4 GHz (10 % increase), but at a lot more power draw than just 10 %. The 
performance curve flattens at the high end; processing power does not scale 
linearly with power consumption beyond a certain point (else we would do it 
already).

These days, modern high-end CPUs seem to come over-clocked from the factory. 
Instead, if the user wants to run at a more efficient mode, the BIOS offers 
ways to tune down the power budget. You lose 10..20 % in performance, but 
gain 20 K in cooling and 30 % or more in power consumption.

10 years ago, when the very efficient Core architecture swept the market, 
the high-end “extreme” Haswell models drew 140 W. [0] Compare that to current 
generations [1] (Intel) or [2] (AMD), those go beyond 200 W. Of course they 
are much much faster, but average-Joe doesn’t need that.

Looking at concrete examples, the Ryzen 7900 has 3.7 GHz sustained max 
frequency (meaning no thermal throttling) at 65 W. The 7900X has 4.7 GHz (a 
quarter more) and 200 MHz more boost frequency, but is rated at 2½ times the 
wattage. The TDP does not tell you how much power the chip takes at most 
anymore (it can actually take much more in bursts or when it is still cool), 
but for how much thermal energy the cooling system needs to be designed in 
order to keep up the maximum (non-turbo, I think) frequency under load. This 
means that for a short time or on a low number of cores, the non-X can 
sustain almost as much boost clock as the X (it is the same silicon, after 
all), but once the cooling can’t keep up, it will throttle.

I’m not very good at explaining the math or providing hard numbers from 
memory, because all I know about this matter is from reading the occasional 
review. So please have a read yourself (see below). Another reason to take 
my word with a grain of salt: I am biased towards environmentally friendly 
choices. Power may still be cheap where you live, but every kWh produced has 
an impact on the globe.


Power efficiency (“points per Watt” metric):
https://www.techpowerup.com/review/amd-ryzen-9-7900x/24.html
Ryzen 7 5700G (i.e. laptop APU): 240.7 points
Ryzen 7 5700X: 84.5
Ryzen 7 7700X: 83.0
Ryzen 9 7900X: 47.2 at stock (meaning no down-scaling)

A comparison at https://www.xda-developers.com/amd-ryzen-9-7900-review/ 
shows only around 10 % more performance for the 7900X vs. the 7900:
  “The Ryzen 9 7900 is essentially the 7900X without PBO enabled, but it 
  would be a waste to spend more money on essentially the same chip to then 
  underclock it for better thermal performance. It's a better value choice 
  to pick up the Ryzen 9 7900 and then boost up to 7900X-level performance 
  through a simple BIOS toggle. After this has been carried out, performance 
  is pretty much identical.”

Some more reading fodder:
https://www.anandtech.com/show/18693/the-amd-ryzen-9-7900-ryzen-7-7700-and-ryzen-5-5-7600-review-ryzen-7000-at-65-w-zen-4-efficiency


[0] https://en.wikipedia.org/wiki/Haswell_(microarchitecture)
[1] https://en.wikipedia.org/wiki/Alder_Lake
[2] https://en.wikipedia.org/wiki/Zen_4

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

“If privacy is outlawed, only outlaws will have privacy.” – Phil Zimmermann




Re: [gentoo-user] PCIe x1 or PCIe x4 SATA controller card

2023-03-27 Thread Frank Steinmetzger
Am Sun, Mar 26, 2023 at 07:18:09PM -0500 schrieb Dale:

> I used to use the bogomips number as a rough guide.  Thing is, the new
> CPU has a lower bogomips number than my current CPU does.  That doesn't
> seem right.

Bogomips seems to be very simplistic, because it takes the current frequency 
into account. So the number will be low when your PC idles and very high 
when you compile something. The “bogo” stands for bogus for a reason.

From Wikipedia [https://en.wikipedia.org/wiki/BogoMips]:
“It is not usable for performance comparisons among different CPUs.”
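You can look at the raw value yourself (Linux’s /proc/cpuinfo assumed):

```shell
# The BogoMIPS figure the kernel calibrated at boot; it tracks the
# clock it happened to see, hence useless for cross-CPU comparisons.
grep -im1 bogomips /proc/cpuinfo
```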

> So, I guess that number no longer means much.  So, I went
> digging on the site you linked to.  I found this but not sure what to
> make of it. 


> https://openbenchmarking.org/vs/Processor/AMD+Ryzen+9+5900X+12-Core,AMD+FX-8350+Eight-Core
> 
> Some tests, my CPU is faster.  Most, the new one is faster.

Your CPU is not faster at any of them. Look at the label at the top of each 
graph; for some tests, lower is better (as in “time taken for a task”).

For instance, the GnuPG test for encrypting a 2 GB file takes 11.6 seconds 
on the Ryzen, and 19.4 on your CPU. The test is single-threaded, so for this 
kind of task, you can expect around a ⅔ increase in performance per core (or 
rather, thread). OTOH, the m-queens 1.2 test is multi-threaded and you get 
39 s vs 238 s, meaning over 5 times more performance. Probably at lower 
electricity draw.
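For the record, the ratios those numbers work out to:

```shell
# 19.4 s vs 11.6 s (single-threaded), 238 s vs 39 s (multi-threaded):
awk 'BEGIN { printf "GnuPG: %.2fx, m-queens: %.2fx\n", 19.4 / 11.6, 238 / 39 }'
```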

> I'm trying to figure if I'd be better in the
> long run to buy that expensive CPU or pick one of the cheaper ones you
> mentioned.  I started off with a 4 core on current rig and went to 8
> core and slightly higher frequency.  Money wise it was pretty painless. 
> I could do that again with new rig.

Of course, only you can answer that in the end. Write down what you need and 
what you care about. Weigh those factors. Then decide. Raw CPU power, 
electricity bill, heat budget (cooling, noise, dust), the “new and shiny” 
factor (like DDR5), and price. As I mentioned earlier, the 7xxx-X series are 
hotheads. But when run with a lower power budget, they are very efficient 
(which is basically what the non-X do).

Compiles will speed up no matter what CPU you choose. But where else do you 
need compute power? Video transcodes can be done in the background, and 
there is also a limit to what parallelisation can achieve. Encryption is 
also a non-issue for you. Even my 10 year old i3 in the NAS can encrypt over 
1 GB per second, IIRC.

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

Dying is the most common cause of death.



