RE: [gentoo-user] LiveGUI USB Image

2024-01-04 Thread Laurence Perkins



>-Original Message-
>From: Peter Humphrey  
>Sent: Thursday, January 4, 2024 5:01 AM
>To: gentoo-user@lists.gentoo.org
>Subject: Re: [gentoo-user] LiveGUI USB Image
>
>On Thursday, 4 January 2024 02:48:13 GMT Adam Carter wrote:
>> > > dd if=/path/to/iso-image of=/dev/sd? bs=4M status=progress
>> > > 
>> > > Replace the obvious bits.
>> > 
>> > I've tried a few values of block size over the years, but so far I 
>> > haven't noticed any difference. I haven't run any proper tests though.
>> 
>> I think it's just that the default blocksize is (or was) very small 
>> (512
>> bytes?) so setting it to anything non-small helps a lot.
>> 
>> eg one example (from
>> https://superuser.com/questions/234199/good-block-size-for-disk-clonin
>> g-with
>> -diskdump-dd#234204) seems to show that most gains are in by around 16k.
>> There's probably a lot of testing noise in these results.
>> 
>> $ ./dd_obs_test.sh
>> block size : transfer rate
>>512 : 11.3 MB/s
>>   1024 : 22.1 MB/s
>>   2048 : 42.3 MB/s
>>   4096 : 75.2 MB/s
>>   8192 : 90.7 MB/s
>>  16384 : 101 MB/s
>>  32768 : 104 MB/s
>>  65536 : 108 MB/s
>> 131072 : 113 MB/s
>> 262144 : 112 MB/s
>> 524288 : 133 MB/s
>>1048576 : 125 MB/s
>>2097152 : 113 MB/s
>>4194304 : 106 MB/s
>>8388608 : 107 MB/s
>>   16777216 : 110 MB/s
>>   33554432 : 119 MB/s
>>   67108864 : 134 MB/s
>
>Interesting. I think I'll stick to my usual 64MB block size.
>
>--
>Regards,
>Peter.

So, basically, the default is 512 bytes, and if you're copying to something whose 
hardware block size is larger than that (4096 is common on a lot of devices these 
days), then every block on the physical device has to be rewritten multiple times 
(eight times in the example case).

Needless to say, that's... rather slow.  Also rather hard on your device.

Once you get above the physical block size, as long as you stick to an exact 
multiple of it, any additional gains come solely from the I/O scheduler being able 
to make slightly more intelligent choices with larger chunks.
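
If you want to check what the target device actually reports before picking a block 
size, sysfs exposes it; a quick sketch (the device and file names here are just 
examples, substitute your own):

cat /sys/block/sdb/queue/physical_block_size
cat /sys/block/sdb/queue/logical_block_size
# then pick a bs that's a whole multiple of the physical size, e.g.:
dd if=livegui.iso of=/dev/sdb bs=4M status=progress conv=fsync

conv=fsync just makes dd flush everything to the device before it exits, so the 
progress report isn't lying about when it's safe to pull the stick.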

LMP



RE: [gentoo-user] Re: Abnormal processor temperature.

2023-11-27 Thread Laurence Perkins



> -Original Message-
> From: Grant Edwards  
> Sent: Wednesday, November 22, 2023 8:36 AM
> To: gentoo-user@lists.gentoo.org
> Subject: [gentoo-user] Re: Abnormal processor temperature.
> 
> On 2023-11-21, Laurence Perkins  wrote:
> 
> > I have a system here running an Intel N97 processor, which is idling 
> > at 70-80C on Gentoo with all cores 99% idle.  This is 40 degrees 
> > hotter than it runs on Ubuntu or Windows 10.
> >
> > Powertop confirms that the CPU is spending nearly all of its time in 
> > idle mode.
> 
> Are clock speeds being scaled down when idle?  Or does the N97's "idle mode" 
> preclude the need to scale down clock speed when not busy to avoid high temps?
> 
> 

That was part of my confusion, because even when I used cpupower to lock all 
cores to 800MHz, it kept running hot.

Michael's suggestion of adding the SPI modules to the kernel appears to have 
fixed it.  I'm not sure why the gentoo-kernel-bin that I tried didn't work at 
that rate, but I'm going to have to activate the "somebody-else's-problem" 
field on that one.

LMP




RE: [gentoo-user] Abnormal processor temperature.

2023-11-21 Thread Laurence Perkins


> -Original Message-
> From: Michael  
> Sent: Tuesday, November 21, 2023 4:15 PM
> To: gentoo-user@lists.gentoo.org
> Subject: Re: [gentoo-user] Abnormal processor temperature.
> 
> On Tuesday, 21 November 2023 22:12:28 GMT Laurence Perkins wrote:
> > I have a system here running an Intel N97 processor, which is idling 
> > at 70-80C on Gentoo with all cores 99% idle.  This is 40 degrees 
> > hotter than it runs on Ubuntu or Windows 10.
> > 
> > Powertop confirms that the CPU is spending nearly all of its time in 
> > idle mode.  I have tried both the 6.1 and the 6.5 kernels, including 
> > the pre-compiled gentoo-kernel-bin version just to be sure it's not 
> > something in *my* config messing it up.
> > 
> > I'm not really sure where to look next.  There doesn't seem to be 
> > anything actually *using* the CPU or GPU to any significant degree.  
> > Certainly not enough to explain the temperature difference.  If anyone 
> > has advice on what diagnostics to try next to find out what's going on I'd 
> > appreciate it.
> > 
> > Thanks,
> > LMP
> 
> The fact you get a report of 40° hotter temperature does not mean the CPU 
> temperature is actually higher.  Do you get any other indications the PC is 
> running hot, e.g. the fans are spinning faster, the heatsink is hotter to the 
> touch, or when measured with an infrared thermometer?
> 
> Is the 'sensors' command output, after you have run sensors-detect, different 
> than Ubuntu's?
> 
> Is the content of /etc/sensors3.conf different between the two distros?
> 
> Are there different sensor modules loaded?
> 
> Some chip drivers report spurious results and need specifying a different 
> measurement method/sensor device.  You can take a look here for you CPU/MoBo
> chip:
> 
> https://www.kernel.org/doc/Documentation/hwmon/
>

Sorry, I forgot to mention that.  The fans definitely run faster.  Which is 
fortunate, because checking the CPU heatsink with an IR thermometer produces more 
varied results than the internal sensor, but it's still 30-40° hotter.

Driver modules loaded by each distro don't seem substantially different.  
Ubuntu has intel-spi loaded while Gentoo does not, but that's it for 
differences from lspci.  I can look for other differences as well, but it 
doesn't seem to be a difference in sensor calibration unfortunately.

Also of note, this same stage4 OS snapshot runs on other systems without 
anything unusual happening, but this is the only configuration using the N97 
processor.

LMP




[gentoo-user] Abnormal processor temperature.

2023-11-21 Thread Laurence Perkins
I have a system here running an Intel N97 processor, which is idling at 70-80C 
on Gentoo with all cores 99% idle.  This is 40 degrees hotter than it runs on 
Ubuntu or Windows 10.

Powertop confirms that the CPU is spending nearly all of its time in idle mode. 
 I have tried both the 6.1 and the 6.5 kernels, including the pre-compiled 
gentoo-kernel-bin version just to be sure it's not something in *my* config 
messing it up.

I'm not really sure where to look next.  There doesn't seem to be anything 
actually *using* the CPU or GPU to any significant degree.  Certainly not 
enough to explain the temperature difference.  If anyone has advice on what 
diagnostics to try next to find out what's going on I'd appreciate it.

Thanks,
LMP


RE: [gentoo-user] Emerge -K ignoring new packages

2023-11-20 Thread Laurence Perkins


> -Original Message-
> From: Wols Lists  
> Sent: Monday, November 20, 2023 9:46 AM
> To: gentoo-user@lists.gentoo.org
> Subject: Re: [gentoo-user] Emerge -K ignoring new packages
> 
> On 20/11/2023 17:12, Vitaliy Perekhovy wrote:
> > On Mon, Nov 20, 2023 at 05:07:45PM +, Peter Humphrey wrote:
> >> Hello list,
> >>
> >> Now that I have my NFS set up (with help - thanks) the next problem 
> >> is that, having new packages built by my workstation over NFS, emerge 
> >> on the tiny box is ignoring all those new packages. And yes, I have 
> >> checked that they do exist, and in the right place: /var/cache/packages/ .
> >>
> >> The man page says that any new package will cause a remerge, so what 
> >> has tripped me up this time?
> >>
> >> --
> >> Regards,
> >> Peter.
> > 
> > Default location for binary packages is /var/cache/binpkgs/
> > 
> Can't remember what I did, but the first thing to check is you're using the 
> same make flags (unless of course, you're sharing /etc/portage).
> 
> Then I seem to remember using -bK or something like that. So the command I'm 
> giving emerge is "use a binary if you can find it, otherwise build it".
> 
> Because I might emerge packages on either machine, that worked great for me. 
> And I actually usually emerged stuff on the slower machine, because it was 
> more reliable ... :-)
> 
> Cheers,
> Wol
>

-K is "binpkg or nothing" and will abort if the entire list of packages aren't 
available in binpkg form.
-k will use binpackages if available and build otherwise.
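
For reference, the long forms make the difference a bit more obvious; a minimal 
sketch of the two invocations on the consuming box:

emerge -avuDN --usepkg @world        # -k: use binpkgs where available, build the rest
emerge -avuDN --usepkgonly @world    # -K: binpkgs only, fail if one is missing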

Beware of creating binpackages for anything in the virtual/* category.  No good 
can come of it.

LMP


RE: [gentoo-user] RE: libva-glx.so.2

2023-10-23 Thread Laurence Perkins
> From: Dale rdalek1...@gmail.com<mailto:rdalek1...@gmail.com>
> Sent: Monday, October 23, 2023 12:47 PM
> To: gentoo-user@lists.gentoo.org<mailto:gentoo-user@lists.gentoo.org>
> Subject: Re: [gentoo-user] RE: libva-glx.so.2
>
> Laurence Perkins wrote:
> > From: Laurence Perkins lperk...@openeye.net<mailto:lperk...@openeye.net>
> > Sent: Monday, October 23, 2023 11:05 AM
> > To: gentoo-user@lists.gentoo.org<mailto:gentoo-user@lists.gentoo.org>
> > Subject: [gentoo-user] libva-glx.so.2
> >
> > I have a program with an embedded copy of ffmpeg that is choking on a lack 
> > of libva-glx.so.2.
> >
> > Debian has it in a libva-glx package.
> > Manjaro has it in their general libva package.
> >
> > On Gentoo there is no trace of it...  At least not until I use the 'ebuild' 
> > tool to build the libva package manually and sic 'find' on the compilation 
> > results.
> >
> > Then I find 
> > /var/tmp/portage/media-libs/libva-2.19.0/work/libva-2.19.0/va/.libs/libva-glx.so.2
> >
> > So...  apparently it builds the glx support libraries in a *hidden* folder? 
> >  And I'm guessing then the ebuild's install phase fails to spot it?
> >
> > I'm thinking this is probably worthy of a bug report, but I want to make 
> > sure there's not some reason why these libraries are being left out on 
> > purpose first.  Other distros have them, and I see at least one bug report 
> > about Steam now requiring it, so it probably needs sorting out somehow...
> >
> >
> > LMP
>
> Bit of a followup.  Poking around a bit more I find "-Dwith_glx=no" embedded 
> in the ebuild.  So apparently somebody back in the day decided it wasn't even 
> worthy of a USE flag.  I can tweak my own locally for now and will put a 
> request for it on the bug tracker.
>
> LMP
>
>
> If you not familiar with this site, you may want to check it out.  It comes 
> in handy when trying to find what file belongs to what package when you don't 
> have it yet.
>
> https://www.portagefilelist.de/index.php
>
> I found this:
>
> https://www.portagefilelist.de/index.php?fs=*libva*glx*#panchor
>
> In case that link doesn't work, I searched for this.  *libva*glx*  It shows 
> the following, for others who don't want to search.
>
>
> FilenameFilepath  
>Category Package Version Arch
> libva-glx.so.1.4000.0 /usr/lib64/libva-glx.so.1.4000.0 
> media-libs media-libs/libva-compat 1.8.3-r2 amd64
> libva-glx.so.1 /usr/lib64/libva-glx.so.1  
>media-libs media-libs/libva-compat 1.8.3-r2 amd64
> libva-glx.so.1.4000.0.debug  
> /usr/lib/debug/usr/lib64/libva-glx.so.1.4000.0.debug media-libs 
> media-libs/libva-compat 1.8.3-r2 amd64
>
>
> If that is the exact ones you are looking for, that's where they come from.
>
> Hope that helps.
>
> Dale
>
> :-)  :-)
>

Well that's interesting...  I have PFL installed on my systems so I can use the 
"e-file" command to search that database, but I was searching specifically for 
the libva-2.x version of it and so found nothing.

Meanwhile, further digging has found 
https://gitweb.gentoo.org/repo/gentoo.git/commit/?id=e9eaf08653a2ada19b94c9807a6b85008a125b3c

So apparently they turned it off back in April, but left it on in the 
libva-compat package since that doesn't cause circular dependency issues.

The problem is that everyone else still ships it, so if you're running software that 
wasn't compiled on Gentoo, it might not work as well as you'd like.

I'm going to ask nicely and see if we can have a USE flag for it.  I'd really 
rather not maintain a fork of the ebuild for something so trivial.
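
For anyone wanting the local workaround in the meantime, the general shape is to 
copy the ebuild into a local overlay and flip the meson switch; a rough sketch 
(paths and version are illustrative, and it assumes a local overlay is already set 
up in repos.conf):

mkdir -p /var/db/repos/local/media-libs/libva
cp /var/db/repos/gentoo/media-libs/libva/libva-2.19.0.ebuild /var/db/repos/local/media-libs/libva/
# edit the copy, changing -Dwith_glx=no to -Dwith_glx=yes, then:
ebuild /var/db/repos/local/media-libs/libva/libva-2.19.0.ebuild manifest
emerge -1av media-libs/libva::local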

LMP


[gentoo-user] RE: libva-glx.so.2

2023-10-23 Thread Laurence Perkins
> From: Laurence Perkins lperk...@openeye.net<mailto:lperk...@openeye.net>
> Sent: Monday, October 23, 2023 11:05 AM
> To: gentoo-user@lists.gentoo.org<mailto:gentoo-user@lists.gentoo.org>
> Subject: [gentoo-user] libva-glx.so.2
>
> I have a program with an embedded copy of ffmpeg that is choking on a lack of 
> libva-glx.so.2.
>
> Debian has it in a libva-glx package.
> Manjaro has it in their general libva package.
>
> On Gentoo there is no trace of it...  At least not until I use the 'ebuild' 
> tool to build the libva package manually and sic 'find' on the compilation 
> results.
>
> Then I find 
> /var/tmp/portage/media-libs/libva-2.19.0/work/libva-2.19.0/va/.libs/libva-glx.so.2
>
> So...  apparently it builds the glx support libraries in a *hidden* folder?  
> And I'm guessing then the ebuild's install phase fails to spot it?
>
> I'm thinking this is probably worthy of a bug report, but I want to make sure 
> there's not some reason why these libraries are being left out on purpose 
> first.  Other distros have them, and I see at least one bug report about 
> Steam now requiring it, so it probably needs sorting out somehow...
>
>
> LMP

Bit of a followup.  Poking around a bit more I find "-Dwith_glx=no" embedded in 
the ebuild.  So apparently somebody back in the day decided it wasn't even 
worthy of a USE flag.  I can tweak my own locally for now and will put a 
request for it on the bug tracker.

LMP


[gentoo-user] libva-glx.so.2

2023-10-23 Thread Laurence Perkins
I have a program with an embedded copy of ffmpeg that is choking on a lack of 
libva-glx.so.2.

Debian has it in a libva-glx package.
Manjaro has it in their general libva package.

On Gentoo there is no trace of it...  At least not until I use the 'ebuild' 
tool to build the libva package manually and sic 'find' on the compilation 
results.

Then I find 
/var/tmp/portage/media-libs/libva-2.19.0/work/libva-2.19.0/va/.libs/libva-glx.so.2

So...  apparently it builds the glx support libraries in a *hidden* folder?  
And I'm guessing then the ebuild's install phase fails to spot it?

I'm thinking this is probably worthy of a bug report, but I want to make sure 
there's not some reason why these libraries are being left out on purpose 
first.  Other distros have them, and I see at least one bug report about Steam 
now requiring it, so it probably needs sorting out somehow...


LMP


RE: [gentoo-user] Re: OFF TOPIC Need Ubuntu network help: boot loader info

2023-10-19 Thread Laurence Perkins
>
>That config kinda reminds me of the old grub.  A title line, location of 
>kernel and then options.  Sounds easy enough.  The new grub config is almost 
>impossible to config by hand.  They had to make a tool to do it. That says a 
>lot there.  ;-) 


>
>Dale
>

You can still write that kind of config for the new GRUB if you want: load whatever 
drivers you need, then add a menuentry with title, kernel, initramfs, etc.  The rest 
of the boilerplate is there to support dynamically finding the boot drive if 
something changes with your disks, and to enable tools like grub-reboot.  Handy 
features, and grub-mkconfig is quicker and easier than editing even a simple config 
by hand.  But if you really want to do it yourself, that still works.
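
For comparison, a hand-written entry for GRUB 2 really can be almost as terse as the 
old format; a minimal sketch (partition layout and file names are made up):

# /boot/grub/grub.cfg, written by hand, no grub-mkconfig involved
insmod part_gpt
insmod ext2
set root=(hd0,gpt2)

menuentry "Gentoo Linux" {
    linux  /vmlinuz-6.1.57-gentoo root=/dev/sda3 ro
    initrd /initramfs-6.1.57-gentoo.img
}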

LMP


RE: [gentoo-user] world updates blocked by Qt

2023-10-11 Thread Laurence Perkins
> -Original Message-
> From: Philip Webb  
> Sent: Wednesday, October 11, 2023 9:44 AM
> To: Gentoo User 
> Subject: [gentoo-user] world updates blocked by Qt
> 
> 231011 Alan McKinnon wrote:
> > Today a sync and emerge world produces a huge list of blockers.
> > qt 5.15.10 is currently installed and qt 5.15.11 is new in the tree 
> > and being blocked.
> > All the visible blockers are Qt itself so --verbose-conflicts is needed.
> 
> My experience for some time has been that Qt pkgs block one another, st the 
> only way out is to unmerge them all, then remerge them all.
> If anyone knows a better method, please let us know
> 
If you don't want to do that, then adding all the currently installed qt 
packages to the merge list explicitly works as well.  If you're good with eix's 
search and output settings it can be done without too much agony.
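
A rough sketch of what I mean, assuming the blockers are all in the dev-qt category 
(the eix flags here are from memory, so double-check them against eix(1)):

# re-add every installed dev-qt package to the merge list explicitly
emerge -1av --verbose-conflicts $(eix -I -C --only-names dev-qt)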

LMP



RE: [gentoo-user] Controlling emerges

2023-09-21 Thread Laurence Perkins


> -Original Message-
> From: Wol  
> Sent: Wednesday, September 20, 2023 3:07 PM
> To: gentoo-user@lists.gentoo.org
> Subject: Re: [gentoo-user] Controlling emerges
> 
> > What you have is not a portage problem. It is a orthodox parallelism 
> > problem, and I think you are thinking your constraint is unique in the 
> > work - it isn't.
> > With parallelism, trying to fiddle single nodes to improve things 
> > overall never really works out.
> > 
> A big problem you are missing is that portage does not have control of the 
> system. It can control its usage of the system, but if I want emerge to use 
> as much SPARE resource IN THE BACKGROUND as it can without impacting on 
> on-line responsiveness, that is HARD.
> 
> I would like to be able to tell portage "these programs are resource hogs, 
> don't parallelise them". If portage has loads of little jobs, it can fire 
> them off one after the other as resource becomes available. If it fires a hog 
> (or worse, two) off at the same time, the system can rapidly collapse under 
> load.
> 
> Even better, if portage knew roughly how much resource each job required, it 
> could (within constraints) start with the jobs that required least resource 
> and run loads of them, and by firing jobs off in order of increasing 
> demandingness, the number of jobs running in parallel would naturally tail 
> off.
> 
> At the end of the day, if the computer takes an extra 20% time, I'm not 
> bothered. If I'm sat at the computer 20% time extra because the system isn't 
> responding because emerge has bogged it down, then I do care. And when I'm 
> building things like webkit-gtk, llvm, LO, FF and TB, they do hammer my 
> system. If they're running in parallel, my system would be near unusable.
> 
> Cheers,
> Wol

Maybe take a look at "cpulimit" out of the repos.  I used to use it on one of 
my low-power systems to control how much load the various compilers were 
allowed to put on the system so that it could keep doing other tasks.
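
By way of illustration, the invocation was something along these lines (the 
percentage is relative to one core, if I remember the semantics right):

cpulimit -e cc1plus -l 50 &    # throttle the C++ compiler proper to ~50% of a core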

I think there are some other, similar tools as well.

LMP


RE: [gentoo-user] Controlling emerges

2023-09-18 Thread Laurence Perkins


> From: Alan McKinnon alan.mckin...@gmail.com
> Sent: Monday, September 18, 2023 9:13 AM
> To: gentoo-user@lists.gentoo.org
> Subject: Re: [gentoo-user] Controlling emerges
>
>
>
> On Mon, Sep 18, 2023 at 6:03 PM Peter Humphrey 
> pe...@prh.myzen.co.uk wrote:
> On Monday, 18 September 2023 14:48:46 BST Alan McKinnon wrote:
> > On Mon, Sep 18, 2023 at 3:44 PM Peter Humphrey 
> > pe...@prh.myzen.co.uk
> >
> > wrote:
> > > It may be less complex than you think, Jack. I envisage a package being
> > > marked
> > > as solitary, and when portage reaches that package, it waits until all
> > > current
> > > jobs have finished, then it starts the solitary package with the
> > > environment
> > > specified for it, and it doesn't start the next one until that one has
> > > finished.
> > > The dependency calculation shouldn't need to be changed.
> > >
> > > It seems simple the way I see it.
> >
> > How does that improve emerge performance overall?
>
> By allocating all the system resources to huge packages while not flooding the
> system with lesser ones. For example, I can set -j20 for webkit-gtk today
> without overflowing the 64GB RAM, and still have 4 CPU threads available to
> other tasks. The change I've proposed should make the whole operation more
> efficient overall and take less time.
>
> As things stand today, I have to make do with -j12 or so, wasting time and
> resources. I have load-average set at 32, so if I were to set -j20 generally
> I'd run out of RAM in no time. I've had many instances of packages failing to
> compile in a large update, but going just fine on their own; and I've had
> mysterious operational errors resulting, I suspect, from otherwise undetected
> miscompilation.
>
> Previous threads have more detail of what I've tried already.
>
> I did read all those but no matter how you move things around you still have 
> only X resources available all the time.
> Whether you just let emerge do it's thing or try get it to do big packages on 
> their own, everything is still going to use the same number of cpu cycles 
> overall and you will save nothing.
>
> If webkit-gtk is the only big package, have you considered:
>
> emerge -1v webkit-gtk && emerge -avuND @world?
>
>
> What you have is not a portage problem. It is a orthodox parallelism problem, 
> and I think you are thinking your constraint is unique in the work - it isn't.
> With parallelism, trying to fiddle single nodes to improve things overall 
> never really works out.
>
> Just my $0.02
>
>
> Alan
>
> --
> Alan McKinnon
> alan dot mckinnon at gmail dot com
>

Note that on my systems I just make heavy use of the various load-average limiting 
options, and as long as two of the big packages don't start within seconds of each 
other, that does a pretty good job of letting them run by themselves.
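
For reference, the knobs I mean live in make.conf; a sketch with illustrative 
numbers:

# /etc/portage/make.conf
MAKEOPTS="-j20 -l16"                              # per-package: cap jobs and load
EMERGE_DEFAULT_OPTS="--jobs=4 --load-average=16"  # across packages: same idea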

If things do get in a snarl, you can always use kill -18/-19 (SIGCONT/SIGSTOP) to 
suspend a few compile jobs until the system stops thrashing, then resume them as 
capacity permits.
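
In other words, something like this (PIDs made up):

kill -STOP 12345 12346   # signal 19: pause the offending compile jobs
kill -CONT 12345 12346   # signal 18: let them carry on once the thrashing stops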

LMP


RE: [gentoo-user] Is distfile partial mirror with failover possible?

2023-09-05 Thread Laurence Perkins
>
>
> From: Alan McKinnon alan.mckin...@gmail.com
> Sent: Monday, September 4, 2023 12:38 PM
> To: gentoo-user@lists.gentoo.org
> Subject: Re: [gentoo-user] Is distfile partial mirror with failover possible?
>
> BTW Welcome back Alan, but leave your dirty top-posting in Archland :P
>
> Eh, I use gmail in the browser . the blerry thing is built to top post, 
> like Outlook
>
>
> Alan
>

I'm in the same boat, but Notepad++ on Windows and Vim on Linux both support 
column-mode editing, so a couple quick cut-n-pastes and I can preserve the 
illusion that I'm using a sensible client.

LMP


RE: [gentoo-user] VPN newbie questions

2023-08-21 Thread Laurence Perkins



> -Original Message-
> From: Walter Dnes  
> Sent: Saturday, August 19, 2023 7:34 PM
> To: Gentoo Users List 
> Subject: [gentoo-user] VPN newbie questions
> 
>   I've been on Gentoo for years and years, but I've never used a VPN, so 
> consider me an absolute newbie.  Canadian big news media has successfully 
> lobbied our government to implement a link tax.  Google has decided to avoid 
> the tax by not linking to it in Google search.  This morning I tried to find 
> some news about the Russian invasion from my desktop PC, and the results were 
> brutal.  No links from any Canadian sources (as expected) or US (CNN) or UK 
> (BBC).  Talk about draconian.  I need a VPN to make me "self-identify" as 
> being in the USA, or wherever, so that Google doesn't censor news on me.
> 
>   Many commercial VPNs claim to support linux.  Do they do this at the OS 
> level as an executable, or at the browser level as an extension?
> Extension would be bad for me, because Chrome and Firefox extensions don't 
> work on Pale Moon (my "daily driver").  I notice the existance of a "net-vpn" 
> category in the repo, too.  What works for people here, and how well?  What 
> are your experiences?
> 
> --
> I've seen things, you people wouldn't believe; Gopher, Netscape with frames, 
> the first Browser Wars.  Searching for pages with AltaVista, pop-up windows 
> self-replicating, trying to uninstall RealPlayer.  All those moments, will be 
> lost in time like tears in rain... time to die.
> 
> 

If you want something free and in the tree, Tor can be told to expose a control 
port, which you can then use to tell it how many hops you want and which exit nodes 
to use.  One-hop tunnels are not particularly good at disguising you, but that's not 
your goal in this case.

At least...  Not until they decide to crack down on link-tax evasion.
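
The torrc side of it looks roughly like this (a sketch from memory; check the Tor 
manual for the exact option names before relying on it):

# /etc/tor/torrc
ControlPort 9051        # expose the control port for circuit/path fiddling
CookieAuthentication 1
ExitNodes {us}          # only exit through US relays
StrictNodes 1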

There used to be a GUI frontend for it called "Vidalia".  But it hasn't been 
updated in a long time.  Not sure if there's a replacement.

LMP



RE: [gentoo-user] Email clients

2023-07-31 Thread Laurence Perkins

> Jul 31, 2023 13:23:21 Matt Connell :
> 
> > On Mon, 2023-07-31 at 20:16 +0300, Alexe Stefan wrote:
> >>> Normally I would be in the chorus of "why do I need a whole entire 
> >>> web engine for an email client" but I'm also in the group of people 
> >>> who knows full well what the answer is.
> >> 
> >> What is the answer?
> >> Mutt doesn't need a web engine.
> > 
> > For the reason that you just demonstrated for the class: HTML emails.
> > 
> > Now, your simple mail shows just fine in a plain text only mail 
> > client, but in my world, and I'd wager most people's world, handling 
> > HTML messages (which includes CSS for legibility) is a necessity to 
> > some varying degree.
> > 
> > Don't get me wrong, I'm "team plaintext" all day every day but I'm not 
> > going to make my life more difficult on principles.  There are hills 
> > worth dying on but this isn't mine.
> Iirc, you can setup mutt to open html emails either in a web browser or with 
> something like w3m. There's no need for a web engine in a mail client when 
> you have a perfectly workable web engine in the browser. You can easily reply 
> to html mail in plain text either way, and most html mail are marketing or 
> newsletter emails from companies where replying isn't needed anyways.
> 

Unfortunately, that's not true any more.  The vast majority of email clients and web 
interfaces used by the technopeasants send HTML mail by default, so unless you're 
one of the lucky few who exchange email only with fellow hackers, you can expect to 
deal with a lot of HTML mail.

Still, you don't technically need to have the HTML engine *in* the client 
itself...  But that does make opening the silly things a bit quicker.
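
For mutt specifically, the usual trick is a mailcap entry plus auto_view, roughly 
like this (a sketch, not a complete config):

# ~/.mailcap
text/html; w3m -dump -T text/html %s; copiousoutput

# ~/.muttrc
auto_view text/html
alternative_order text/plain text/html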

LMP


RE: [gentoo-user] Simple installation on BTRFS

2023-07-31 Thread Laurence Perkins



>-Original Message-
>From: Neil Bothwick  
>Sent: Saturday, July 29, 2023 5:43 AM
>To: gentoo-user@lists.gentoo.org
>Subject: Re: [gentoo-user] Simple installation on BTRFS
>
>On Sat, 29 Jul 2023 10:47:53 +0100, Michael wrote:
>> I doubt I will need anything so frequent, these days my data does not 
>> change often enough.  Daily snapshots should do the trick and I could 
>> keep more of them.
>
>Snapshots don't take up any space if the data does not change, so frequent 
>snapshots don't hurt. The biggest impact on snapshot sizes on a Gentoo system 
>is kernel source package updates.
>
>
For informational purposes, I once fat-fingered the cron job that takes hourly 
snapshots of my workspace during work hours, so that it was taking snapshots every 
minute instead.

I didn't notice any adverse effects until I had well over 60,000 snapshots.

The main adverse effect at that point was that doing anything with snapshots 
started to get...  sluggish.

The second adverse effect was that listing the contents of the snapshots directory 
took several minutes, and I had to write a script to delete them because the list 
was too long for my kernel's command-line length limit.  (I also found out how to 
raise that limit, if anyone wants to know, but decided it wasn't worth the effort.)
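
The script itself was nothing clever; batching the deletions gets around the 
argument-length limit, along these lines (paths are made up, and it assumes the 
snapshot names sort chronologically):

# keep the newest 100, delete the rest in batches of 50
ls -d /mnt/work/.snapshots/workspace-* | head -n -100 | xargs -r -n 50 btrfs subvolume delete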

It all cleaned up fine in a few hours though.

LMP



RE: [gentoo-user] Simple installation on BTRFS

2023-07-28 Thread Laurence Perkins

>If you can run two disks and raid, that's always a good idea. SMART is 
>supposed to catch disk problems, but they still do die without warning.
>
>btrfs raid is (still) full of gotchas, as far as I know.
>
>Don't use anything higher than raid-1. Parity raid isn't reliable last I knew 
>...

It's still not *officially* recognized as safe, but I have seen a number of people 
test the kinds of abuse that used to break it without losing their data.
The big thing is that after a power failure you really *need* to run a scrub, just 
in case there was an incomplete write.  One incomplete write will be fine, but if 
you let them stack up you will *eventually* start losing things.
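
That is, after any unclean shutdown (substitute your filesystem's mount point):

btrfs scrub start /mountpoint     # kick it off; it runs in the background
btrfs scrub status /mountpoint    # check progress and any uncorrectable errors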


>> Your favoured snapshot/backup strategy?
>
>Manual ... probably shouldn't be. I snapshot / every friday before I do an 
>emerge on Saturday. /home I ought to snapshot more than I do.

Note that you can use the snapshot to build your updates as binpackages and test how 
they work before touching your main system.  That saves on service downtime and 
doesn't risk breaking your main system if something goes wrong.  It does mean maybe 
doing the upgrade twice, unless you just pivot to the snapshot once it's ready.

>
>WATCH YOUR FREE DISK. I think it's all sorted now, but whatever you're using 
>it was always a good idea not to go over 90% full. For a very long time, a 
>combination of snapshots and a full disk would wedgie the system, such that 
>the only way to free up space was to reformat the entire disk! As I say, I 
>think it's now fixed so you can delete snapshots, but >90% ain't a good idea 
>anyway

Mostly sorted.  You can still get it wedged if you run the disk all the way up 
to full, but you can, at least, add another volume to give yourself workspace 
to straighten it out.  And it only seems to happen if you're using 
disparate-sized disks in a RAID, which throws off its free space calculations.

LMP


RE: [gentoo-user] Instrumenting emerges

2023-07-14 Thread Laurence Perkins
And if you want more than just the load average, atop keeps a log as well as 
being a top-style monitor program.
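
In case it helps, the logging mode is roughly this (interval in seconds; the path is 
just an example):

atop -w /var/log/atop_emerge.raw 60 &   # record a snapshot every minute
atop -r /var/log/atop_emerge.raw        # replay it later, interactively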

LMP

-Original Message-
From: Frank Steinmetzger  
Sent: Thursday, July 13, 2023 4:02 PM
To: gentoo-user@lists.gentoo.org
Subject: Re: [gentoo-user] Instrumenting emerges

Am Sat, Jul 08, 2023 at 12:42:13PM -0300 schrieb David M. Fellows:

> while [ true ] ; do cat /proc/loadavg |logger; sleep 60; done

A spec more elegant:

while sleep 60; do ... ; done

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

Taglines are like cars - You get a good one, then someone nicks it.


RE: [gentoo-user] Something's actively deleting libssl.so.1.1 on my system!

2023-06-21 Thread Laurence Perkins


>-Original Message-
>From: Alan Grimes  
>Sent: Tuesday, June 20, 2023 11:01 PM
>To: Gentoo User 
>Subject: [gentoo-user] Something's actively deleting libssl.so.1.1 on my 
>system!
>
>Having manually dropped a stage3 libssl into my system, it has disappeared 
>again!! What in god's name is going on? have I been virused
>
>
>--
>Beware of Zombies. =O
>#EggCrisis  #BlackWinter
>White is the new Kulak.
>Powers are not rights.

You could always chattr +i it and see what blows up...
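
Something along these lines, plus an audit watch if you want to catch the culprit in 
the act (the path is assumed to be the usual lib64 location, and this assumes auditd 
is installed and running):

chattr +i /usr/lib64/libssl.so.1.1                          # make it immutable
auditctl -w /usr/lib64/libssl.so.1.1 -p wa -k libssl-watch  # log anything touching it
ausearch -k libssl-watch                                    # check afterwards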

LMP


RE: [gentoo-user] Re: Can't upgrade portage or update/install ebuilds

2023-06-13 Thread Laurence Perkins


>From: Mitch D. futurehyp...@gmail.com
>Sent: Tuesday, June 13, 2023 9:36 AM
>To: gentoo-user@lists.gentoo.org
>Subject: Re: [gentoo-user] Re: Can't upgrade portage or update/install ebuilds
>
>On Tue, Jun 13, 2023 at 10:38 AM Grant Edwards 
>grant.b.edwa...@gmail.com wrote:
>On 2023-06-12, Wol antli...@youngman.org.uk 
>wrote:
>> On 09/06/2023 21:16, Grant Edwards wrote:
>>> On 2023-06-09, Daniel Pielmeier bil...@gentoo.org 
>>> wrote:
>>>
 If it is only about gemato then temporary disable the rsync-verify flag
 which pulls it in.

 # USE="-rsync-verify" emerge sys-apps/portage
>>>
>>> The problem I ran into is that you never know how many issues there
>>> are standing in the way of upgrading. The one time I decided to muscle
>>> my way through updating an "obsolete" Gentoo install, [...]
>>>
>>> You do learn alot about how portage/emerge works...
>>>
>> Learning that is a good idea maybe :-)
>>
>> But last time I had a well-out-of-date system, it was a long and
>> messy process ...
>>
>> What I did was, every time portage said "giving up" or "conflict found"
>> or whatever, I just took a note of as many of the packages I could
>> remember that portage said it could emerge, and then manually updated
>> them "emerge --update --one-shot".
>>
>> And any conflicts, if I dared, I simply deleted then "emerge -C --one-shot".
>
>IIRC, at one point Python was one of those problems, and I stupidly
>removed Python before realizing what that meant...
>
>Hilarity ensued.
>
>Removing/skipping as many of the non-essential "big" packages and
>their dependancies and getting the base system updated is indeed the
>best way to go.
>
>I second this approach. When rescuing a Gentoo system, my first step would be 
>to deselect any and every non-critical package from @world, then try to get 
>@system updated through any means necessary. In the past, I've removed 
>packages instead of deselecting them, but I've had cases where depclean 
>refused to do anything because there were already dependency problems, and 
>sometimes it's hard to know what's safe to unmerge with "-C".
>
I have noticed that doing a --unmerge on virtual/* clears away whole sections 
of conflicts in a lot of cases.

Doing the same on dev-perl/* is a decent trick too if things are snarled enough that 
perl-cleaner runs into conflicts.  But perl dependencies aren't always correctly 
spelled out, so you may have to reinstall some of it by hand.

And you'd be surprised how many "hard" dependency version requirements are "softer" 
than expected.  Using the "ebuild" tool to force a package to "just do what it's 
told" and install the new version, then running "emerge -e @world" at the end of it 
all to clean up any mess, uses a lot of machine time but can save a lot of human 
time.
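
By "the ebuild tool" I mean something along these lines (the package path is purely 
illustrative):

# build and install one package directly, bypassing emerge's dependency resolver
ebuild /var/db/repos/gentoo/dev-perl/Foo-Bar/Foo-Bar-1.0.ebuild clean merge
# then, once the logjam is broken, rebuild everything to mop up
emerge -e @world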

LMP


[gentoo-user] Strange swap behaviour.

2023-03-21 Thread Laurence Perkins
I have a couple systems here which are swapping out constantly when the system 
is under load.  This is almost certainly cache pressure since "under load" 
means "streaming large amounts of data to disk".

The odd thing is they almost never swap back in.  The swap device is getting 
about 2MB/s worth of writes, and yet there's never more than a few kilobytes of 
swap in use.

Examining it with atop, none of the processes on the system show any swap in 
use at all.
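
For anyone wanting to cross-check the same thing, the per-process figures can also 
be pulled straight out of /proc; a quick sketch:

grep VmSwap /proc/[0-9]*/status | awk '{total += $2} END {print total " kB"}'

On these boxes that total stays near zero, matching the "few kilobytes in use" 
figure above despite the constant writes.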

Setting swappiness to 0 doesn't slow the rate down at all either, which is also 
interesting.

I'm hoping somebody has some clues about how to figure out what's being swapped out 
and why it's being immediately deallocated or overwritten, because this looks like 
pure wear and tear with no useful purpose being served.  I'd like to make it stop, 
but would rather not do that by disabling swap entirely.

LMP




RE: [gentoo-user] Jobs and load-average

2023-02-16 Thread Laurence Perkins


> -Original Message-
> From: Rich Freeman  
> Sent: Thursday, February 16, 2023 6:24 AM
> To: gentoo-user@lists.gentoo.org
> Subject: Re: [gentoo-user] Jobs and load-average
> 
> On Thu, Feb 16, 2023 at 8:39 AM Peter Humphrey  wrote:
> >
> > I've just looked at 'man make', from which it's clear that -j = 
> > --jobs, and that both those and --load-average are passed to 
> > /usr/bin/make, presumably untouched unless portage itself has 
> > identically named variables. So I wonder how feasible it might be for 
> > make to incorporate its own checks to ensure that the load average is 
> > not exceeded. I am not a programmer (not for at least 35 years, anyway), so 
> > I have to leave any such suggestion to the experts.
> >
> 
> Well, if we just want to have a fun discussion here are my thoughts.
> However, the complexity vs usefulness outside of Gentoo is such that I don't 
> see it happening.
> 
> For the most typical use case - a developer building the same thing over and 
> over (which isn't Gentoo), then make could cache info on resources consumed, 
> and use that to make more educated decisions about how many tasks to launch.  
> That wouldn't help us at all, but it would help the typical make user.  
> However, the typical make user can just tune things in other ways.
> 
> It isn't going to be possible for make to estimate build complexity in any 
> practical way.  Halting problem aside maybe you could build in some smarts 
> looking at the program being executed and its arguments, but it would be a 
> big mess.
> 
> Something make could do is tune the damping a bit.  It could gradually 
> increase the number of jobs it runs and watch the load average, and gradually 
> scale it up appropriately, and gradually scale down if CPU is the issue, or 
> rapidly scale down if swap is the issue.  If swapping is detected it could 
> even suspend most of the tasks it has spawned and then gradually continue 
> them as other tasks finish to recover from this condition.  However, this 
> isn't going to work as well if portage is itself spawning parallel instances 
> of make - they'd have to talk to each other or portage would somehow need to 
> supervise things.
> 
> A way of thinking about it is that when you have portage spawning multiple 
> instances of make, that is a bit like adding gain to the --load-average 
> MAKEOPTS.  So each instance of make independently looks at load average and 
> takes action.  So you have an output (compilers that create load), then you 
> sample that load with a time-weighted average, and then you apply gain to 
> this average, and then use that as feedback.  That's basically a recipe for 
> out of control oscillation.
> You need to add damping and get rid of the gain.
> 
> Disclaimer: I'm not an engineer and I suspect a real engineer would be able 
> to add a bit more insight.
> 
> Really though the issue is that this is the sort of thing that only impacts 
> Gentoo and so nobody else is likely to solve this problem for us.
> 
> --
> Rich

Expanding the capabilities of app-admin/cpulimit to tally up memory usage and 
suspend new compile jobs temporarily when it gets too high probably wouldn't be 
too horribly difficult...

LMP


RE: [gentoo-user] Re: my 5.15.93 kernel keeps rebooting

2023-02-16 Thread Laurence Perkins



>-Original Message-
>From: John Covici  
>Sent: Wednesday, February 15, 2023 7:20 AM
>To: gentoo-user@lists.gentoo.org
>Subject: Re: [gentoo-user] Re: my 5.15.93 kernel keeps rebooting
>
>On Wed, 15 Feb 2023 09:50:27 -0500,
>Grant Edwards wrote:
>> 
>> On 2023-02-14, Rich Freeman  wrote:
>> 
>> > Where are you getting this from, the system log/journal?  This 
>> > doesn't seem like a clean shutdown, so if it is a kernel PANIC I 
>> > wouldn't expect the most critical info to be in the log (since it 
>> > will stop syncing to protect the filesystem).  The details you need 
>> > probably will be displayed on the console briefly.  You can also 
>> > enable a network console, which will send the dmesg output 
>> > continuously over UDP to another device.  This won't be interrupted 
>> > by a PANIC unless there is some issue with the hardware or networking 
>> > stack.
>> 
>> If you've got a serial port[1], you could also set up serial logging. 
>> Though using serial ports have become a bit of a lost art, the serial 
>> console code in the kernel is pretty carefully designed to be the last 
>> man standing when things start to die. It's possible (though I 
>> wouldn't say probable) that a serial console will be able to show you 
>> stuff closer to the event horizon than a network console can.
>> 
>> Anyway, since still I'm in the serial port business (yes, there are 
>> still plenty of people using serial ports in industrial settings) I 
>> had to mention it...
>> 
>> [1] For this purpose you want a plain old UART on the motherboard type
>> seial port. You'd be surprised how many motherboards still have
>> them. Even though they're never brought out to a DB9 connector on
>> the back panel, there's often an 8-pin header on the edge of the
>> board somewhere, so you'd need one of these:
>> 
>> 
>> https://www.amazon.com/C2G-27550-Adapter-Bracket-Motherboards/dp/B0002
>> J27R8/
>
>I do have one which I use for my speech synthesizer.  I also have one on my 
>other box which I could hook up -- if I can find my null modem cable.  I think 
>I will try the netconsole first and the serial console if that does not work.
>
>Thanks for the hint.
>
>
https://wiki.gentoo.org/wiki/Kernel_Crash_Dumps  is another option if you're 
somehow not getting enough information out of the console.  More complex to set 
up, but you can take an actual debugger to the result and hopefully find out 
exactly what's going on.
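
The rough shape of it, as far as I remember (treat this as a sketch and follow the 
wiki for the real details; file names are illustrative):

# 1. reserve memory for the capture kernel on the normal kernel's command line:
#      crashkernel=256M
# 2. load a capture kernel into that reservation:
kexec -p /boot/vmlinuz-capture --initrd=/boot/initramfs-capture.img --append="root=/dev/sda3 ro irqpoll maxcpus=1"
# 3. after a panic the capture kernel boots, /proc/vmcore holds the dump,
#    and you can go at it with the crash(8) utility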

LMP



RE: [gentoo-user] Glibc and binpackages

2023-01-17 Thread Laurence Perkins
> From: John Blinka john.bli...@gmail.com<mailto:john.bli...@gmail.com>
> Sent: Friday, January 13, 2023 8:17 AM
> To: gentoo-user@lists.gentoo.org<mailto:gentoo-user@lists.gentoo.org>
> Subject: Re: [gentoo-user] Glibc and binpackages
>
>
>
> > On Thu, Jan 12, 2023 at 12:54 PM Laurence Perkins 
> > lperk...@openeye.net<mailto:lperk...@openeye.net> wrote:
> > I’m not sure if I’m doing something horribly wrong, or missing something 
> > blindingly obvious, but I’ve just had to boot a rescue shell yet again, so 
> > I’m going to ask.
> >
> > To save time and effort, I have my big, powerful machine create binpackages 
> > for everything when it updates, and then let all my smaller machines pull 
> > from that.  It works pretty well for the most part.
>
> I do something quite similar, but have never had a glibc problem. Maybe the 
> problem lies in differences between the specific details of our two 
> approaches.
>
> I have 3 boxes with different hardware but identical portage setup, identical 
> world file, identical o.s., etc, even identical CFLAGS, CPPFLAGS and 
> CPU_FLAGS_X86 despite different processors. Like you, I build on my fastest 
> box (but offload work via distcc), and save binpkgs. After a world update 
> (emerge -DuNv —changed-deps @world) , I rsync all repositories and binpkgs 
> from the fast box to the others. An emerge -DuNv —changed-deps —usepkgonly 
> @world on the other boxes completes the update. I do this anywhere from daily 
> to (rarely) weekly. Portage determines when to update glibc relative to other 
> packages. There hasn’t been a problem in years with glibc.

I suspect it's the binpkg-only bit.  I keep USE flags, Python versions, and so forth 
the same between the different machines, but the list of installed packages is not 
entirely identical.

I do sync the ebuild repository from the main machine so everybody has the same 
package list.  But the primary goal is to avoid building huge things like webkit 
over and over again; most of the little stuff hardly matters.

glibc issues aren't exactly a regular occurrence, even when I'm being careless.  
It's just that when they do happen, it's painful enough to make me wonder if maybe 
the glibc version should be included in the binpackage metadata as an extra 
safeguard.

>
> I believe there are more sophisticated ways to supply updated portage trees 
> and binary packages across a local network.  I think there are others on the 
> list using these more sophisticated techniques successfully. Just a plain 
> rsync satisfies my needs.
>

Oh, dozens, probably.  The big question is whether you're happy with having a master 
copy on one machine and making all the modifications there, or whether you need to 
propagate changes from the clients back upstream automatically as well.

> It’s not clear to me whether you have the problem on  your big powerful 
> machine or on your other machines. If it’s the other machines, that suggests 
> that portage knows the proper build sequence on the big machine and somehow 
> doesn’t on the lesser machines. Why? What’s different?

The big machine is fine.  Emerge handles everything perfectly every time if 
there are no binpackages involved.  It's the little machines that occasionally 
seem to get confuzzled.  Maybe I should try always setting --with-bdeps=y so it 
doesn't think it can skip things...

>
>Perhaps there’s something in my update frequency or maintaining an identical 
>setup on all my machines that avoids the problem you’re having?
>
>If installing glibc first works, then maybe put a wrapper around your emerge? 
>Something that installs glibc first if there’s a new binpkg then goes on to 
>the remaining updates.

That's not a terrible idea, although running emerge twice every time just to check 
would slow things down considerably.  Probably better to just get it through my 
thick head to update core libraries first.
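
The wrapper really could be as dumb as this (a sketch, assuming binpkg-only updates 
as in my setup):

#!/bin/sh
# pull in a newer glibc binpkg first, if there is one, then do the rest
emerge -1uK sys-libs/glibc && emerge -avuDNK @world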

>
>Just offered in case there’s a useful hint from my experience - not arguing 
>that mine is the one true way (tm).
>
>HTH,
>
>John Blinka

Thanks for the info.  Gives me some things to think about.

LMP


[gentoo-user] Glibc and binpackages

2023-01-12 Thread Laurence Perkins
I'm not sure if I'm doing something horribly wrong, or missing something 
blindingly obvious, but I've just had to boot a rescue shell yet again, so I'm 
going to ask.

To save time and effort, I have my big, powerful machine create binpackages for 
everything when it updates, and then let all my smaller machines pull from 
that.  It works pretty well for the most part.

But when there's a glibc update I have to specifically install it first.  If I 
don't, then about half the time emerge decides that, because it doesn't have to 
worry about build dependencies for binpackages, it can obviously update glibc 
last...  Then it updates something that's needed for handling updates, and that's 
it, stuck.  If I'm lucky it's a compile tool rather than one needed for handling 
binpackages, and I can just tell it to do the glibc package next.  Quite often, 
though, it's something like tar that's fundamental to installing anything at all, 
and I have to ferry the new glibc over on a USB stick and unpack it with busybox...  
Occasionally it's something critical for the whole system, and then I have to boot 
to a rescue shell of some kind.

Think it's worth a feature request to have emerge prioritize glibc as high up 
in the list as it can when installing things?  Or am I the only one who runs 
into this?

LMP


RE: [gentoo-user] Looking for simple GUI MUA that works with ssmtp

2022-12-28 Thread Laurence Perkins



> -Original Message-
> From: Walter Dnes  
> Sent: Wednesday, December 28, 2022 9:17 AM
> To: gentoo-user@lists.gentoo.org
> Subject: Re: [gentoo-user] Looking for simple GUI MUA that works with ssmtp
> 
> On Mon, Dec 26, 2022 at 09:04:07PM -0500, Jack wrote
> 
> > Can't mutt open an html message or attachment in a browser?
> 
>   You can specify a program to use on attachments.  But this doesn't work 
> where the email itself is HTML.
> 
> --
> I've seen things, you people wouldn't believe; Gopher, Netscape with frames, 
> the first Browser Wars.  Searching for pages with AltaVista, pop-up windows 
> self-replicating, trying to uninstall RealPlayer.  All those moments, will be 
> lost in time like tears in rain... time to die.
> 

Is there an option to tell it to open the whole message in an external program? 
 If not, then that would seem like a relatively simple feature to add; and a 
rather useful one at that.

LMP



RE: [gentoo-user] Gaming on gentoo

2022-12-15 Thread Laurence Perkins
Note that some programs will dynamically load libraries at runtime, in addition to 
being linked against them at build time.  These don't necessarily show up in ldd 
(Qt is horrible about this).  Enabling core dumps or running under a debugger might 
shed some light on what's going on.  If that doesn't work easily, check the project 
documentation to see if there's a special debugging workflow; some game engines 
require a very specific debugging setup for the output to make any sense.
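
One quick way to see what a binary actually tries to pull in at runtime, since ldd 
only shows the link-time dependencies (a sketch; quakespasm here stands in for 
whatever you're debugging):

strace -f -e trace=openat ./quakespasm 2>&1 | grep '\.so'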

LMP

From: Artur Tamm 
Sent: Wednesday, December 14, 2022 1:13 PM
To: gentoo-user@lists.gentoo.org
Subject: Re: [gentoo-user] Gaming on gentoo

Hi,

I finally had the time to test quakespasm. I tried both the binary from the 
project as well as the version compiled by me from the project page on github. 
After copying the binary into quake folder (GoG version) and making symlinks to 
pak0.pak and pak1.pak (filesystem is case sensitive and the software cannot 
handle it). It worked fine.

Artur

Here is the ldd output (don't know if it helps)
ldd quakespasm_compiled
linux-vdso.so.1 (0x7ffe3bd52000)
libm.so.6 => /lib64/libm.so.6 (0x7f2cd37c)
libGL.so.1 => /usr/lib64/libGL.so.1 (0x7f2cd373a000)
libvorbisfile.so.3 => /usr/lib64/libvorbisfile.so.3 (0x7f2cd373)
libvorbis.so.0 => /usr/lib64/libvorbis.so.0 (0x7f2cd3702000)
libogg.so.0 => /usr/lib64/libogg.so.0 (0x7f2cd36f7000)
libmad.so.0 => /usr/lib64/libmad.so.0 (0x7f2cd36d4000)
libSDL-1.2.so.0 => /usr/lib64/libSDL-1.2.so.0 (0x7f2cd3668000)
libc.so.6 => /lib64/libc.so.6 (0x7f2cd346b000)
/lib64/ld-linux-x86-64.so.2 (0x7f2cd3db6000)
libGLdispatch.so.0 => /usr/lib64/libGLdispatch.so.0 (0x7f2cd33b3000)
libGLX.so.0 => /usr/lib64/libGLX.so.0 (0x7f2cd337f000)
libasound.so.2 => /usr/lib64/libasound.so.2 (0x7f2cd328f000)
libdl.so.2 => /lib64/libdl.so.2 (0x7f2cd328a000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x7f2cd3283000)
libX11.so.6 => /usr/lib64/libX11.so.6 (0x7f2cd313f000)
libxcb.so.1 => /usr/lib64/libxcb.so.1 (0x7f2cd3115000)
libXau.so.6 => /usr/lib64/libXau.so.6 (0x7f2cd311)
libXdmcp.so.6 => /usr/lib64/libXdmcp.so.6 (0x7f2cd3108000)
libbsd.so.0 => /usr/lib64/libbsd.so.0 (0x7f2cd30ef000)
libmd.so.0 => /usr/lib64/libmd.so.0 (0x7f2cd30e1000)


On Tue, 13 Dec 2022 at 14:24, Alan Ianson 
mailto:agian...@gmail.com>> wrote:
On Tue, 13 Dec 2022 13:49:54 +
Artur Tamm mailto:artur.tamm...@gmail.com>> wrote:

> sourceforge tarball was a prebuilt binary (at least the one I downloaded).

The source is there too in a different directory.

> I saw that the source code is here:
> https://github.com/sezero/quakespasm/tree/master/Quake
> The default Makefile might need some editing for a successful compilation.

I just grabbed the linux-64 archive from sourceforge. I have never run the 
prebuilt binaries before but I tried it just to see what would happen. The 
quakespasm-sdl2 segfaulted the same as my own build but the quakespasm (sdl I 
suppose?) does run here.

So that is something! I can run their sdl quakespasm but not my own built here.

That is better than nothing but I'd still like to hear about your experience. 
I'd be happy if I could build and run my own if I can figure out what the issue 
is.



RE: [OT] Re: [gentoo-user] NAS and replacing with larger drives

2022-12-13 Thread Laurence Perkins



> -Original Message-
> From: Peter Humphrey  
> Sent: Sunday, December 11, 2022 3:35 AM
> To: gentoo-user@lists.gentoo.org
> Subject: [OT] Re: [gentoo-user] NAS and replacing with larger drives
> 
> On Friday, 9 December 2022 10:34:00 GMT I wrote:
> 
> > in the 1970s the national grid was monitored and analysed with a 
> > Ferranti Argus 500 machine with 24KB RAM and a 2MB disk. It was common 
> > for American visitors to believe that was just driving the control 
> > engineers'
> > displays, and where was the main computer?
> 
> Er... There was no RAM in those days, not of the type we know today. In fact 
> it was 2-microsecond core store. Each tiny ferromagnetic toroid was threaded 
> with one X wire, one Y wire and (I think it was) a sync pulse wire. A 
> remarkable labour of love to build such a thing.
> 
> --
> Regards,
> Peter.
> 
Well, it wasn't built with transistors, but it was Random Access Memory.  As 
opposed to Sequential Access Memory like mercury delay lines.  And it was 
periodic refresh, just like most modern RAM.

That 24KB though would have been literally 196,608 ferrite cores (assuming it 
was an 8 bit byte on that system), and they were probably hand-soldered.

Although it looks like the original Argus line used 12 bit words.  So it was 
probably a 6 bit byte.  Still, a lot of soldering.

Interestingly, the Argus 400 and 500 series was one of the first systems to use 
multilayer PCBs and the company had to develop a lot of the techniques for 
creating those themselves.

LMP



RE: [gentoo-user] NAS and replacing with larger drives

2022-12-08 Thread Laurence Perkins


> -Original Message-
> From: Rich Freeman  
> Sent: Thursday, December 8, 2022 6:12 AM
> To: gentoo-user@lists.gentoo.org
> Subject: Re: [gentoo-user] NAS and replacing with larger drives
> 
> On Thu, Dec 8, 2022 at 8:59 AM Frank Steinmetzger  wrote:
> >
> > You could, but this is either a sink-hole for time, or you need to get 
> > up to speed with cross-compiling and binhosts. I went with the 
> > standard Debian and evaluate Arch from time to time. But I do run 
> > Gentoo on my DIY NAS with an i3-2000. Gentoo has ZFS in portage 
> > without overlays, which–for me–is one of its biggest appeals.
> 
> ++
> 
> Obviously I'm a huge Gentoo fan, but on an ARM SBC unless you're either 
> experimenting or you actually intend to be patching or reconfiguring packages 
> the precompiled option is the way to go.  When I'm using less-popular SBCs 
> (ie not Pis) then I will usually look for whatever distros are supporting it 
> in the most first-class way, again, unless I'm experimenting.  Then I look 
> for what has the software I need already packaged (again, check the arch 
> because a binary package repo doesn't necessarily include your device, 
> especially if it is 3rd party).  I've had to compile things on ARM SBCs and 
> it is SLOW.
> 
> I have the same philosophy with containers.  If I'm just running a service, 
> and not tweaking things, I'll just pick the least-fuss base for my container 
> whatever that is.
> 
> --
> Rich
> 
> 

Pine64 has an interesting array of SBCs, which are both cheaper than a Pi and (some 
of them) possibly better suited to becoming a NAS.  One of them even has a PCIe 
socket, I think.

Compiling Gentoo on an SBC is usually a long, slow process, but if you don't 
mind setting up a cross-compile environment on a more powerful system and using 
some combination of distcc and/or binpackages then it's not too horrible.
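
The cross-compile side is mostly a one-time cost; the rough shape of it (the target 
tuple and package are just examples):

emerge -av sys-devel/crossdev
crossdev --target aarch64-unknown-linux-gnu        # builds the cross toolchain
aarch64-unknown-linux-gnu-emerge -av --buildpkg some-category/some-package

If memory serves, crossdev also installs a cross-emerge wrapper like the one on the 
last line, which is what lets you spit out binpackages for the SBC from the fast 
box.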

LMP


[gentoo-user] Docker mounting strangeness.

2022-12-01 Thread Laurence Perkins
Starting here in case it’s something specific to Gentoo since I can’t find 
anybody complaining about the behaviour elsewhere on the web.

I’m setting up an image to use Docker, which will be deployed on multiple 
machines.  Since the containers will be large, I didn’t want them on the root 
partition.

Ok, easy enough, I just put it in fstab to mount a different partition on 
/var/lib/docker.  Presto, now I don’t have to worry about a container using up 
all the OS’s disk space.

That’s where it gets weird.

Deploy the image, boot it up, everything is mounted correctly.  But the first 
time (and ONLY the first time) I start Docker it somehow remounts the OS 
partition onto /var/lib/docker, and I end up with:

# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda      8:0    0   250G  0 disk
├─sda1   8:1    0   123M  0 part /boot/efi
├─sda2   8:2    0   3.8G  0 part
├─sda3   8:3    0  14.2G  0 part /var/lib/docker
│                                /
├─sda4   8:4    0  19.2G  0 part /var/opt
├─sda5   8:5    0   9.5G  0 part
└─sda6   8:6    0 203.1G  0 part /var/lib/docker

It puts all of Docker’s files on the OS partition at that point.  (⊙.☉)7

But if I turn Docker off, unmount everything from /var/lib/docker, and then 
remount just sda6 to /var/lib/docker, then when I restart Docker it puts 
everything on sda6 for me like I wanted, and continues to do so on reboots and 
Docker restarts after that…

Everything I find on the web is about how mounting stuff on /var/lib/docker 
*inside* a container will cause all kinds of problems…  But that’s not what I’m 
doing…  If it mangled the mounts every time, I could understand that too.  But 
this seems strange.  Is there some magic button I’m supposed to push to tell it 
not to mess with its data dir?

Running OpenRC by the way, if that makes a difference.

LMP


RE: [gentoo-user] Re: Upgrading from 5.14 to 6.0 version

2022-11-21 Thread Laurence Perkins


> -Original Message-
> From: Michael  
> Sent: Monday, November 21, 2022 9:24 AM
> To: gentoo-user@lists.gentoo.org
> Subject: Re: [gentoo-user] Re: Upgrading from 5.14 to 6.0 version
> 
> On Monday, 21 November 2022 16:50:14 GMT Grant Edwards wrote:
> > On 2022-11-21, Michael  wrote:
> > > On Monday, 21 November 2022 16:11:13 GMT Grant Edwards wrote:
> > >> I did have to give up the option of having multiple X11 screens. 
> > >> The proprietary NVidia driver supported multiple screens, but the 
> > >> drivers for built-in Intel and Radeon drivers don't seem to.
> > > 
> > > AMD APUs with embedded radeon graphics work fine here with two 
> > > monitors (DVI + HDMI ports).
> > 
> > Yes, multiple montors work fine with both Intel and Radeon embedded 
> > graphics with Xorg drivers.
> > 
> > It's multiple X11 screens that isn't supported.  An X11 screen is the 
> > entity that's managed by single window manager and comprises what's 
> > usually called "a desktop". A screen can include multiple monitors.
> > 
> > https://wiki.archlinux.org/title/multihead#Separate_screens
> 
> You're right, I thought you meant two different monitors in Xinerama style.  
> I didn't know anyone who still uses separate displays (screens) these days.
> 
Separate displays are useful for multi-seat systems.  I know a couple of people 
who buy one high-powered desktop for the whole family and then attach multiple 
monitors and input devices.
If you want to do that but your GPU can't handle multiple X displays, you can 
still set it up by running one master X server and then multiple nested X 
servers on top of it, each given a specific region (which may or may not 
correspond precisely to one or more monitors, but that's usually what you'd 
want); see the sketch below.  Attach the input devices to the nested servers, 
obviously.
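
A minimal sketch using Xephyr (x11-base/xorg-server built with the xephyr USE flag; geometry, display numbers and session commands are just examples, and binding specific keyboards/mice to each nested server is the fiddlier part I'm leaving out):

Xephyr :1 -screen 1920x1080 &
Xephyr :2 -screen 1920x1080 &
DISPLAY=:1 startxfce4 &
DISPLAY=:2 openbox-session &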

LMP


RE: [gentoo-user] where is 'mke2fs' ?

2022-11-17 Thread Laurence Perkins


> -Original Message-
> From: Jack  
> Sent: Thursday, November 17, 2022 1:16 PM
> To: gentoo-user@lists.gentoo.org
> Subject: Re: [gentoo-user] where is 'mke2fs' ?
> 
> On 11/17/22 15:30, Philip Webb wrote:
> > I need to rewrite the filesystem on a USB stick after it got damaged.
> > My notes tell me to use 'mke2fs', but 'which' doesn't find it :
> > has it been replaced by something else ?
> > I have 'e2fsprogs' installed, but there's no 'man' file for it & it 
> > doesn't seem to create filesystems.
> > There is a utility 'sys-fs/genext2fs', which I emerged & looked at, 
> > but it's not clear whether it does the simple job I need.
> >
> > Can anyone advise ?
> On my system, /sbin/mke2fs is part of sys-fs/e2fsprogs-2.46.5-r3 with the 
> tools USE FLAG set.
> 
Does your PATH perhaps simply not contain /sbin ?  Some people do that for...  
reasons...

LMP


RE: [gentoo-user] Libsld, what gives?

2022-11-16 Thread Laurence Perkins


>tortoise ~ # eclean-dist
>  * Building file list for distfiles cleaning...
>  * ERROR: games-strategy/boswars-::poly-c failed (depend phase):
>  *   EAPI 6 unsupported.

Possibly also need to upgrade portage and/or gentoolkit first.  That's usually 
a good idea regardless.

>Hmm, poly-c is looking like it's a common factor here.
>
>

Overlays can go stale, even if they were previously necessary to fix something. 
 For long-term use pulling just the ebuilds you need into your own overlay 
often works out better.

LMP


RE: [gentoo-user] Libsld, what gives?

2022-11-16 Thread Laurence Perkins

>-Original Message-
>From: Alan Grimes  
>Sent: Wednesday, November 16, 2022 8:18 AM
>To: gentoo-user@lists.gentoo.org; Andreas Fink 
>Subject: Re: [gentoo-user] Libsld, what gives?
>
>Andreas Fink wrote:
>> On Wed, 16 Nov 2022 02:11:18 -0500
>> Alan Grimes  wrote:
>>
>>> I'm jackhammering the system now and I'm getting about 50% error spew
>> I would suggest a different tool than a jackhammer to fix the problems.
>
>Basic assumption: Problems are caused by outdated packages.
>Underlying problem: In order to break a log-jam it is necessary to purge 
>outdated packages.
>Facts on the ground: The only way to make ANY progress in updating outdated 
>packages is to jackhammer the hell out of all packages in hopes that some of 
>them might sucessfully update which will, hopefully either:
>
>A. allow other packages to sucessfully update B. Expose something that can be 
>fixed.
>
>Once the logjam is broken, the system is then --emptytree world'ed and 
>declared healthy... Expected failure rate is on the order of 0.3% of hopefully 
>unimportant packages.
>
>It is known that the KDE group of packages has incomplete dependency graph 
>because it is hopelessly convoluted, and it always causes problems and the 
>only way to break through it is the jackhammer approach.
>
>LLVM and friends also fails to update when updating within a single slot and 
>this is VERY annoying...
>
>What causes me to post to the list is when I get failures that prevent me from 
>even beginning to jackhammer the system.
>
>--
>Beware of Zombies. =O
>#EggCrisis  #BlackWinter
>White is the new Kulak.
>Powers are not rights.
>

If you're going to try to dig all the way to the bottom first then 
--ignore-world and --ignore-built-slot-operator-deps can be helpful for forcing 
it to build what it needs to break a dependency loop.  But do be aware that 
things may cease to function during the intermediate stages.

If those are not sufficiently strong, then you can climb into the package repo 
and start issuing ebuild commands yourself and it will do what it's told 
without bothering about checking dependencies at all.  Make sure you know what 
you're doing...  Obviously...
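
As an illustration only (package names, versions, and the repo path are placeholders; adjust if your repo isn't in the default location):

emerge -1 --ignore-built-slot-operator-deps=y some-category/some-package

cd /var/db/repos/gentoo/some-category/some-package
ebuild some-package-1.2.3.ebuild clean merge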

But seriously, try disabling any overlays first if at all possible.  It's quite 
common for those to lag behind the main repo and turn things into a tangled 
mess.  

An expected failure rate "on the order of" 0.3% means you expect it to be no 
less than 0.03% and no more than 30%...  So I think you're probably already 
within that envelope.  :D

LMP


RE: [gentoo-user] Libsld, what gives?

2022-11-16 Thread Laurence Perkins



>-Original Message-
>From: Andreas Fink  
>Sent: Wednesday, November 16, 2022 6:59 AM
>To: gentoo-user@lists.gentoo.org
>Subject: Re: [gentoo-user] Libsld, what gives?
>
>On Wed, 16 Nov 2022 02:11:18 -0500
>Alan Grimes  wrote:
>
>> I'm jackhammering the system now and I'm getting about 50% error spew
>I would suggest a different tool than a jackhammer to fix the problems.
>
>
A machete is a much better choice usually.  Chop out all the weeds like 
poorly-maintained overlays, programs you don't use any more, and (in extremity) 
the virtuals and perl.

Trying to go deep first and fix the core with all the brush and rubbish still 
cluttering up the top is usually an exercise in frustration and a good way to 
have some errant twig poke you in the eye repeatedly.

LMP



RE: [gentoo-user] Re: Libsld, what gives?

2022-11-16 Thread Laurence Perkins


> -Original Message-
> From: Grant Edwards  
> Sent: Wednesday, November 16, 2022 6:01 AM
> To: gentoo-user@lists.gentoo.org
> Subject: [gentoo-user] Re: Libsld, what gives?
> 
> On 2022-11-16, Alan Grimes  wrote:
> 
> > Even though only 45 days have passed since my last update, I felt like 
> > doing one tonight. Usually I should wait six months just to save 
> > myself the aggrivation...  (I'm looking to set up a local bitcoin 
> > wallet because the exchanges are not to be trusted
> > anymore...)
> >
> > Naturally nothnig worked because, hey, this is gentoo
> 
> One wonders why you continue to run Gentoo, since it's so awful and you hate 
> it so much.
> 
> --
> Grant
> 

Because it's the only thing that lets him make all the esoteric customizations 
he wants.  I have a brother who has the same kinds of troubles just about as 
often.  He just also knows that it's because he wants to compile everything 
with LLVM/Clang.

Unfortunately, the package manager that can account for everything the user 
might possibly have changed and make it compile anyway is still a work in 
progress.

LMP


RE: [gentoo-user] Re: Upgrading from 5.14 to 6.0 version

2022-11-15 Thread Laurence Perkins
> That may be true.  I used to not mind rebooting as much but since I started 
> having to use the init thingy, I only do it when really necessary.  Those 
> init thingys have left a long term bad taste in my mouth.  If I could, I'd 
> likely never reboot.  Thing is, sometimes I have a power outage and just have 
> too. 
> 
Kernel livepatching is a thing now...  Use the LTS kernel and you might not 
have to reboot for a couple years.

Documentation on the wiki is a bit sparse.  Please add your notes and results!

https://wiki.gentoo.org/wiki/Kpatch
https://wiki.gentoo.org/wiki/Live_patching
https://wiki.gentoo.org/wiki/Elivepatch

LMP


RE: [gentoo-user] Re: e2fsck -c when bad blocks are in existing file?

2022-11-14 Thread Laurence Perkins
> 
> 
> -Original Message-
> From: Grant Edwards  
> Sent: Saturday, November 12, 2022 7:55 PM
> To: gentoo-user@lists.gentoo.org
> Subject: [gentoo-user] Re: e2fsck -c when bad blocks are in existing file?
> 
> On 2022-11-12, Michael  wrote:
> 
> > Have your questions been answered satisfactorily by Lawrence's contribution?
> 
> Yes, Lawrence's experiment answered the my question: e2fsck adds the bad 
> block to the "bad block" inode and leaves it also allocated to the existing 
> file.
> 
> Presumably if you don't allow it to clone the block, reading that file will 
> return an error when it gets to the bad block. Once you delete that file, the 
> bad block will never get reallocated by the filesystem since it still belongs 
> to the bad block inode.
> 
> The failing SSD that prompted the question has now been replaced and a fresh 
> Gentoo system installed on the new drive. I never did figure out which files 
> contained the bad blocks (there were 37 bad blocks, IIRC). They apparently 
> didn't belong to any of the files I copied over to the replacement drive.
> 
> The old drive was a Samsung 850 EVO SATA drive, and the new one is a Samsung 
> 980 PRO M.2 drive. The new one is noticably faster than the old one (which in 
> turn was way faster than the spinning platter drive it had replaced).
> 
> --
> Grant

Multiply-allocated blocks won't cause an error by themselves.  They can just 
cause strange and unexpected munging of your data if two files are scribbling 
on the same patch of disk.  So if you leave it allocated to both then you can 
use a more intelligent tool to either coax one more read out of it or 
potentially replace the lost data with some substitute.

I'm not sure what fsck will do with a read error while cloning the block since 
my test "disk" wasn't actually bad.  Presumably fill the bad section with nulls.

LMP





RE: [gentoo-user] Upgrading from 5.14 to 6.0 version

2022-11-11 Thread Laurence Perkins


> -Original Message-
> From: Arve Barsnes  
> Sent: Friday, November 11, 2022 2:36 AM
> To: gentoo-user@lists.gentoo.org
> Subject: Re: [gentoo-user] Upgrading from 5.14 to 6.0 version
> 
> On Fri, 11 Nov 2022 at 11:30, Peter Humphrey  wrote:
> > I can't remember any difficulty going from the 5 series to 6.0.0 
> > either, even though it was a .0 version, which we all know is generally to 
> > be suspected.
> 
> Not when it comes to the linux kernel though, where major version changes are 
> arbitrary and comes around the x.19/20/21 switch no matter which new features 
> are in it.
> 
> Regards,
> Arve
> 
> 
I can respect the fact that they've decided to increment it more often since 
the code is changing so much.  But the 2.x to 3.x switch really should have come 
when they dropped support for the 80386 processor, and they deprecate and drop 
things more than often enough these days that they could easily tie the major 
version to that rather than having it be arbitrary, and therefore meaningless.

Since it's a kernel, it doesn't have to break everything for everyone to justify 
incrementing the major version.  Breaking something for somebody would be sufficient.

LMP


RE: [gentoo-user] Upgrading from 5.14 to 6.0 version

2022-11-11 Thread Laurence Perkins
>
>
>
>P. S.  Bought yet another 14TB hard drive.  Working on filling it up now.  
>While my Cooler Master HAF-932 case is large, I need more drive bays.  Dang 
>cases are pricey right now.  :/


Bought one of these for a client's custom build recently.  It has enough 5.25" 
bays to mount two of my favorite 4-drive hotswap cages, plus another six 
internal 3.5" bays.

And the huge cooling fans don't have to spin very fast, so it's nice and quiet.

Just note that it barely fits under the average desk.


https://www.newegg.com/p/N82E16811147053

LMP


RE: [gentoo-user] Re: e2fsck -c when bad blocks are in existing file?

2022-11-09 Thread Laurence Perkins
Ok, so I decided to just go and test it myself.  

I created a 2MiB file and formatted it as ext4 and mounted it.
I created a single, 100KiB file with a test pattern in this filesystem, and 
then unmounted it.
I found the file in the raw storage with a hex editor, and computed a block 
offset in the middle of it.
I swapped out my badblocks executable with a dummy that just spits out that 
block as "bad".
I ran fsck.ext4 -c on the storage file.
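
In commands, that was roughly the following (sizes, paths, and the echoed block number are placeholders; pick a real block number from the debugfs output, and put the real badblocks back afterwards):

dd if=/dev/zero of=/tmp/test.img bs=1M count=2
mkfs.ext4 /tmp/test.img
mount -o loop /tmp/test.img /mnt/test
dd if=/dev/urandom of=/mnt/test/pattern bs=1K count=100
umount /mnt/test

debugfs -R 'blocks /pattern' /tmp/test.img      # lists the file's data blocks

mv /sbin/badblocks /sbin/badblocks.real
printf '#!/bin/sh\necho 100\n' > /sbin/badblocks    # pretend block 100 is bad
chmod +x /sbin/badblocks

fsck.ext4 -c /tmp/test.img
mv /sbin/badblocks.real /sbin/badblocks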

Fsck.ext4 calls badblocks, which returns that one "bad" block, and immediately 
adds it to the badblocks inode.
Then it continues with its checks, and when it gets to the part where it checks 
for inode issues, it informs you that there is a block claimed by both a file 
inode and the badblocks inode, and asks whether you'd like to try to clone it 
into separate copies for each.

Presumably there's some logic in there to make sure that the new cloned copy 
goes to the file and not to the badblocks inode, but I can't be bothered to dig 
into it that far.  I'll just assume that the fsck developers have at least a 
half a clue.

Regardless, at that point the user gets to choose if they want fsck to try to 
fix it automatically, or if they want to use a more subtle tool to see if they 
can coax one more read out of the block in question.  But whether they clone 
the file, data-recovery-tool the file, or simply delete the file and move on, 
that bad sector is on the bad list and won't get allocated again.

Hopefully this answers the question sufficiently.

LMP

-Original Message-
From: Grant Edwards  
Sent: Wednesday, November 9, 2022 4:19 PM
To: gentoo-user@lists.gentoo.org
Subject: [gentoo-user] Re: e2fsck -c when bad blocks are in existing file?

On 2022-11-09, Wol  wrote:
> On 09/11/2022 23:31, Grant Edwards wrote:
>>> If I recall correctly, it will add any unreadable blocks to its 
>>> internal list of bad sectors, which it will then refuse to allocate 
>>> in the future.
>
> I doubt you recall correctly.

The e2fsck man page states explicitly that a -c read failure will cause the 
block to be added to the bad block inode. You're claiming that is not what 
happens?

> You should ONLY EVER conclude a block is bad if you can't write to it. 
> Remember what I said - if I read my 8TB drive from end-to-end twice, 
> then I should *expect* a read error ...

OK...

>> I'm asking what happens to the file containing the bad block. 
>> Perphaps nothing. The man page says the block is added to the "bad 
>> block inode". If that block was already allocated, is the bad block 
>> is now allocated to two different inodes?
>
> If a read fails, you SHOULD NOT do anything.

Thanks, but I'm not asking what I should do. I'm not asking what the filesystem 
should do.  I'm not asking what disk-drive controller firmware should do or 
does do with failed/spare blocks.

I'm asking what e2fsck -c does when the bad block is already allocated to an 
inode. Specifically:

  Is the bad block removed from the inode to which it was allocated?

  Is the bad block left allocated to the previous inode as well as
  being added to the bad block inode?

We've gotten lots of answers to lots of other questions, but after re-reading 
the thread a few times, I still haven't seen an answer to the question I asked.

> If a write fails, you move the block and mark the failed block as bad. 
> But seeing as you've moved the block, the bad block is no longer 
> allocated to any file ...

Are you stating e2fsck -c will removed bad block from the inode to which it was 
allocated before the scan? Is it replaced with a different block? Or just left 
as an empty "hole" that can't be read from or written to?

The e2fsck man page does not state that the bad block is removed from the old 
inode, only that that bad block is added to the bad block inode.

If a block is allocated to an inode, I would call that "allocated to a file". 
It's not a file that has a visible name that shows up in a directory, but it's 
still a file.

--
Grant







RE: [gentoo-user] e2fsck -c when bad blocks are in existing file?

2022-11-09 Thread Laurence Perkins

>
>-Original Message-
>From: Michael  
>Sent: Wednesday, November 9, 2022 12:47 AM
>To: gentoo-user@lists.gentoo.org
>Subject: Re: [gentoo-user] e2fsck -c when bad blocks are in existing file?
>
>On Tuesday, 8 November 2022 18:24:41 GMT Wols Lists wrote:
>
>> MODERN DRIVES SHOULD NEVER HAVE AN OS-LEVEL BADBLOCKS LIST. If they 
>> do, something is seriously wrong, because the drive should be hiding 
>> it from the OS.
>
>If you run badblocks or e2fsck you'll find the application asks to write data 
>to the disk, at the end of the run.  Yes, the drive's firmware should manage 
>badblocks transparently to the filesystem, but I have observed in hdparm 
>output reallocations of badblocks do not happen in real time.  Perhaps the 
>filesystem level badblocks list which is LBA based, acts as an intermediate 
>step until the hardware triggers a reallocation?  Not sure.  :-/
>
>
Badblocks doesn't ask to write anything at the end of the run.  You tell it at 
the beginning whether you want a read-only test, a non-destructive read-write 
test, or a destructive write test.
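
That is, the mode is picked up front (device name is a placeholder, and the -w test destroys everything on it):

badblocks -sv  /dev/sdX     # read-only
badblocks -svn /dev/sdX     # non-destructive read-write
badblocks -svw /dev/sdX     # destructive write test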

The drive's firmware will mark a sector as suspect (pending) when a read fails.  
It only gets reallocated if a subsequent write to it also fails.  (For example, 
I have a couple of drives that have been running for over ten years, and 
recently a few sectors that hadn't been written in all that time bit-rotted.  
The surface was still fine, the magnetism had simply faded, so a rewrite was 
enough and no reallocation was needed.)

Controllerless drives that are managed at the FS layer do still exist, you just 
don't normally see them outside of embedded devices or really cheap flash 
drives.

Also, you might still want to use a bad blocks list on rotational drives if 
performance is paramount.  When a sector has been remapped then any time you 
try to read that sector, the drive has to seek over to the spare sector that 
actually has the data, and then seek back to the original location.  If that 
sector is marked as bad instead then the smarter filesystems will avoid it.  
Best case it's smart enough to avoid an extent crossing the boundary, worst 
case the seek distance is, at least, shorter and therefore quicker.

LMP


RE: [gentoo-user] Re: e2fsck -c when bad blocks are in existing file?

2022-11-08 Thread Laurence Perkins



>-Original Message-
>From: Grant Edwards  
>Sent: Tuesday, November 8, 2022 6:28 AM
>To: gentoo-user@lists.gentoo.org
>Subject: [gentoo-user] Re: e2fsck -c when bad blocks are in existing file?
>
>On 2022-11-08, Michael  wrote:
>> On Tuesday, 8 November 2022 03:31:07 GMT Grant Edwards wrote:
>>> I've got an SSD that's failing, and I'd like to know what files 
>>> contain bad blocks so that I don't attempt to copy them to the 
>>> replacement disk.
>>> 
>>> According to e2fsck(8):
>>> 
>>>-c This option causes e2fsck to use badblocks(8)  program  to  do
>>>  a read-only scan of the device in order to find any bad blocks.  If 
>>> any bad blocks are found, they are added to the bad  block  inode to  
>>> prevent them from being allocated to a file or directory.  If this 
>>> option is specified twice, then the bad block scan  will  be done 
>>> using a non-destructive read-write test.
>>> 
>>> What happens when the bad block is _already_allocated_ to a file?
>
>> Previously allocated to a file and now re-allocated or not, my 
>> understanding is with spinning disks the data in a bad block stays 
>> there unless you've dd'ed some zeros over it.  Even then read or write 
>> operations could fail if the block is too far gone.[1]  Some data 
>> recovery applications will try to read data off a bad block in 
>> different patterns to retrieve what's there.  Once the bad block is 
>> categorized as such it won't be used by the filesystem to write new data to 
>> it again.
>
>Thanks. I guess I should have been more specific in my question.
>
>What does e2fsck -c do to the filesystem structure when it discovers a bad 
>block that is already allocated to an existing inode?
>
>Is the inode's chain of block groups left as is -- still containing the bad 
>block that (according to the man page) "has been added to the bad block 
>inode"?  Presumably not, since a block can't be allocated to two different 
>inodes.
>
>Is the "broken" file split into two chunks (before/after the bad
>block) and moved to the lost-and-found?
>
>Is the man page's description only correct when the bad block is currently 
>unallocated?
>
>--
>Grant

If I recall correctly, it will add any unreadable blocks to its internal list 
of bad sectors, which it will then refuse to allocate in the future.  

I don't believe it will attempt to move the file to elsewhere until it is 
written since:
A)  what would you then put in that block?  You don't know the contents.  
B)  Moving the file around would make attempts to recover the data from that 
bad sector significantly more difficult.

This is, however, very unlikely to come up on a modern disk since most of them 
automatically remap failed sectors at the hardware level (also on write, for 
the same reasons).  So the only time it would matter is if you have a disk 
that's more than about 20 years old, or one that's used up all its spare 
sectors...

Unless, of course, you're resurrecting the old trick of marking a section of 
the disk as "bad" so the FS won't touch it, and then using it for raw data of 
some kind...

You can, of course, test it yourself to be certain with a loopback file and a 
fake "badblocks" that just outputs your chosen list of bad sectors and then see 
if any of the data moves.  I'd say like a 2MB filesystem and write a file full 
of 00DEADBEEF, then make a copy, blacklist some sectors, and hit it with your 
favorite binary diff command and see what moved.  This is probably recommended 
since there could be differences between the behaviour of different versions of 
e2fsck.

LMP







RE: [gentoo-user] Change History of linux commands

2022-10-07 Thread Laurence Perkins
> -Original Message-
> From: tastytea  
> Sent: Friday, October 7, 2022 8:48 AM
> To: gentoo-user@lists.gentoo.org
> Subject: Re: [gentoo-user] Change History of linux commands
> 
> On 2022-10-07 17:25+0200 n952162  wrote:
> 
> > Am 07.10.22 um 16:56 schrieb Grant Taylor:
> > > On 10/7/22 8:25 AM, n952162 wrote:  
> > >> Can anybody tell me how I can look at the official change history 
> > >> of linux commands?
> > >
> > > Some man pages have history of commands in them.
> > >
> > > Admittedly, it seems as if man pages on Solaris and *BSD (I have 
> > > access to FreeBSD) tend to be better than Linux man page at this 
> > > aspect.
> > >
> > >
> > >  
> > 
> > Well, the man page, yes, would be a good indicator, but the commands 
> > themselves?
> > 
> > Where does gentoo get the source to build  test(1) or expr(1) or 
> > date(1)?That's in some package, but where is the upstream source?
> > Is it something in github?  Or a linux portal?  Or Torvalds private 
> > server?  Or the gnu server?
> > 
> > 
> 
> /usr/bin/test[1] was installed by sys-apps/coreutils[2], it's homepage is 
> [3], that links to the source code 
> repository.
> 
> Other ways to find out:
>   - `equery meta sys-apps/coreutils`
>   - `less $(portageq get_repo_path / 
> gentoo)/sys-apps/coreutils/coreutils-8.32-r1.ebuild`
> 
> Kind regards, tastytea
> 
> [1] `whereis test`
> [2] `qfile /usr/bin/test` or `equery belongs /usr/bin/test` [3] `eix 
> sys-apps/coreutils` or emerge -s sys-apps/coreutils`
> 

Note also that several of these may have copies built into your shell for speed 
and so that you can update the system utilities without an outage.

"bash -c help" or "busybox --help" or similar to see the list.

LMP


RE: [gentoo-user] seems lots of packages no longer in the tree

2022-09-26 Thread Laurence Perkins
>
>with similar recommendations for the other packages you mention, except for 
>oracle-jdk where it recommends alternatives.
>
>
>--
>Neil Bothwick
>
>ISDN: It Still Does Nothing
>

In Oracle's case, they're doing their very best to turn the licensing for every 
one of their products into a minefield underneath a quagmire with prices only a 
major corporation can afford.  I'm not surprised nobody wants it around anymore.

LMP



RE: [gentoo-user] Separate /usr partition

2022-09-15 Thread Laurence Perkins


>-Original Message-
>From: Neil Bothwick  
>Sent: Thursday, September 15, 2022 10:35 AM
>To: gentoo-user@lists.gentoo.org
>Subject: Re: [gentoo-user] Separate /usr partition
>
>On Thu, 15 Sep 2022 05:00:25 -0500, Dale wrote:
>
>> As I said, I do all mine by hand.  I don't use make install etc.  
>> After I build my kernel, I copy it and name it something like this, 
>> from /boot.
>
>[snip]
>> Once I get that done, I then build the init thingy.  This is Neil's 
>> command that he posted.  From my understanding, the kernel symlink 
>> needs to point to the correct kernel version.
>> 
>> dracut --kver=$(cat include/config/kernel.release)
>
>This should be run from within the kernel source directory. It picks up the 
>version from the source so the symlink is irrelevant.
>
>> That gives a init thingy with a somewhat generic name.  I then rename 
>> it to match the kernel, looks something like this from /boot.
>
>If you use make install to install the kernel, dracut gives the initramfs a 
>matching name.
>
>--
>Neil Bothwick
>
>Time is the best teacher; unfortunately it kills all its students.
>
Note also that the "init thingy" is nothing more than a little filesystem which 
contains everything you'd need to make sure was on your root partition if you 
were attempting to boot without one.

Building one yourself is generally pretty trivial if you don't like what 
dracut/genkernel produce.  Utilities, kernel modules, and a script to set up 
your main system and switch to it.
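
A bare-bones one is just a gzipped cpio archive; a sketch, assuming a static busybox (the /init script, which does the actual mounting and switch_root, is left as the exercise it always is):

mkdir -p /tmp/initramfs/{bin,dev,proc,sys,mnt/root}
cp /bin/busybox /tmp/initramfs/bin/
# ...add /init, kernel modules, cryptsetup/lvm if you need them...
cd /tmp/initramfs
find . -print0 | cpio --null -o --format=newc | gzip -9 > /boot/initramfs-custom.img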

At the other end of the spectrum, if you have enough memory, you can just put 
your whole root filesystem into it and run from there...  Did that once for a 
secure processing system.  Everything up through X11 and a web browser all in 
the initramfs.

LMP


RE: [gentoo-user] Re: Full battery laptop only 1 hour

2022-09-15 Thread Laurence Perkins


>-Original Message-
>From: Michael  
>Sent: Thursday, September 15, 2022 11:01 AM
>To: gentoo-user@lists.gentoo.org
>Subject: Re: [gentoo-user] Re: Full battery laptop only 1 hour
>
>On Thursday, 15 September 2022 18:10:39 BST Laurence Perkins wrote:
>> Note that most batteries these days in anything more complex than a 
>> watch have "smart" charge controllers and so upower or similar can 
>> read what their design watt-hours and current maximum capacity are.  
>> Also, often the total charge or discharge rate.  That plus a little 
>> math should tell you if it's an aging battery or if your machine is 
>> simply failing to idle down for some reason.
> 
>> LMP
>
>Larger capacity batteries have multiple banks in them connected in parallel.  
>Some times one of the banks or its controller(?) fails and while the rest 
>continue to work, the loss in capacity is a noticeable step change.  I recall 
>suddenly losing ~1/3 of the battery capacity on a laptop just 3 or so happy 
>years into its life.  The remaining of the battery capacity continued to 
>degrade slowly and gradually over many years.  So notwithstanding the high 
>consumption identified by the OP the software causes of which should be 
>investigated, there could be also a problem with the battery unit itself.
>
>BTW, short & frequent top ups of lithium-ion batteries is the best approach to 
>their charging, while deep discharge can guarantee a shorter effective life.
>
At the same time, don't make it too short.  The charger has to run for a few 
seconds to a few minutes to determine that the battery is, in fact, full, and 
repeated overcharging in that manner will destroy the battery in short order.  
Let it run down at least a few percent before you plug it in again.

Their lifetime is generally happiest if you keep them between 50 and 80%.  Some 
packs automatically cut off the charging at the 80% mark and just tell you that 
it's full in order to increase the cycle count.

LMP


RE: [gentoo-user] Re: Full battery laptop only 1 hour

2022-09-15 Thread Laurence Perkins
Note that most batteries these days in anything more complex than a watch have 
"smart" charge controllers and so upower or similar can read what their design 
watt-hours and current maximum capacity are.  Also, often the total charge or 
discharge rate.  That plus a little math should tell you if it's an aging 
battery or if your machine is simply failing to idle down for some reason.

LMP

-Original Message-
From: Frank Steinmetzger  
Sent: Tuesday, September 13, 2022 4:46 PM
To: gentoo-user@lists.gentoo.org
Subject: Re: [gentoo-user] Re: Full battery laptop only 1 hour

Am Mon, Sep 12, 2022 at 01:51:39PM -0700 schrieb Mark Knecht:
> On Mon, Sep 12, 2022 at 1:40 PM Nuno Silva  wrote:
> >
> > On 2022-09-12, Guillermo García wrote:
> >
> > > Hello guys,
> > >
> > > I bought a laptop and i got like 4 hours of batter life, 
> > > everything ok, (using more than 1 vm, etc), however now in idle my 
> > > laptop has only 1 hour of life, which is really annoying because 
> > > its a brand new laptop bought one year before.
> >
> > Did anything change? Is this the same system/install which used to 
> > last
> > 4 hours on idle? Or, when you say "brand new bought one year 
> > before", you mean it wasn't used before?
> >
> > --
> > Nuno Silva
> >
> 
> Battery life can change over time. I've had batteries that after a 
> couple of years just didn't last as long. I've purchased a few 
> replacement batteries from Amazon and one of them didn't hold charge at all.

My Thinkpad is 6¼ years old and the batteries it shipped with are at 72 and
75 % of their original capacity. But I didn’t use them *that* much, and always 
kept them betweet 40 and 80 % charge when I didn’t need them, which is probably 
98 % of the year.

> 1 year is pretty short but possibly he might buy a new battery as a 
> test. They generally aren't overly expensive.

I don’t believe that they went down to 25 % of their original capacity within a 
year. To achieve that, they must have endured unspeakable abuse.

--
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

The three main languages in India: Hindi, English and HTML.


RE: [gentoo-user] Separate /usr partition

2022-09-14 Thread Laurence Perkins



>-Original Message-
>From: Peter Humphrey  
>Sent: Wednesday, September 14, 2022 8:51 AM
>To: gentoo-user@lists.gentoo.org
>Subject: [gentoo-user] Separate /usr partition
>
>Hello list,
>
>I'm thinking of separating /usr onto its own partition so that I can have it 
>mounted read-only except while updating it. I'd prefer not to have to make an 
>init thingy, not having needed one up to now. Besides, some machines have 
>things like early-ucode or amd-uc.
>
>I've found a few guides on the Web, but I'm still confused. Is there a simple 
>guide to doing this? I know of at least one subscriber here who's done it.  :)
>
>--
>Regards,
>Peter.

Everything necessary to boot the system and mount drives is supposed to be in 
/etc, /lib*, and /sbin, so a separate /usr is theoretically just a matter of 
putting your /usr device into fstab.
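
Something like this, for example (device and filesystem are placeholders), plus a remount around updates:

/dev/sda4   /usr   ext4   ro,noatime   0 2

mount -o remount,rw /usr    # before emerging
mount -o remount,ro /usr    # after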

If something you need for booting with a separate /usr is missing, that would be 
an FHS bug and is probably worth reporting, unless you're doing something truly 
arcane with your system.

LMP





RE: [gentoo-user] Replace 8TB drive with 10TB drive on encrypted LVM

2022-09-14 Thread Laurence Perkins

>By the way, if someone wants to take this and make a how to out of it
>somewhere, I think it would be great.  This is doable.  I even rebooted
>and even tho the drives changed SATA ports, it worked fine.  So, I guess
>I did it right, even if no one thought it could be done.  lol 
>
>I really hope this helps someone else.  My brain hurts a little.  :/
>
>Dale
>
>:-)  :-) 

If you want, you can slap it into your personal page on the wiki and then it 
will show up in searches when people go looking for how to do things with LVM 
and/or cryptsetup.

LMP


RE: [gentoo-user] RE: [gentoo-user] Encrypted hard drives on LVM and urgent power shutdowns.

2022-09-13 Thread Laurence Perkins


>> If you want an arbitrarily large battery bank, just get a decent power 
>> inverter heavy enough to run your load and a battery float charger that can 
>> push enough amps to keep up, then put as big a stack of batteries as you 
>> like between the two.  The nicer inverters will even warn you when the 
>> batteries get low.
>
>> You can often get used batteries from the local automotive shop for 
>> just the core charge.  Just because it can't provide 600 amps to start a car 
>> any more doesn't mean it can't provide 60 to run your computers.  Obviously 
>> they'll require more regular maintenance, but it's hard to beat the price.
>
>> LMP 
>
>How would you physically connect the automotive battery to the computer, and 
>would you need the shell of the old UPS?
>
>I have an old Tripp-Lite UPS, batteries are dead and no longer rechargeable.
>
>Would you connect only the computer, or would you connect the monitor as well? 
> Would you connect networking equipment?
>
>Tom
>
>

If your old UPS had sufficient capacity, you can usually just open it up (make 
sure it's unplugged and turned off first, obviously) and replace the battery 
with one of the same size.  Often there's an access panel for doing exactly 
that.

Otherwise, you'll need to extend the battery leads out through the side of the 
case to whatever battery bank you're using.  Keeping the case would definitely 
be recommended since there will be high voltage in the system when it's in 
operation.  Also the new battery bank will need to be the same chemistry and 
voltage as the original.  Usually it's lead-acid and multiples of 12 volts, 
(hence why old automotive batteries can work, but will require test and service 
after every power outage because they're designed for high output, not deep 
discharge) but do double check.

What you connect to it depends on what you figure needs battery backup and how 
many amps the inverter in the UPS can supply without overheating.

Note that messing too much with the internals of a power inverter can cause 
fire or electrocution if done improperly, so make sure you research how it all 
works before modifying anything.

LMP



RE: [gentoo-user] Encrypted hard drives on LVM and urgent power shutdowns.

2022-09-12 Thread Laurence Perkins


>-Original Message-
>From: Dale  
>Sent: Sunday, September 11, 2022 10:58 PM
>To: gentoo-user@lists.gentoo.org
>Subject: Re: [gentoo-user] Encrypted hard drives on LVM and urgent power 
>shutdowns.
>
>William Kenworthy wrote:
>> If your using nut, it has to be setup - and should be regularly tested 
>> to make sure it works.
>>
>> BillK
>
>
>I think upsmon is part of nut.  I keep forgetting that since the service is 
>ups something.  Thing is, I've never quite figured out how to test it without 
>unplugging and running down the batteries.  I do have everything configured 
>and have ever since I built this puter.  I did the same on previous puter and 
>UPS.  Pretty sure it will work.  When power blinks etc, it does notice it and 
>logs it in messages file. Also, the upsc command outputs the info correctly 
>when run.
>
>I wish I could send a command to the UPS to fake a power failure, wait say one 
>minute and then it tell puter to shutdown all on its own.  If it does it 
>correctly, it should work in the event of a actual power failure and not run 
>down my batteries either.  I try to keep the batteries topped off at all times 
>since we do on occasion have some crazy driver not watching where he/she is 
>going and hits a power pole.  Doesn't do the pole any good and the car seems 
>to not enjoy it either.  :/ Sometimes the driver is no longer caring about it. 
>
>I actually wish I had a much larger external battery.  Thing is, I'm concerned 
>about the charging bit.  The charging section in a UPS isn't really that 
>powerful since it mostly just keeps the batteries topped off.  I have 7Amp/hr 
>batteries and I think it takes like 7 or 8 hours to charge from almost dead.  
>I'd guess it is at most a 1 to 1.5 amp charging circuit. 
>
>If you know of a command to test without running down batteries, I have a 
>Cyberpower UPS and I'm certainly interested.  It's only a decade or so old so 
>has quite a few features.  There may be a way to do this but I've yet to find 
>it. 
>
>Thanks.
>
>Dale
>
>:-)  :-)
>

Some of the higher-end UPS models do have diagnostic modes for simulating 
various events to make sure the connected systems behave as desired.  A very 
few of the consumer-grade ones do as well.  But how to do it is model specific, 
so you'll have to dig up the documentation.
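
If the driver exposes them through NUT you can poke at them with upscmd ("myups" being whatever name you gave it in ups.conf; which test.* commands exist, if any, depends entirely on the model):

upscmd -l myups                                  # list supported instant commands
upscmd -u admin myups test.battery.start.quick   # short self-test, where offered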

Commercial-grade units also often have a DC port on the back so you can plug in 
larger battery banks and/or hotswap battery banks during extended outages.

If you want an arbitrarily large battery bank, just get a decent power inverter 
heavy enough to run your load and a battery float charger that can push enough 
amps to keep up, then put as big a stack of batteries as you like between the 
two.  The nicer inverters will even warn you when the batteries get low.

You can often get used batteries from the local automotive shop for just the 
core charge.  Just because it can't provide 600 amps to start a car any more 
doesn't mean it can't provide 60 to run your computers.  Obviously they'll 
require more regular maintenance, but it's hard to beat the price.

LMP


RE: [gentoo-user] Getting maximum space out of a hard drive

2022-08-22 Thread Laurence Perkins
Note that 60ish MB/sec is very reasonable for a rotational drive.  They *can* 
technically go faster, but only if you keep the workload almost entirely 
sequential.  Most filesystems require a fair amount of seeking to write 
metadata, which slows them down quite a bit.

If you're desperate for performance, you can do things like tell it to ignore 
write barriers and turn off various bits of flushing and increase the amount of 
allowed dirty write cache.  These can be good for a significant performance 
boost at the cost of almost certainly corrupting the filesystem if the system 
loses power or crashes.
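
For example, on ext4 (the fstab line and sysctl values are only illustrative, and again this is for data you can afford to lose):

/dev/sdb1   /mnt/scratch   ext4   noatime,barrier=0,data=writeback,commit=60   0 2

sysctl vm.dirty_ratio=40 vm.dirty_background_ratio=20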

LMP

-Original Message-
From: Grant Taylor  
Sent: Saturday, August 20, 2022 2:57 PM
To: gentoo-user@lists.gentoo.org
Subject: Re: [gentoo-user] Getting maximum space out of a hard drive

Sorry for the duplicate post.  I had an email client error that accidentally 
caused me to hit send on the window I was composing in.

On 8/20/22 1:15 PM, Dale wrote:
> Howdy,

Hi,

> Related question.  Does encryption slow the read/write speeds of a 
> drive down a fair amount?

My experience has been the opposite.  I know that it's unintuitive that 
encryption would make things faster.  But my understanding is that it alters 
how data is read from / written to the disk such that it's done in more 
optimized batches and / or optimized caching.

This was so surprising that I decrypted a drive / re-encrypted a drive multiple 
times to compare things to come to the conclusion that encryption was 
noticeably better.

Plus, encryption has the advantage of destroying the key rendering the drive 
safe to use independent of the data that was on it.

N.B. The actual encryption key is encrypted with the passphrase.  The 
passphrase isn't the encryption key itself.

> This new 10TB drive is maxing out at about 49.51MB/s or so.

I wonder if you are possibly running into performance issues related to 
shingled drives.  Their raw capacity comes at a performance penalty.

> I actually copied that from the progress of rsync and a nice sized 
> file.  It's been running over 24 hours now so I'd think buffer and 
> cache would be well done with.  LOL

Ya, you have /probably/ exceeded the write back cache in the system's memory.

> It did pass both a short and long self test.  I used cryptsetup -s 512 
> to encrypt with, nice password too.  My rig has a FX-8350 8 core 
> running at 4GHz CPU and 32GBs of memory.  The CPU is fairly busy.  A 
> little more than normal anyway.  Keep in mind, I have two encrypted 
> drives connected right now.

The last time I looked at cryptsetup / LUKS, I found that there was a [kernel] 
process per encrypted block device.

A hack that I did while testing things was to slice up a drive into multiple 
partitions, encrypt each one, and then re-aggregate the LUKS devices as PVs in 
LVM.  This surprisingly was a worthwhile performance boost.

> Just curious if that speed is normal or not.

I suspect that your drive is FAR more the bottleneck than the encryption itself 
is.  There is a chance that the encryption's access pattern is exascerbating a 
drive performance issue.

> Thoughts?

Conceptually working in 512 B blocks on a drive that is natively 4 kB sectors.  
Thus causing the drive to do lots of extra work to account for the other seven 
512 B blocks in a 4 kB sector.

> P. S.  The pulled drive I bought had like 60 hours on it.  Dang near new.

:-)



--
Grant. . . .
unix || die



RE: [gentoo-user] Backup program that compresses data but only changes new files.

2022-08-15 Thread Laurence Perkins


>-Original Message-
>From: Rich Freeman  
>Sent: Monday, August 15, 2022 12:52 PM
>To: gentoo-user@lists.gentoo.org
>Subject: Re: [gentoo-user] Backup program that compresses data but only 
>changes new files.
>
>On Mon, Aug 15, 2022 at 3:41 PM Dale  wrote:
>>
>> Glad to know what I found was good info.  I just wonder how long it 
>> will be before even 10TB drives will be SMR.  I also dread having to 
>> search out a 14TB drive later.  :/
>>
>
>I think it will be a long time if ever, and here is why.
>
>There are good reasons and bad reasons to use SMR.  The reason you would WANT 
>to use SMR is that you have a task that is well-suited to their limitations 
>like backup or applications that can use log-style storage.  Ideally you'd 
>want host-managed SMR for this.  The benefit is higher density for the cost, 
>so you'd be doing it to get a drive that is cheaper than it otherwise would 
>be.  However, these are all things that would appeal to experts who really 
>know what they're doing.
>
>The bad reason to use SMR is that you're a manufacturer trying to squeeze out 
>a bit more profit margin, not passing on the savings.  In this case you want 
>to sell the drive to somebody who DOESN'T know what they're doing, and make it 
>drive-managed.
>
>This is why we've seen SMR in medium-sized drives and not big ones as would be 
>expected if you assumed it would be employed for the good reasons.  The only 
>people buying 14TB hard drives are people who tend to know what they're doing, 
>which makes them less of a target for unscrupulous manufacturers.  You 
>wouldn't see them as much in small drives as the return in capacity isn't as 
>much.  The medium sized drives are big enough to get a return out of using 
>SMR, but small enough that suckers will be willing to buy them.
>
>At least, that's my theory...
>
>--
>Rich
>
>

A big chunk of it is that, when SMR drives came out, there was no reliable OS 
support for it, so it basically had to be drive-managed.  Which then had 
horrible performance, and the cherry on top was that the drive manufacturers 
tried to cover up what they'd changed.  So that made lots of the big companies 
doing big storage applications decide that SMR was crap and they simply will 
not buy SMR drives at any price.

Which is a real pity because there are lots of large-data applications where 
the write order is pretty much entirely sequential, so a properly designed, 
host managed system would see virtually no performance loss from SMR, and be 
able to take advantage of the higher density.

The moral is:  be transparent with your customers.

LMP


RE: [gentoo-user] Backup program that compresses data but only changes new files.

2022-08-15 Thread Laurence Perkins


>-Original Message-
>From: Dale  
>Sent: Monday, August 15, 2022 12:47 PM
>To: gentoo-user@lists.gentoo.org
>Subject: Re: [gentoo-user] Backup program that compresses data but only 
>changes new files.
>
>Wol wrote:
>> On 15/08/2022 10:45, John Covici wrote:
>>> zfs would solve your problem of corruption, even without versioning.
>>> You do a scrub at short intervals and at least you would know if the 
>>> file is corrupted.  Of course, redundancy is better, such as 
>>> mirroring and backups take a very short time because sending from one 
>>> zfs to another it knows exactly what bytes to send.
>>
>> I don't think he means a corrupted file, he means a corrupted video.
>> If the drive faithfully records the corrupted feed, the filesystem is 
>> not going to catch it!
>>
>> Cheers,
>> Wol
>
>Yep.  Every once in a while, I download a video with better resolution later 
>to find out it is bad.  It gets part way through and crashes, stops dead and 
>sits there or just plain doesn't open.  Quite often, it will have the correct 
>thumbnail so it looks good but it's bad.  If I've already trashed the old one 
>and updated my backups, I have to go find it again.  Given how some sites 
>censor stuff, it could be gone for good. Generally, I can either catch it in 
>the trash or on the backup that hasn't been updated yet.  Given time, I'll 
>miss one one day. 
>
>The issues having a lot of files causes.  lol 
>
>Dale
>
>:-)  :-) 

You might consider just running ffmpeg or something headless like that over the 
file to see if there are errors before trashing the old version.  Should even 
be pretty easy to script.  If pass, replace old file, if fail raise notice box. 
Then you don't have to sit and wait for results.
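
Something along these lines, for instance (paths are placeholders; -f null just decodes the whole file and throws the output away):

ffmpeg -v error -i new-video.mkv -f null - 2>/tmp/vidcheck.log
if [ -s /tmp/vidcheck.log ]; then
    echo "new-video.mkv has decode errors, keeping the old copy"
else
    mv new-video.mkv /home/dale/videos/
fi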

LMP


RE: [gentoo-user] Backup program that compresses data but only changes new files.

2022-08-15 Thread Laurence Perkins
>>
>
>
>Duplicity sounds interesting except that I already have the drive encrypted.  
>Keep in mind, these are external drives that I hook up long enough to complete 
>the backups then back in a fire safe they go.  The reason I mentioned being 
>like rsync, I don't want to rebuild a backup from scratch each time as that 
>would be time consuming.  I thought of using Kbackup ages ago and it rebuilds 
>from scratch each time but it does have the option of compressing.  That might 
>work for small stuff but not many TBs of it.  Back in the early 90's, I 
>remember using a backup software that was incremental.  It would only update 
>files that changed and would do it over several floppy disks and compressed it 
>as well.  Something like that nowadays is likely rare if it exists at all 
>since floppies are long dead.  I either need to split my backup into two 
>pieces or compress my data.  That is why I mentioned if there is a way to 
>backup first part of alphabet in one command, switch disks and then do second 
>part of alphabet to another disk. 
>
>Mostly, I just want to add compression to what I do now.  I figure there is a 
>tool for it but no idea what it is called.  Another method is splitting into 
>two parts.  In the long run, either should work and may end up needing both at 
>some point.  :/   If I could add both now, save me some problems later on.  I 
>guess.
>
>I might add, I also thought about using a Raspberry Pi thingy and having sort 
>of a small scale NAS thing.  I'm not sure about that thing either tho.  Plus, 
>they pricey right now.  $$$
>
>Dale
>
>:-)  :-)
>

Ok, so you have a few options here.  Duplicity and Borg seem to be two of the 
most popular, and with good reason.  They are quite powerful.  

Duplicity, because of the massive number of storage backends it supports: the 
difference between backing up to your on-site disks and shooting the backup off 
over the Internet to practically any storage service you care to think of is 
one parameter.  (And I recommend, if nothing else, coordinating with a friend 
in a different city to do precisely this.  Fire safes are good to have, but the 
contents don't always survive a really big fire.)
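
The invocation barely changes between targets, e.g. (paths and host are placeholders; add --encrypt-key or --no-encryption as appropriate, since in your case the drive underneath is already encrypted):

duplicity /home/dale file:///mnt/backup/dale
duplicity /home/dale sftp://friend@offsite.example.net//backups/dale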

Borg is more picky, it only directly works to a local disk or via ssh.  But 
that's because it has a potent, chunk-based storage algorithm similar to what 
rsync uses to save transfer bandwidth.  It's very good at finding duplicate 
files, or even duplicate pieces of files, and storing them only once.  This 
makes it amazingly good for things like VM images or other large files which 
accumulate small changes over time, or full OS backups (you'd be amazed how 
many duplicate files there are across a Linux OS).
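
A typical Borg round looks something like this (repo path is a placeholder; compression and prune policy to taste):

borg init --encryption=repokey /mnt/backup/borg-repo
borg create --stats --compression zstd /mnt/backup/borg-repo::{hostname}-{now} /home /etc
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 /mnt/backup/borg-repo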

Now, if you want to stick with old stuff that you thoroughly understand, that's 
fine too.  For a dirt simple program capable of incremental backups and 
splitting the archive between disks you're looking for...

wait for it...

tar.

Its ability to detect files which have changed is largely dependent on 
filesystem timestamps and the archive bit, so you have to make sure your usage 
pattern respects those.  And it doesn't really do deduplication.  But it 
actually has a reasonable set of backup features, including archive splitting.  
Your backup storage doesn't even need to support random access, and doesn't 
even need a filesystem.  A bunch of my backups are on BD-REs.  You just tell tar 
how big the disk is, pop it in, and hit go.  When it's full it asks for another 
one.  There are a few updated versions of tar which add things like indexes for 
fast seeking and other features which are handy on large data sets.
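
A full-plus-incremental, multi-volume run looks roughly like this (sizes and paths are placeholders; older tar versions want --tape-length in units of 1024 bytes rather than with a size suffix, GNU tar won't compress a multi-volume archive, and rerunning the same command with the same .snar file gives you the incremental):

tar --create --multi-volume --tape-length=4G \
    --listed-incremental=/root/media.snar \
    --file=/mnt/backup/media.tar /home/dale/media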

Personally these days I tend to use Borg, because it deduplicates really well, 
and archives can be thinned out in any order.  It's also useful that you can 
put the backup archive in "append only" mode so that if anyone gets ransomware 
onto your system it's much more difficult for them to corrupt your backups.

The other thing is data integrity checking on your storage.  Yes, disks have 
built-in ECC, but it's not terribly good.  As annoying as it might be to have 
to hook up more than one disk at a time, BTRFS RAID not only catches complete 
read failures, it also keeps its own checksums, so it can detect and recover 
from even single bit flips.  And it supports in-line compression.  (How well 
that works obviously depends on how compressible your data is.)  You can do 
similar things with LVM and/or mdraid, but the BTRFS checksums are the most 
comprehensive I've seen so far.

For optical media there's dvdisaster which can generate Reed-Solomon redundancy 
data in a variety of ways.  (Yes, I know, nobody uses optical any more...  But 
what other storage is easily available that's EMP-proof?  Solar flares can be 
wicked when they happen.)  

And there's one more, that I haven't used in years, and I'm not sure how well 
it would work with Gentoo, but it was still alive as of 2020.  mondorescue.org 
is an interesting 

RE: [gentoo-user] About to have fiber internet and need VPN info

2022-08-09 Thread Laurence Perkins
>
>> 
>> Thank goodness I don't live in the good ol' US of A. The land of the 
>> free? Hm...
>
>We still have the protections introduced by the EU. Let's hope the government 
>aren't about to tear them up... oh, they are.

America has been slowly going fascist for coming up on 100 years now.  That's 
what happens when you let your government run your schools and teach most of 
the children that the government is staffed by saints who only want to keep 
kids out of the coal mines and prevent businesses from poisoning their 
customers.

But the incompetent bungling surrounding the whole COVID mess seems to have a 
lot of people waking up to the fact that the clowns in charge don't even know 
what parts of the economy actually are "essential", and are more interested in 
maintaining their own power and status than actually being helpful.

So we'll see what happens as the food and fuel shortages keep rolling in.  

LMP



RE: [gentoo-user] About to have fiber internet and need VPN info

2022-08-09 Thread Laurence Perkins
>-Original Message-
>From: Peter Humphrey  
>Sent: Monday, August 8, 2022 4:08 PM
>To: gentoo-user@lists.gentoo.org
>Subject: Re: [gentoo-user] About to have fiber internet and need VPN info
>
>On Monday, 8 August 2022 17:25:08 BST Laurence Perkins wrote:
>
>> I would strongly recommend getting your own equipment with most ISPs 
>> since the rental price on their equipment will buy a new modem yearly.
>> 
>> Now, that said, the other ISP in the area also has a backdoor, and 
>> they try to connect to it any time they're running diagnostics, and if 
>> the connection fails then they reset the circuit.  So your choice is 
>> use one of their sanctioned models and leave the backdoor alone or 
>> else have sporadic dropouts.
>
>Life isn't like that in the UK - at least, I hope not. I'm not aware of any 
>such behaviour by my ISP.

I've heard a few, similar stories from your side of the pond.  So far as I can 
tell it's a matter of incompetence, not maliciousness.  99% of their customers 
can barely use a computer, and anything that goes wrong, even locally, will get 
blamed on whichever service department said customer thinks of first.

Were it me running the show I'd have the backdoor only be active when somebody 
has pressed the WPS button on the router.  That would seem a good compromise 
between privacy and service.  But lots of ISPs have standard methods they 
developed in the 90s when the Internet was slightly friendlier and they often 
don't have anyone competent to update their policies on staff.

LMP




RE: [gentoo-user] About to have fiber internet and need VPN info

2022-08-08 Thread Laurence Perkins
>-Original Message-
>From: Michael  
>Sent: Saturday, August 6, 2022 1:32 AM
>To: gentoo-user@lists.gentoo.org
>Subject: Re: [gentoo-user] About to have fiber internet and need VPN info
>
>On Saturday, 6 August 2022 07:07:26 BST Dale wrote:
>> Michael wrote:
>> > All connections to banks are encrypted end-to-end for decades now 
>> > and the encryption has becoming stronger over the years.
>> 
>> That is likely true.  I still remember Snowden tho.  We don't know 
>> what backdoors are in use even for bank encryption.
>
>It's safer to assume state actors have full access to bank information.  The 
>hope is bad guys don't get access too!  ;-)
>

They have no reason to bother.  At least not in the USA.  US courts ruled 
decades ago that as soon as you give information to a third party you lose all 
expectation of privacy (yes, even if the third party promised privacy in the 
contract you have with them.)

Phone voice data and U.S. Mail are specifically protected legally, as are 
privileged communications with a lawyer, priest, or doctor (although that last 
category is so riddled with exceptions as to barely count).  Otherwise, anybody 
you do any business with at all can be forced to give up any and all records 
they have about you, no warrant required, and can be ordered not to tell you 
it's been done.

So government level actors spying on your banking just go to the bank.  And 
they've been getting more nosey in recent years.  Last I heard, any transaction 
over $600 gets automatically reported to them, and they keep talking about 
lowering that threshold.

LMP



RE: [gentoo-user] About to have fiber internet and need VPN info

2022-08-08 Thread Laurence Perkins



> -Original Message-
> From: Peter Humphrey  
> Sent: Friday, August 5, 2022 4:36 PM
> To: gentoo-user@lists.gentoo.org
> Subject: Re: [gentoo-user] About to have fiber internet and need VPN info
> 
> On Saturday, 6 August 2022 00:05:20 BST Wol wrote:
> > On 05/08/2022 15:53, Laurence Perkins wrote:
> > > Oh, and note that if your ISP works the same way as mine, they have 
> > > a backdoor into whatever equipment they happen to have provided.  So 
> > > definitely put it in bridge mode or whatever and use your own gear 
> > > if you don't want them spying on the doings of your internal 
> > > network.  One of the local ISPs here even calls it a "feature" and 
> > > will ping the mobile app they require you to install to use their 
> > > service every time they detect "suspicious" traffic on your internal 
> > > net...
> > What do they do if you don't know how to use a mobile? (Yes that IS a 
> > serious question - I provide tech support to family like that :-)
> 
> ...or if you use your own equipment?
> 
> --
> Regards,
> Peter.
> 

I would strongly recommend getting your own equipment with most ISPs since the 
rental price on their equipment will buy a new modem yearly.

Now, that said, the other ISP in the area also has a backdoor, and they try to 
connect to it any time they're running diagnostics, and if the connection fails 
then they reset the circuit.  So your choice is to use one of their sanctioned 
models and leave the backdoor alone, or else put up with sporadic dropouts.

LMP



RE: [gentoo-user] About to have fiber internet and need VPN info

2022-08-08 Thread Laurence Perkins


>-Original Message-
>From: Wol  
>Sent: Friday, August 5, 2022 4:05 PM
>To: gentoo-user@lists.gentoo.org
>Subject: Re: [gentoo-user] About to have fiber internet and need VPN info
>
>On 05/08/2022 15:53, Laurence Perkins wrote:
>> Oh, and note that if your ISP works the same way as mine, they have a 
>> backdoor into whatever equipment they happen to have provided.  So 
>> definitely put it in bridge mode or whatever and use your own gear if you 
>> don't want them spying on the doings of your internal network.  One of the 
>> local ISPs here even calls it a "feature" and will ping the mobile app they 
>> require you to install to use their service every time they detect 
>> "suspicious" traffic on your internal net...
>
>What do they do if you don't know how to use a mobile? (Yes that IS a serious 
>question - I provide tech support to family like that :-)
>
>
Their tech will set it up for you when he comes out to hook up the cables.  No, 
it's not been audited for security, so it could be doing anything at all to 
your phone.

LMP


RE: [gentoo-user] About to have fiber internet and need VPN info

2022-08-05 Thread Laurence Perkins
> -Original Message-
> From: Dale  
> Sent: Thursday, August 4, 2022 3:32 PM
> To: gentoo-user@lists.gentoo.org
> Subject: Re: [gentoo-user] About to have fiber internet and need VPN info
> 
> Dale wrote:
> > Howdy,
> >
> > Fiber internet is right around the corner.  Some neighbors are already 
> > connected and they're working their way to my area.  Once I get 
> > connected, I also want to use a VPN but only for some programs.  
> > Example, I want Ktorrent and a couple Firefox profiles to use VPNs but 
> > at least one Firefox profile I want to remain outside of VPN.  I 
> > watched a few videos but want to be sure I understand this right.  If 
> > I want software to use a VPN, I put the IP address of the VPN into the 
> > proxy settings of the program and that makes it use the VPN.  If I 
> > want it to not use the VPN, I leave the settings as they are now.  Am I 
> > understanding this correctly?
> >
> > Also, the package I'm getting is 500Mbs/sec.  What speeds should I 
> > really expect?  If memory serves me right, that is about 50MBs/sec, 
> > note the size of the B.  By the way, that is about 50 times faster 
> > than what I have now.  Also, up and down is the same.  Current up 
> > stream is a lot smaller.  Basically, I can upload files as fast as I 
> > download them.  Now I can upload videos or something.
> >
> > I'm looking at Surfshark and NordVPN.  Both seem to be good and at a 
> > decent price.  Anyone used one or both of these?
> >
> > Just trying to make sure I'm on the right path.
> >
> > Thanks.
> >
> > Dale
> >
> > :-)  :-)
> >
> > P. S.  Seamonkey is still not fetching emails automatically, I'm 
> > waiting on a upgrade to see if it gets fixed then.  If not, revive old 
> > thread and bring out the hammer.  ;-)
> >
> 
> 
> I now have fiber internet.  It was installed on Tuesday morning.  Speed test 
> shows it is awesome.  I downloaded a Knoppix iso file, about 4.4GBs worth, in 
> just about 5 minutes.  It would have taken about 9 hours on old DSL internet. 
>  I'm perfectly happy with that.
> 
> I also ordered a router that has openvpn installed on it.  I watched some 
> videos and think I can set it up to keep my traffic out of public view.  
> After I learned more about it, there's no reason to not use the VPN for all 
> traffic really.  That way I'm protected a little bit even if a website is not 
> secure.  May not help much but it may be enough. 
> 
> This is as big a difference as when I went from dial-up to DSL.  I can update 
> my OS in minutes as far as downloading goes.  Now it will be compiling that I 
> have to wait on.  Again.  ;-)
> 
> Now I know what some of you guys have been enjoying, and why. :-D 
> 
> Dale
> 
> :-)  :-) 
> 
> P. S.  Seamonkey still doesn't do email fetch automatically.  I may dig out 
> my mini sledge hammer.  :/ 
> 
> 

Most commercial VPN services are of limited value privacy-wise unless you use 
forged credentials to sign up and chain a couple of them together.

Otherwise you have only their word that they're not collecting exactly the same 
information your ISP would, and if you're worried about government spying you 
may as well paste a big sign saying "look here, this guy thinks he's being 
sneaky" all over your traffic.

The main place they're useful is if you want somebody a little bit more 
trustworthy than random open wifi hotspots to be the ones handling your data, 
or if you can get a VPN service provider in a country known not to cooperate 
with whoever the malicious actors are in your country.

With a full-on fiber connection, as long as it doesn't violate your ISP's terms 
of use, if you want to hide what you're actually doing, proper configuration of 
tor and/or i2p will likely serve you better, and help other people in 
oppressive countries do the same.

That said, openvpn is still useful for point-to-point links between friends.  
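
For a quick point-to-point link, the old static-key mode is about as simple as it 
gets.  A rough, untested sketch (hostnames, addresses, and the key path are 
placeholders; newer OpenVPN releases spell the key generation as 
"openvpn --genkey secret <file>"):

# generate the shared key once, then copy it to the other end over ssh or similar
openvpn --genkey --secret /etc/openvpn/static.key

# end A
openvpn --dev tun --ifconfig 10.8.0.1 10.8.0.2 --secret /etc/openvpn/static.key

# end B
openvpn --remote peerA.example.org --dev tun --ifconfig 10.8.0.2 10.8.0.1 --secret /etc/openvpn/static.key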

Oh, and note that if your ISP works the same way as mine, they have a backdoor 
into whatever equipment they happen to have provided.  So definitely put it in 
bridge mode or whatever and use your own gear if you don't want them spying on 
the doings of your internal network.  One of the local ISPs here even calls it 
a "feature" and will ping the mobile app they require you to install to use 
their service every time they detect "suspicious" traffic on your internal 
net...

LMP


RE: [gentoo-user] Is it possible to make Gentoo LiveGUI ISO image GUI environment look like Windows 10/11?

2022-08-05 Thread Laurence Perkins
> 
> 
> -Original Message-
> From: Turritopsis Dohrnii Teo En Ming  
> Sent: Friday, August 5, 2022 12:51 AM
> To: gentoo-user@lists.gentoo.org
> Cc: c...@teo-en-ming-corp.com
> Subject: Re: [gentoo-user] Is it possible to make Gentoo LiveGUI ISO image 
> GUI environment look like Windows 10/11?
> 
> On Fri, 5 Aug 2022 at 15:40, Neil Bothwick  wrote:
> >
> > On Fri, 5 Aug 2022 15:30:37 +0800, Turritopsis Dohrnii Teo En Ming wrote:
> >
> > > I believe many people will find Linux easy to use (user friendly) if 
> > > the GUI environment looks like Windows 10/11.
> >
> > The problem with that is Gentoo is not supposed to be easy to use, it 
> > is a "power user"'s distro. Sucking someone in with the guise of being 
> > like Windows is only likely to put them off Linux once they read page 
> > 1 of the installation instructions.
> >
> > There are distros that aim to be easy to use for Windows users, Gentoo 
> > is not one of them.
> >
> >
> > --
> > Neil Bothwick
> >
> > Don't be humble, you're not that great.
> 
> Noted with thanks.
> 
> Regards,
> 
> Mr. Turritopsis Dohrnii Teo En Ming
> Targeted Individual in Singapore
> 
> 

Now, that said, if you're willing to do all the maintenance work for them to 
keep the system behaving the way they need it to, then yes.
My mother is half-blind and for years I kept her old machine running with a 
Linux OS configured to look like Windows XP because she did a lot based on 
color and screen position.
For the Windows 10 look you'll need a window manager that has a nice menu 
available.  Something with searching built in, and with good theming support.
Then you just need to sort out your custom theme.  Don't try to distribute it 
though, MS will likely get twitchy and sue you if you do too good a job.

Note that KDE already has a reasonably Windowsy feel to it.  A lot of users 
might be happy with that.  
Although, over the years I've noticed that as long as the menu search function 
will match on keywords as well as just exact program names that's usually 
enough.  The rest of the appearance details matter a lot less.

LMP


RE: [gentoo-user] gcc 12.1.1 compile error on loongson2f Yeeloong

2022-07-29 Thread Laurence Perkins
Turning on zswap or, better yet, setting up a zram swap with a file to back it 
helps quite a bit on swapping since a lot of the data in question is highly 
compressible.

Additionally, drop the "-pipe" flag from CFLAGS in make.conf so that the compiler 
writes intermediate files out to disk instead of trying to keep them in memory.

And finally, if you have a more powerful machine sitting around, look into 
distcc and/or setting up a binhost.  You'll be much happier in the long-run.
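
For the make.conf and zram bits, a rough sketch (sizes, algorithm, and priority are 
just examples; the optional backing device wants a partition or loop device and a 
kernel built with CONFIG_ZRAM_WRITEBACK):

# /etc/portage/make.conf (relevant lines only)
CFLAGS="-O2"     # note: no -pipe
MAKEOPTS="-j1"

# one-off zram swap
modprobe zram num_devices=1
echo zstd > /sys/block/zram0/comp_algorithm   # or lzo, depending on the kernel
# echo /dev/sdXN > /sys/block/zram0/backing_dev   # optional, must be set before disksize
echo 1G > /sys/block/zram0/disksize
mkswap /dev/zram0
swapon -p 100 /dev/zram0   # higher priority than the disk swap so it gets used first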

LMP

-Original Message-
From: David M. Fellows  
Sent: Friday, July 29, 2022 7:50 AM
To: gentoo-user@lists.gentoo.org; johnstrass 
Subject: Re: [gentoo-user] gcc 12.1.1 compile error on loongson2f Yeeloong

>Dear friends, I am updating the gentoo mip-n32 system on my Yeeloong 
>netbook. During updating of the @world, it began to update the gcc
>12.1.1 from gcc 11.3.0 which is also the one doing the compilation. And 
>after a long-time compilation, it stops with error messages: " ..

Hmm. 1G ram and 1G swap is not much memory for building gcc.
You could check dmesg and syslog for out-of-memory kill messages occurring during 
your builds.

It would be easy to try again specifying MAKEOPTS="-j1".  Also make sure that 
you have no unnecessary other processes running during the build - especially 
memory hogs like a web browser.

You could add swap, but if you are doing much swapping it will be intolerably 
slow.

I would also arrange for some extra cooling of the laptop just in case the load is 
causing overheating.

DaveF

>
>
>rm gfdl.pod gcc.pod gcov-dump.pod gcov-tool.pod fsf-funding.pod gpl.pod 
>cpp.pod gcov.pod lto-dump.pod make[2]: *** [Makefile:5000:
>all-stage2-gcc] Error 2 make[1]: *** [Makefile:22474: stage2-bubble] 
>Error 2 make: *** [Makefile:22697: bootstrap-lean] Error 2 * ERROR:
>sys-devel/gcc-12.1.1_p20220625::gentoo failed (compile phase): * emake 
>failed * * If you need support, post the output of `emerge --info 
>'=sys-devel/gcc-12.1.1_p20220625::gentoo'`, * the complete build log 
>and the output of `emerge -pqv '=sys-devel/gcc-12.1.1_p20220625::gentoo'`.
>* The complete build log is located at
>'/var/tmp/portage/sys-devel/gcc-12.1.1_p20220625/temp/build.log'.
>* The ebuild environment file is located at 
>'/var/tmp/portage/sys-devel/gcc-12.1.1_p20220625/temp/environment'.
>* Working directory:
>'/var/tmp/portage/sys-devel/gcc-12.1.1_p20220625/work/build' * S:
>'/var/tmp/portage/sys-devel/gcc-12.1.1_p20220625/work/gcc-12-20220625'
>* Please include
>/var/tmp/portage/sys-devel/gcc-12.1.1_p20220625/work/gcc-build-logs.tar.
>bz2 in your bug report. *
>
>
> Failed to emerge sys-devel/gcc-12.1.1_p20220625, Log file:
>
> '/var/tmp/portage/sys-devel/gcc-12.1.1_p20220625/temp/build.log' "
>
>I also tried to emerge 11.3.1, but it still stopped at the compile stage.
>But I forgot to save the environment.
>
>
>Any help? The necessary log files are attached. Thanks.
>>> application/x-gzip attachment, name=buildlogs-12.1.1.tar.gz 
>>> application/x-gzip attachment, name=buildlogs-11.3.1.tar.gz




RE: [gentoo-user] MAC whitelisting and UDP traffic.

2022-07-14 Thread Laurence Perkins

>>
>> From: Adam Carter adamcart...@gmail.com
>> Sent: Tuesday, July 12, 2022 11:13 PM
>> To: Gentoo User gentoo-user@lists.gentoo.org
>> Subject: Re: [gentoo-user] MAC whitelisting and UDP traffic.
>>
>>
>> On Tue, Jul 12, 2022 at 7:35 AM Laurence Perkins lperk...@openeye.net wrote:
>> Ok, I asked a while ago about whitelisting MAC ranges for firewall rules, 
>> and just so you know, adding 16 million potential MAC addresses to the 
>> firewall…  Doesn’t work well…  No matter how you do it.  So I had to write a 
>> daemon to monitor which ones were local and add just those.  Whatever.
>>
>> That brings me to the next problem.  The routing and NAT work just fine if 
>> I’m letting everything through.  But if I’m dropping connections that don’t 
>> come from authorized devices then UDP only works in the outbound direction…  
>> TCP is fine.
>>
>> For reference, the rules consist of:
>>
>> iptables -t nat -I POSTROUTING -o  -j MASQUERADE
>> iptables -A FORWARD -i   -o  -m state --state 
>> RELATED,ESTABLISHED -j ACCEPT
>>
>> And then the daemon adds a:
>> iptables -A FORWARD -i  -o  -m mac --mac-source <MAC ADDRESS> -j ACCEPT
>>
>> for each authorized device.
>>
>> TCP works perfectly.
>> UDP based protocols send out just fine, but any replies get blocked if the 
>> FORWARD chain’s default policy is  DROP.
>>
>> Now…  Everything I’m reading says that it’s supposed to be able to associate 
>> UDP replies based on port number, which indeed it must be doing in order for 
>> them to get translated correctly and directed to the correct device inside 
>> the NAT when the default policy is ACCEPT.
>>
>> So why is that rule to accept related packets not triggering for them?
>
> I also would have expected the UDP replies to be permitted via -state 
> RELATED,ESTABLISHED.
>
> Do they at least get into the state table;
> grep udp /proc/net/nf_conntrack
>

Well, they had all been getting into that list for the outbound packets, but never 
noting that any replies had come back...

This morning when I went to pull a few lines for this reply, they're there and 
marked as ASSURED.

And yet, of the four devices attached to the inside for testing purposes, only 
one of them has established its OpenVPN connection.

So I'm going to set up some more detailed tests and some additional hardware 
arrangements and see if I can get more specific about what does and does not 
connect properly.
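
In case anyone else ends up chasing the same thing, the two checks I'm leaning on 
are logging whatever falls through to the DROP policy and watching the UDP 
conntrack entries while a client tries to connect (log prefix is arbitrary):

# last rule in FORWARD, so anything not accepted gets logged before the policy eats it
iptables -A FORWARD -j LOG --log-prefix "FWD-DROP: " --log-level 4

# watch the UDP entries appear and (hopefully) flip to ASSURED
watch -n1 'grep udp /proc/net/nf_conntrack'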

LMP


[gentoo-user] MAC whitelisting and UDP traffic.

2022-07-11 Thread Laurence Perkins
Ok, I asked a while ago about whitelisting MAC ranges for firewall rules, and 
just so you know, adding 16 million potential MAC addresses to the firewall...  
Doesn't work well...  No matter how you do it.  So I had to write a daemon to 
monitor which ones were local and add just those.  Whatever.

That brings me to the next problem.  The routing and NAT work just fine if I'm 
letting everything through.  But if I'm dropping connections that don't come 
from authorized devices then UDP only works in the outbound direction...  TCP 
is fine.

For reference, the rules consist of:

iptables -t nat -I POSTROUTING -o  -j MASQUERADE
iptables -A FORWARD -i   -o  -m state --state 
RELATED,ESTABLISHED -j ACCEPT

And then the daemon adds a:
iptables -A FORWARD -i  -o  -m mac --mac-source <MAC ADDRESS> -j ACCEPT

for each authorized device.

TCP works perfectly.
UDP based protocols send out just fine, but any replies get blocked if the 
FORWARD chain's default policy is  DROP.

Now...  Everything I'm reading says that it's supposed to be able to associate 
UDP replies based on port number, which indeed it must be doing in order for 
them to get translated correctly and directed to the correct device inside the 
NAT when the default policy is ACCEPT.

So why is that rule to accept related packets not triggering for them?  And is 
there a standard way to do this?  Because everything I can dig up on Google is 
examples just of routing with no NAT involved and nobody reporting this problem 
anywhere.

I'll keep playing with it, but at this point I'm just throwing random stuff 
against the wall, so any hints would be appreciated.

LMP


RE: [gentoo-user] continuing an emerge

2022-07-07 Thread Laurence Perkins
> 
> 
> -Original Message-
> From: Neil Bothwick  
> Sent: Wednesday, July 6, 2022 7:05 AM
> To: gentoo-user@lists.gentoo.org
> Subject: Re: [gentoo-user] continuing an emerge
> 
> On Wed,  6 Jul 2022 15:48:04 +0200 (CEST), k...@aspodata.se wrote:
> 
> > With nodejs, I get:
> > 
> > /usr/lib/gcc/x86_64-pc-linux-gnu/10.2.0/../../../../x86_64-pc-linux-gnu/bin/ld:
> > failed to set dynamic section sizes: memory exhausted
> > 
> > so, instead of redoing the whole compile, can I, after reducing 
> > current memory usage, set up emerge to retry the linking and continue 
> > from there ?
> 
> ebuild /path/to/ebuild merge
> 
> should do it.
> 
> 
> --
> Neil Bothwick
> 
> In the 60's people took acid to make the world weird.
> Now the world is weird and people take Prozac to make it normal.
>

You can also set FEATURES="keepwork keeptemp" for the one, specific emerge 
command.  Setting them permanently is not generally recommended and emerge will 
tell you so.
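
In other words, something along these lines (the package atom is just the example 
from this thread):

FEATURES="keepwork keeptemp" emerge --oneshot net-libs/nodejs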

LMP



RE: [gentoo-user] pain, PAIN, and more pain again.

2022-06-20 Thread Laurence Perkins
I know how it goes.  But do be fair, it was the motherboard manufacturers who 
changed how the boot process works, not the penguins.    The penguins are just 
trying to adapt to it.

I'm not sure where your problems are coming from.  I use CUPS by IP address in 
my setup.  You just need to also use static IP addresses for all your printers, 
etc.  Or else things may randomly stop working when your router reboots.

As for your update woes, I handle it by using a setup that supports 
snapshottable subvolumes.  I put my personal files on one, and the OS on 
another.  Then, for updates, you take a snapshot first, and if it all goes 
kablooey you can easily roll back while you sort out the issue.  For bonus 
points you can update the snapshot and then switch to it and delete the 
original only once you have everything working the way you like it again.
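
On btrfs the whole dance is only a couple of commands.  A rough sketch, assuming 
the OS root is a snapshottable subvolume and a /.snapshots directory exists (LVM 
thin snapshots work much the same way):

btrfs subvolume snapshot / /.snapshots/pre-update-$(date +%Y%m%d)
emerge --ask --update --deep --newuse @world
# if it goes sideways, point the bootloader or the default subvolume back at the
# snapshot and sort it out at your leisure, e.g.:
# btrfs subvolume set-default <subvolume-id> /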

In this case though, I think your trouble comes from trying to install a 
version of Chromium that's still in development.  That's why it was masked.  
Masks like that usually indicate that there's something horribly wrong with it 
and you shouldn't use it unless there's no choice.  Try 103 maybe?  The betas 
usually work.

LMP

-Original Message-
From: Alan Grimes  
Sent: Saturday, June 18, 2022 8:54 PM
To: Gentoo User 
Subject: [gentoo-user] pain, PAIN, and more pain again.

CAUTION: This is an EXTERNAL email. Do not click links or open attachments 
unless you recognize the sender and know the content is safe.

I probably owe you guys an apology which is not forthcoming, at least not tonight 
for obvious reasons. Instead, let me explain my problem-solving procedure.


At this point in time, if you have a problem, especially on linux, it is almost 
never the problem that existed before a penguin tried to solve it for you. 
Instead, the problem you are facing is the problem caused when either a penguin 
or a whole flock of penguins sanctimoniously decided that you needed a solution 
and that this solution would be installed on your system without your 
intervention, or knowledge, or consent.
Occasionally they get away with this and your life gets a little easier and 
you are none the wiser. Usually, however, their solution to the thing that 
wasn't really bothering you in the first place fails spectacularly and spoils 
your whole week.

Ok, what are your options?

A: Figure out what brand of dope the penguin was on and what dose, ratio of 
dope to vodka, and get on precisely the same prescription to get in just the 
right mindset to try to understand how their crappy software was supposed to 
work so you can fix it.

B. Exterminate the penguin's software from your computer so that you can 
experience the underlying problem in its rawest and most brutal form and find 
that it's not even worth solving or that it admits an utterly trivial solution 
that, at least, makes sense to you and that you won't have any trouble 
maintaining for the long haul.


For me, choice A is always always always wrong. The computer is supposed to 
work for you, not the other way around. If you are spending hours, or even 
weeks of your rotten miserable life chasing down obscure answers to questions 
you shouldn't even have, then you are doing computing wrong and need to stop, 
take two steps back, and figure out what your real problem is.

I get angry when Linux does not let me do that. =|

Instead, the penguins seem obsessed with inventing more and more garbage that I 
need to manage.

Example:

Old way:

"My boot drive is plugged into this port on the motherboard"

New way:

Spend hours figuring out what your UUID is, create a physical pocket folder 
(which you will subsequently have to store and manage) with the UUID which is 
long and complex and copy it by hand, very carefully, then set that up in your 
mtab

Example:

Old way: "My network printer is at this IP address"

New way:

Master a list of 5-6 obscure and arcane packages that let you assign "human 
friendly" network names to devices and then get all those packages working with 
each other so you can print. Yeah, it looks more like christmas tree wiring 
than a solution to a problem, You'll be doing it again from scratch next month 
when we decide to change it again for no reason and No, you can't print using 
the old way.

Ie, the printer I spent $400 on so that I could print from anywhere in my house 
only works with my windows computer because I made the mistake of updating CUPS.


It's only been 3 months since I updated last so therefore I'm hurting BAD 
tonight. I had to update the hack I used last time to get around the libicuuc 
fuckup by implementing the same hack again but version bumped... (symlink 
1.71.1 to 1.70)... It seemed gung ho about python
3.11 but it turned out that 3.11 is still beta and that I should ignore it.

The maintainers of steam overlay seem to have given up, so I used layman to -d 
it and now I get !!! Invalid PORTDIR_OVERLAY (not a dir): 
'/var/lib/layman/steam-overlay'
each 

RE: [gentoo-user] Reinstall

2022-06-20 Thread Laurence Perkins
Be careful about the binpackages if you add any instruction set optimizations 
to your make.conf.  Trying to install incompatible binaries will wreck your 
whole day.

Otherwise, I usually just save the world file and any USE flags that I remember 
setting on purpose.  Save copies of the keywords and whatnot, but it's also a 
good opportunity to clean out the cruft that always slips in over time.

Distfiles can be good too so you don't have to download everything again.

If you're moving everything to a new computer, you can also just try a straight 
copy of everything...  As long as your instruction sets are compatible it goes 
pretty quickly.  I've got an install that's on its third set of hardware now...
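
In practice the "rebuild kit" is small enough to just tar up on a schedule.  Rough 
sketch, paths per a standard layout (distfiles and binpkgs are optional, they just 
save re-downloading and re-building; adjust PKGDIR and friends to your setup):

tar czf gentoo-rebuild-kit.tgz \
    /var/lib/portage/world \
    /etc/portage \
    /var/cache/distfiles \
    /var/cache/binpkgs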

LMP

-Original Message-
From: Francisco Ares  
Sent: Sunday, June 19, 2022 9:24 AM
To: gentoo-user 
Subject: Re: [gentoo-user] Reinstall

Good day!

First of all, sorry for the late reply. I had to attend to a course a bit far 
away from home and it was pretty intense.

Now, back home, I'll try the suggestions, thanks!

Just for the sake of preventing a future failure, besides personal files 
(minimum and obvious) the "world" file and the binary packages, built along 
with the package installation, what else should I backup so that I would be 
able to quickly restore the same full working Gentoo in a new hardware without 
having to work from stage3 up? The portage tree is one of those items, for 
sure. But what else?

Thank you all and a good sunday.
Francisco



RE: [gentoo-user] netfilter partial MAC filtering

2022-06-17 Thread Laurence Perkins


> -Original Message-
> From: Samuraiii  
> Sent: Thursday, June 16, 2022 9:48 PM
> To: gentoo-user@lists.gentoo.org
> Subject: Re: [gentoo-user] netfilter partial MAC filtering
> 
> On Fri, 2022-06-17 at 01:32 +0000, Laurence Perkins wrote:
> > I am designing a small system with a switch and an uplink.  It needs 
> > to be able to forward traffic from trusted, and only trusted, devices 
> > connected to the switch out through the uplink.
> >  
> > Since all potential trusted devices will have the same MAC OUI prefix 
> > in this case, the immediately obvious course of action would be to 
> > base the decision on that.
> >  
> > Unfortunately, there doesn't seem to be a good way to do so.  There 
> > was
> > https://serverfault.com/questions/877576/shorewall-wildcard-filter-
> > by-source-mac-address from a few years ago, with the answer being "You 
> > can't."
> >  
> > While I didn't bother to test it, I'm guessing that adding about 16 
> > million MAC filtering rules to the firewall won't be good for 
> > performance.  I briefly thought I could use the string matching or the 
> > U32 filters, but unfortunately it appears that they can't access 
> > anything prior to the start of the IP section, so picking bytes out of 
> > the ethernet header isn't possible.
> >  
> > I did find
> > https://martin.uy/blog/wildcard-support-for-mac-addresses-in-netfilter
> > -linux-kernel-and-iptables/
> >But it's old, and has something of a glaring flaw with regard to 
> > false wildcard matches.
> >  
> > I can think of a few ways to do this, mostly involving somehow 
> > monitoring incoming packets and noting the MAC addresses which have 
> > the correct prefix, and then having a little daemon pick up those 
> > addresses and add rules to let them through.
> >  
> > Either that, or try to write a custom netfilter module.
> >  
> > None of this seems particularly "fun" to sort out.  Does anybody know 
> > of any common solutions for doing packet matching based on just part 
> > of a MAC address on Linux?  Failing that, some advice about whether 
> > the system daemon and packet inspection route or the netfilter module 
> > route is more likely to be stable and maintainable would be 
> > appreciated.
> >  
> > Thanks,
> > LMP
> Hi,
> I would recommend to look into nftables and its set feature...
> It should perform better with one rule for multiple matches.
> I bet no one had tried it with 16M items, but it is the best, as far as I 
> know.
> Cheers
> S
> 
> 
> https://wiki.nftables.org/wiki-nftables/index.php/Sets
> https://developers.redhat.com/blog/2017/04/11/benchmarking-nftables#the_first_test
> 
I guess it's worth a shot.  If it has enough intelligence to collapse the 
search list so it only has to look for the prefix once then it would at least 
reject forbidden connections quickly...

I'll generate out the whole list and see what happens and let you guys know.
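
For reference, the set version I'm going to try looks roughly like this (table, 
chain, and set names are made up, and the element is just an example; the daemon 
would add and delete elements as devices come and go):

nft add table inet trustfw
nft add chain inet trustfw forward '{ type filter hook forward priority 0; policy drop; }'
nft add set inet trustfw trusted_macs '{ type ether_addr; }'
nft add rule inet trustfw forward ct state related,established accept
nft add rule inet trustfw forward ether saddr @trusted_macs accept
nft add element inet trustfw trusted_macs '{ 00:11:22:33:44:55 }'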

LMP


[gentoo-user] netfilter partial MAC filtering

2022-06-16 Thread Laurence Perkins
I am designing a small system with a switch and an uplink.  It needs to be able 
to forward traffic from trusted, and only trusted, devices connected to the 
switch out through the uplink.

Since all potential trusted devices will have the same MAC OUI prefix in this 
case, the immediately obvious course of action would be to base the decision on 
that.

Unfortunately, there doesn't seem to be a good way to do so.  There was 
https://serverfault.com/questions/877576/shorewall-wildcard-filter-by-source-mac-address
 from a few years ago, with the answer being "You can't."

While I didn't bother to test it, I'm guessing that adding about 16 million MAC 
filtering rules to the firewall won't be good for performance.  I briefly 
thought I could use the string matching or the U32 filters, but unfortunately 
it appears that they can't access anything prior to the start of the IP 
section, so picking bytes out of the ethernet header isn't possible.

I did find 
https://martin.uy/blog/wildcard-support-for-mac-addresses-in-netfilter-linux-kernel-and-iptables/
   But it's old, and has something of a glaring flaw with regard to false 
wildcard matches.

I can think of a few ways to do this, mostly involving somehow monitoring 
incoming packets and noting the MAC addresses which have the correct prefix, 
and then having a little daemon pick up those addresses and add rules to let 
them through.

Either that, or try to write a custom netfilter module.

None of this seems particularly "fun" to sort out.  Does anybody know of any 
common solutions for doing packet matching based on just part of a MAC address 
on Linux?  Failing that, some advice about whether the system daemon and packet 
inspection route or the netfilter module route is more likely to be stable and 
maintainable would be appreciated.

Thanks,
LMP


RE: [Possible Malware Fraud]Re: [gentoo-user] PySimpleBuild + MariaDB C Client

2022-06-13 Thread Laurence Perkins
Xbx shows up every once in a while, seems to be a young kid who doesn’t read 
English terribly well or know how mailing lists work and isn’t subscribed so 
doesn’t get list replies.

The couple I’ve picked apart didn’t seem malicious, just not actually necessary 
or particularly useful.  But I’d hate to discourage someone who’s trying to 
learn.

Of course, that could be a double-bluff of some kind just waiting for us to 
trust him.  Who knows.

LMP

From: coa...@tuta.io 
Sent: Saturday, June 11, 2022 12:21 AM
To: gentoo-user@lists.gentoo.org
Subject: [Possible Malware Fraud]Re: [gentoo-user] PySimpleBuild + MariaDB C 
Client



WARNING: Your email security system has determined the message below may be a 
potential threat.

It may trick victims into clicking a link and downloading malware. Do not open 
suspicious links.

If you do not know the sender or cannot verify the integrity of the message, 
please do not respond or click on links in the message. Depending on the 
security settings, clickable URLs may have been modified to provide additional 
security.



Dear xbx7336,

I'd like to thank you for sharing this little program with us; nevertheless I 
need you to understand that you just posted an unidentifiable zip package on the 
mailing list of a security-, simplicity-, and modularity-minded linux distro, 
bypassing the fact you just used a .zip rather than a .tar.*.  I feel like I 
should state that simplicity of usage, security of thought and modularity are all 
reasons >>Git<< exists.

I don't know what PySimpleBuild is but MariaDB has been released under a GPL 
license so there should NOT be a reason not to post your "zip's" contents onto 
a github, gitlab or any other third party or your own git instance.

Let's not even mention the security risks of downloading something from any 
(even trusted) email due to any and all email protocols having weaker security 
than a tamagotchi.

Jun. 7, 2022, 11:37 by xbx7...@gmail.com:
Dear All!

I would like to share this little program.

xbx.



RE: [gentoo-user] Re: Searching the list archives

2022-06-13 Thread Laurence Perkins



>-Original Message-
>From: Peter Humphrey  
>Sent: Sunday, June 12, 2022 2:17 AM
>To: gentoo-user@lists.gentoo.org
>Subject: Re: [gentoo-user] Re: Searching the list archives
>
>On Sunday, 12 June 2022 09:04:30 BST Nuno Silva wrote:
>> On 2022-06-12, Peter Humphrey wrote:
>> > Hello list,
>> > 
>> > Does any site out there offer a search function over its whole archive?
>> > Going through one month at a time is going to take for ever.
>> 
>> I think the search feature at marc.info goes through the entire 
>> archive at once:
>> 
>> https://marc.info/?l=gentoo-user
>
>Certainly does - thanks Nuno.
>
>--
>Regards,
>Peter.
>
>
Now if only there were a way for people to find this message without having to 
search for it a month at a time...  :P

LMP



RE: [gentoo-user] *sob*

2022-06-13 Thread Laurence Perkins





>-Original Message-
>From: Neil Bothwick  
>Sent: Sunday, June 12, 2022 4:28 AM
>To: gentoo-user@lists.gentoo.org
>Subject: Re: [gentoo-user] *sob*
>
>On Sun, 12 Jun 2022 00:42:56 -0400, Alan Grimes wrote:
>
>> All of the following statements are true:
>> 
>> 1. I am never not running seamonkey, it is always open at all times on 
>> my left hand monitor, If at any point it was not on my left hand 
>> monitor either I must have been going through a power failure or 
>> something had broken it, because it is working at the moment I have 
>> decided not to use emerge update for the next six months, two months 
>> in to that period at this point.
>> 
>> 2. Seamonkey is my default browser.
>> 
>> 3. Seamonkey cannot be launched twice, ie it cannot be launched on my 
>> right hand monitor because the monitors are using separte X servers 
>> and it would need to be launched twice to come up on that monitor.
>
>The last statement is not quite true. It cannot be launched twice with the 
>same profile. This was the first hit when googling "seamonkey launch two 
>instances"
>
>http://kb.mozillazine.org/Opening_a_new_instance_of_your_Mozilla_application_with_another_profile
>
>Is there a particular reason for running two X servers rather than one desktop 
>on two screens? The latter completely avoids the problem you are having.
>
>
>--
>Neil Bothwick
>
>If God can't help you, how about Mr. Coffee?

I can think of a few ways around this mess actually...

1.  Use one X server for both screens, and should you happen to need a separate 
X server to contain something use xnest, xephyr, or a headless VNC server to 
run another X that you can have there as a window.
2.  Since you're using two X instances, and therefore two instances of your 
window manager, etc., create two profiles for all that which use separate 
seamonkey profile folders for their default browser.
3.  Go all the way and run your two X servers as two completely different user 
accounts.  I presume you're already using separate keyboard and mouse for each 
anyway since I'm not sure how it would work otherwise.
4.  Update your default browser string to explicitly set which X display to 
talk to.  Then it will open on your left monitor no matter where it's launched 
from.
5.  You can't really get away from having a default browser, and you probably 
don't want to since it'll break other things later, but you could make it 
launch a wrapper script that looks at what display it's on and behaves 
accordingly (rough sketch below).
6.  There used to be something called "libteleport" that would let you transfer 
X programs from one display to another.  It was rarely used because programs 
had to be built against it, and most binary distros didn't...  But on Gentoo 
that should theoretically be easy...
7.  Several other browsers will automatically open new windows on whatever 
display their running instance is on (even over SSH...  long story...)  Since 
you don't want to switch to one of those you could port that functionality to 
Seamonkey.

I've probably missed a few options, that's just what I can think of off the top 
of my head.  
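
For option 5, the wrapper really doesn't need to be much more than this (profile 
names are made up; create them with seamonkey -ProfileManager first, and drop 
-no-remote if you want link clicks to reuse an already-running instance):

#!/bin/sh
# pick a SeaMonkey profile based on which X display launched us
case "$DISPLAY" in
    :0*) exec seamonkey -P left  -no-remote "$@" ;;
    :1*) exec seamonkey -P right -no-remote "$@" ;;
    *)   exec seamonkey "$@" ;;
esac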

LMP



RE: [gentoo-user] chromium shortcuts?

2022-05-31 Thread Laurence Perkins
Compile time for Chromium currently seems to run about four hours on an i7-9700.

-Original Message-
From: Jorge Almeida  
Sent: Sunday, May 29, 2022 2:26 PM
To: gentoo-user@lists.gentoo.org
Subject: Re: [gentoo-user] chromium shortcuts?

On Sun, May 29, 2022 at 10:16 PM w...@op.pl  wrote:
>
> If non-free software is not of any concern for you, then maybe try 
> Vivaldi browser? It is based on the same engine and feels quite like 
> chromium but with many many more options (USE=widevine for netflix ;) 
> )
>
I have no concerns regarding licensing et al., but I once tried Vivaldi and was 
not impressed (I don't remember why). I'm into chrom* only because it provides 
a smooth experience re youtube & gmail. I don't think other browsers can do it, 
but I wouldn't mind being wrong.

BTW: chromium also has a "widevine" flag. I don't know how much time does it 
take to compile chromium nowadays, but I don't have netflix anyway :(



RE: [gentoo-user] Reinstall

2022-05-11 Thread Laurence Perkins
> -Original Message-
> From: Neil Bothwick  
> Sent: Wednesday, May 11, 2022 11:35 AM
> To: gentoo-user@lists.gentoo.org
> Subject: Re: [gentoo-user] Reinstall
> 
> On Wed, 11 May 2022 16:45:31 +0000, Laurence Perkins wrote:
> 
> > Alternatively, fully update the system before putting in your world 
> > file, and then instead of copying in the world file all at once just 
> > run a loop to emerge the lines in it one at a time.
> 
> Now you mention it, that's what I did last time, although my loop emerge ten 
> lines at a time to cut down on the number of dependency recalculations.
> 
Well, and have it check the exit status and spit any lines that fail into 
another file to try again later.  Then shuffle the new file so the bad ones 
move around, or do the second pass one at a time.
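
Something like this is what I had in mind (file names are arbitrary):

# first pass: try everything once, remember what failed
while read -r atom; do
    emerge --oneshot "$atom" || echo "$atom" >> failed.txt
done < world.saved

# second pass: shuffle the stragglers so the blockers land in a different order
shuf failed.txt | while read -r atom; do
    emerge --oneshot "$atom"
done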

LMP



RE: [gentoo-user] Reinstall

2022-05-11 Thread Laurence Perkins
And sometimes if you use --binpkg-respect-use=n and/or --with-bdeps=n you can 
jostle it into using more of the binaries on both passes.

Additionally, you can use the ebuild command directly to force it to just install 
things without checking all the dependencies; that's sometimes handy for breaking 
cycles too.
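
Concretely, something like this (the ebuild path in the second command is purely 
hypothetical, substitute whatever package is wedging the cycle):

# lean on the existing binary packages as hard as possible
emerge --ask --usepkg --binpkg-respect-use=n --with-bdeps=n @world

# install one package directly, skipping dependency resolution entirely
ebuild /var/db/repos/gentoo/dev-libs/libfoo/libfoo-1.0.ebuild merge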

Do pay careful attention to the merge order though.  Make sure any updates to 
glibc happen first or else you'll wedge your system pretty badly.  Having 
static-compiled busybox installed as a backup is often a good idea.

Alternatively, fully update the system before putting in your world file, and 
then instead of copying in the world file all at once just run a loop to emerge 
the lines in it one at a time.

LMP

-Original Message-
From: David Palao  
Sent: Wednesday, May 11, 2022 5:26 AM
To: gentoo-user@lists.gentoo.org
Subject: Re: [gentoo-user] Reinstall

Hi,

What I would suggest is to try yo emerge @world first with a reduced list of 
USE flags, maybe the default, and after success you could introduce back the 
wanted USE flags and emerge @world once more.

It could be a bit too much compilation, but if you have already binary 
packages, it will not be so expensive the second round, IMHO.

Best

On 11/5/22 11:57, Francisco Ares wrote:
> Hello
>
> After a main HD failure, I'll have to reinstall Gentoo from almost 
> zero - I have a full and recent copy of the /etc directory and the 
> file /var/lib/portage/world in a secondary HD (along many personal 
> backups).
>
> Installation basics done, now it is time for an emerge world.
>
> Although the emerge lists is as huge as expected, it doesn't even 
> start, portage says there are cyclic USE flags that I should avoid at 
> the first moment, but may restore afterwards.
>
> But it doesn't say which are those USE flags that block each other.
>
> Is there any way to find those better than brute force?
>
> By the way, I also have a copy of all binary packages (I always use 
> the -b flag while emerging any package) in that second disk. But that 
> didn't help so far, even trying to use the -K flag. I thought on 
> un-tar'ing those binary packages by hand, but portage will be unaware 
> of this, not knowing the packages are installed.
>
> Any hint will be greatly appreciated!
>
> Thanks
>
> Francisco



RE: [gentoo-user] doing gentoo install disrupted

2022-04-14 Thread Laurence Perkins

> -Original Message-
> From: Dale  
> Sent: Thursday, April 14, 2022 8:29 AM
> To: gentoo-user@lists.gentoo.org
> Subject: Re: [gentoo-user] doing gentoo install disrupted
> 
> Jude DaShiell wrote:
> > I got down to emerge-webrsync and discovered an error in 
> > portage/make.conf on line 21 non-existent server then emerge-webrsync 
> > tried to get a time stamp and then speakup died.
> > I have a single ssd slot in this machine and may be able to attach the 
> > gentoo ssd to the computer by usb so I can recover contents of my 
> > make.conf file and find out what happened in there.  It will be a long 
> > way back to chroot before I can try this again and that's not 
> > happening today due to space weather.  The computer will be off until 
> > tomorrow and disconnected.
> >
> >
> >
> 
> 
> You can pick up where you left off at pretty much any point in a Gentoo 
> install.  If you have to shutdown or unmount your install, just follow the 
> mounting and chroot sections again.  Once you have that done, pick up where 
> you left off.  In the past, I've had to do that before and it does fine as 
> long as you remember where you left off. 
> 
> Hope that helps.
> 
> Dale
> 
> :-)  :-) 
> 
Even if you don't remember where you left off, just check if each step has 
already been done before you do it.

And unpacking the stage3 I think is the only one likely to actually hurt 
anything if you do it again.  The others would just waste a little time is all.

LMP


RE: [gentoo-user] Hard drive error from SMART

2022-04-12 Thread Laurence Perkins
>-Original Message-
>From: Wol  
>Sent: Tuesday, April 12, 2022 2:51 PM
>To: gentoo-user@lists.gentoo.org
>Subject: Re: [gentoo-user] Hard drive error from SMART
>
>On 12/04/2022 20:41, Laurence Perkins wrote:
>> LVM is good for being able to swap out drives easily but with the modern, 
>> huge drives you really want data checksums if you can get them.  Otherwise 
>> all it takes is a flipped bit somewhere to wreck your data and drive 
>> firmware doesn't always notice.  I think you can do that with LVM, but I've 
>> never looked into it for certain.
>
>Look at that link for my system that I posted. I use dm-integrity, so a 
>flipped bit will trigger a failure at the raid-5 level and recover.
>
>For those people looking at btrfs - note that parity-raid (5 or 6) is not a 
>wise idea at the moment so you don't get two-failure protection ...

Specifically if the system crashes or has a power failure there may be some 
data left hanging until it can complete a scrub.  Disk failures during that 
period may lose some of said data.

How much of a risk that is depends on the stability of your power and kernel 
and how much data turnover you have.  I only use it on systems with UPS power 
and additional backups.  Needs careful monitoring of the drives too since 
system crashes due to drive failures can leave you in rather a sticky mess.
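
For anyone following along, kicking off and checking that scrub is just (mount 
point is a placeholder):

btrfs scrub start /mountpoint
btrfs scrub status /mountpoint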

>
>Cheers,
>Wol
>
>


RE: [gentoo-user] Hard drive error from SMART

2022-04-12 Thread Laurence Perkins
>-Original Message-
>From: Dale  
>Sent: Tuesday, April 12, 2022 11:22 AM
>To: gentoo-user@lists.gentoo.org
>Subject: Re: [gentoo-user] Hard drive error from SMART
>
>Laurence Perkins wrote:
>>> -Original Message-
>>> From: Dale 
>>> Sent: Tuesday, April 12, 2022 10:08 AM
>>> To: gentoo-user@lists.gentoo.org
>>> Subject: Re: [gentoo-user] Hard drive error from SMART
>>>
>>> Rich Freeman wrote:
>>>> On Mon, Apr 11, 2022 at 9:27 PM Dale  wrote:
>>>>> Thoughts.  Replace as soon as drive arrives or wait and see?
>>>>>
>>>> So, first of all just about all my hard drives are in a RAID at this 
>>>> point, so I have a higher tolerance for issues.
>>>>
>>>> If a drive is under warranty I'll usually try to see if they will 
>>>> RMA it.  More often than not they will, and in that case there is 
>>>> really no reason not to.  I'll do advance shipping and replace the 
>>>> drive before sending the old one back so that I mostly have 
>>>> redundancy the whole time.
>>>>
>>>> If it isn't under warranty then I'll scrub it and see what happens.
>>>> I'll of course do SMART self-tests, but usually an error like this 
>>>> won't actually clear until you overwrite the offline sector so that 
>>>> the drive can reallocate it.  A RAID scrub/resilver/etc will 
>>>> overwrite the sector with the correct contents which will allow this to 
>>>> happen.
>>>> (Otherwise there is no way for the drive to recover - if it knew 
>>>> what was stored there it wouldn't have an error in the first place.)
>>>>
>>>> If an error comes back then I'll replace the drive.  My drives are 
>>>> pretty large at this point so I don't like keeping unreliable drives 
>>>> around.  It just increases the risk of double failures, given that a 
>>>> large hard drive can take more than a day to replace.  Write speeds 
>>>> just don't keep pace with capacities.  I do have offline backups but 
>>>> I shudder at the thought of how long one of those would take to restore.
>>>>
>>>
>>> Sadly, I don't have RAID here but to be honest, I really need to have it 
>>> given the data and my recent luck with hard drives.  Drives used to get 
>>> dumped because they were just to small to use anymore.  Nowadays, they seem 
>>> to break in some fashion long before their usefulness ends their lives. 
>>>
>>> I remounted the drives and did a backup.  For anyone running up on 
>>> this, just in case one of the files got corrupted, I used a little 
>>> trick to see if I can figure out which one may be bad if any.  I took 
>>> my rsync commands from my little script and ran them one at a time 
>>> with --dry-run added.  If a file was to be updated on the backup that 
>>> I hadn't changed or added, I was going to check into it before 
>>> updating my backups.  It could be that the backup file was still good 
>>> and the file on my drive reporting problems was bad.  In that case, I 
>>> would determine which was good and either restore it from backups or 
>>> allow it to be updated if needed.  Either way, I should have a good 
>>> file since the drive claims to have fixed the problem.  Now let us 
>>> pray.  :-D
>>>
>>> Drive isn't under warranty.  I may have to start buying new drives from 
>>> dealers.  Sometimes I find drives that are pulled from systems and have 
>>> very few hours on them.  Still, warranty may not last long.  Saves a lot of 
>>> money tho. 
>>>
>>> USPS claims drive is on the way.  Left a distribution point and should 
>>> update again when it gets close.  First said Saturday, then said Friday.  I 
>>> think Friday is about right but if the wind blows right, maybe Thursday. 
>>>
>>> I hope I have another port and power cable plug for the swap out.  At 
>>> least now, I can unmount it and swap without a lot of rebooting.  
>>> Since it's on LVM, that part is easy.  Regretfully I have experience 
>>> on that process.  :/
>>>
>>> Thanks to all. 
>>>
>>> Dale
>>>
>>> :-)  :-)
>>>
>>>
>> You can get up to 16X SATA PCI-e cards these days for pretty cheap.  So as 
>> long as you have the power to run another drive or two there's not much 
>> reason not to do RAID on the important stuff.  Also, the SATA protocol 
>> allows for port expanders, whi

RE: [gentoo-user] Hard drive error from SMART

2022-04-12 Thread Laurence Perkins


> -Original Message-
> From: Frank Steinmetzger  
> Sent: Tuesday, April 12, 2022 10:39 AM
> To: gentoo-user@lists.gentoo.org
> Subject: Re: [gentoo-user] Hard drive error from SMART
> 
> 
> I actually developed a tool for that. It creates and checks md5 checksums 
> recursively and *per directory*. Whenever I copy stuff from somewhere, like a 
> music album, I do an immediate md5 run on that directory. And when I later 
> copy that stuff around, I simply run the tool again on the copy (after the FS 
> cache was flushed, for example by unmounting and remounting) to see whether 
> the checksums are still valid.
> 
> You can find it on github: https://github.com/felf/dh It’s a single-file 
> python application, because I couldn’t be bothered with the myriad ways of 
> creating a python package. ;-)
> 
> --
> Grüße | Greetings | Salut | Qapla’
> Please do not share anything from, with or about me on any social network.
> 
> A horse comes into a bar.
> Barkeep: “Hey!”
> Horse:   “Sure.”
> 
There's also app-crypt/md5deep

Does a number of hashes, is threaded, has options for piecewise hashing and a 
matching mode for using the hashes to find duplicates.  Also a number of input 
and output filters for those cases where you don't want to hash everything.

Also can output a number of formats, but reformatting is generally trivial.
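
A typical round trip looks roughly like this (paths are placeholders):

# hash the originals once
md5deep -r /data/music > music.md5

# later, list anything in the copy whose hash doesn't appear in the saved set
md5deep -r -x music.md5 /backup/music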

LMP


RE: [gentoo-user] Hard drive error from SMART

2022-04-12 Thread Laurence Perkins
> -Original Message-
> From: Dale  
> Sent: Tuesday, April 12, 2022 10:08 AM
> To: gentoo-user@lists.gentoo.org
> Subject: Re: [gentoo-user] Hard drive error from SMART
> 
> Rich Freeman wrote:
> > On Mon, Apr 11, 2022 at 9:27 PM Dale  wrote:
> >> Thoughts.  Replace as soon as drive arrives or wait and see?
> >>
> > So, first of all just about all my hard drives are in a RAID at this 
> > point, so I have a higher tolerance for issues.
> >
> > If a drive is under warranty I'll usually try to see if they will RMA 
> > it.  More often than not they will, and in that case there is really 
> > no reason not to.  I'll do advance shipping and replace the drive 
> > before sending the old one back so that I mostly have redundancy the 
> > whole time.
> >
> > If it isn't under warranty then I'll scrub it and see what happens.
> > I'll of course do SMART self-tests, but usually an error like this 
> > won't actually clear until you overwrite the offline sector so that 
> > the drive can reallocate it.  A RAID scrub/resilver/etc will overwrite 
> > the sector with the correct contents which will allow this to happen.
> > (Otherwise there is no way for the drive to recover - if it knew what 
> > was stored there it wouldn't have an error in the first place.)
> >
> > If an error comes back then I'll replace the drive.  My drives are 
> > pretty large at this point so I don't like keeping unreliable drives 
> > around.  It just increases the risk of double failures, given that a 
> > large hard drive can take more than a day to replace.  Write speeds 
> > just don't keep pace with capacities.  I do have offline backups but I 
> > shudder at the thought of how long one of those would take to restore.
> >
> 
> 
> Sadly, I don't have RAID here but to be honest, I really need to have it 
> given the data and my recent luck with hard drives.  Drives used to get 
> dumped because they were just to small to use anymore.  Nowadays, they seem 
> to break in some fashion long before their usefulness ends their lives. 
> 
> I remounted the drives and did a backup.  For anyone running up on this, just 
> in case one of the files got corrupted, I used a little trick to see if I can 
> figure out which one may be bad if any.  I took my rsync commands from my 
> little script and ran them one at a time with --dry-run added.  If a file was 
> to be updated on the backup that I hadn't changed or added, I was going to 
> check into it before updating my backups.  It could be that the backup file 
> was still good and the file on my drive reporting problems was bad.  In that 
> case, I would determine which was good and either restore it from backups or 
> allow it to be updated if needed.  Either way, I should have a good file 
> since the drive claims to have fixed the problem.  Now let us pray.  :-D 
> 
> Drive isn't under warranty.  I may have to start buying new drives from 
> dealers.  Sometimes I find drives that are pulled from systems and have very 
> few hours on them.  Still, warranty may not last long.  Saves a lot of money 
> tho. 
> 
> USPS claims drive is on the way.  Left a distribution point and should update 
> again when it gets close.  First said Saturday, then said Friday.  I think 
> Friday is about right but if the wind blows right, maybe Thursday. 
> 
> I hope I have another port and power cable plug for the swap out.  At least 
> now, I can unmount it and swap without a lot of rebooting.  Since it's on 
> LVM, that part is easy.  Regretfully I have experience on that process.  :/
> 
> Thanks to all. 
> 
> Dale
> 
> :-)  :-) 
> 
> 
You can get SATA PCI-e cards with up to 16 ports these days for pretty cheap.  So as long 
as you have the power to run another drive or two there's not much reason not 
to do RAID on the important stuff.  Also, the SATA protocol allows for port 
expanders, which are also pretty cheap.  

One of my favorite things about BTRFS is the data checksums.  If the drive 
returns garbage, it turns into a read error.  Also, if you can't do real RAID, 
but have excess space you can tell it to keep two copies of everything.  
Doesn't help with total drive failure, but does protect against the occasional 
failed sector.  If you don't mind writes taking twice as long anyway.
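
Turning that on for an existing single-drive filesystem is roughly a one-liner 
(mount point is a placeholder; dup for data wants a reasonably recent kernel):

btrfs balance start -dconvert=dup -mconvert=dup /mountpoint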

LMP


RE: [gentoo-user] Hard drive error from SMART

2022-04-12 Thread Laurence Perkins


> -Original Message-
> From: Dale  
> Sent: Monday, April 11, 2022 6:28 PM
> To: gentoo-user@lists.gentoo.org
> Subject: [gentoo-user] Hard drive error from SMART
> 
> Given the low number and it showing it corrected that error, and then passed 
> a short and long test, is this drive "safe enough" to keep in service?  I 
> have backups just in case but just curious what others know from experience.  
> At least this isn't one of those nasty messages that the drive will die 
> within 24 hours.  I got one of those ages ago and it didn't miss it by much.  
> A little over 30 hours or so later, it was a door stop.  It would spin but it 
> couldn't even be seen by the BIOS.
> Maybe drives are getting better and SMART is getting better as well.
> 
> Thoughts.  Replace as soon as drive arrives or wait and see?
> 
> Dale
> 
> :-)  :-)
> 
When it's just one or two errors like that and they don't keep going up I tend 
to treat it as an isolated incident, but the drive still goes into the pool I 
use with RAID just in case.
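
The counters worth polling to see whether they keep going up are easy enough to 
grab (device name is an example):

smartctl -A /dev/sda | grep -i -E 'reallocated|pending|uncorrect'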

Preferably a setup where you can lose more than one disk without losing the 
data.

Note that, depending on where the bad sector is, when it gets remapped the 
extra seek necessary to read that logical address could slow the drive down 
substantially.  Make sure your filesystem's root inode or something doesn't end 
up on top of it.

Sometimes I miss the old drives where all this was handled by the OS and so you 
knew exactly what sector was bad and your filesystem could be told to just not 
use it.  Made scanning for bad sectors more annoying, but deciding how bad the 
drive was rather easier.

LMP


RE: [gentoo-user] LVM and moving things around

2022-04-05 Thread Laurence Perkins

> -Original Message-
> From: Rich Freeman  
> Sent: Tuesday, April 5, 2022 11:59 AM
> To: gentoo-user@lists.gentoo.org
> Subject: Re: [gentoo-user] LVM and moving things around
> 
> On Tue, Apr 5, 2022 at 11:10 AM Wols Lists  wrote:
> >
> > I don't know how you take advantage of it, but linux by default caches 
> > disk i/o. You can tell it to "don't cache" and apparently it makes a 
> > major difference. Given that rsync reads once and then never uses it 
> > again, you don't want it cached.
> >
> 
> I suggest reading:
> man  posix_fadvise
> https://insights.oetiker.ch/linux/fadvise/
> http://rdiez.shoutwiki.com/wiki/The_Linux_Filesystem_Cache_is_Braindead
> https://lwn.net/Articles/806980/
> https://bugzilla.samba.org/show_bug.cgi?id=9560
> 
> There might be something more recent, but my overall impression is that this 
> problem is less solved than it probably ought to be.
> 
> --
> Rich
> 
I remember seeing something more with regard to "please cache/please 
discard/this can be immediately swapped because I won't need it for a while" 
stuff for like the 5.15 kernel or something, but the actual system programs 
will have to be updated to use it before it'll make any difference.

LMP


RE: [SUSPECTED SPAM] Re: [gentoo-user] Choose a wireless access point

2022-04-05 Thread Laurence Perkins
> -Original Message-
> From: William Kenworthy  
> Sent: Monday, April 4, 2022 8:05 PM
> To: gentoo-user@lists.gentoo.org
> Subject: [SUSPECTED SPAM] Re: [gentoo-user] Choose a wireless access point
> 
> 
> On 4/4/22 23:12, Jack wrote:
> > On 4/4/22 01:31, William Kenworthy wrote:
> >> Is there a way force openrc and wpa_supplicant to map a particular 
> >> access point to an interface or fail?
> >>
> >> I have two AP's (each on a different ssid) to connect to so have two 
> >> wifi interfaces - unfortunately they are not equal so I want wlan0 to 
> >> connect to only one particular AP, and wlan1 to the other ... reliably!
> >> I can manually force it to connect but invariably at the first glitch 
> >> they both end up connected to the same AP (usually the strongest 
> >> which is often not what I want :(
> >>
> >> BillK
> >
> > I don't know about wpa-supplicant, but I'm using open-rc and KDE, and 
> > KDE's systemsettings Network / Connections screen lets you restrict a 
> > network connection to a specific device.  Not sure if this helps you 
> > any, but it would indicate that what you want is possible.
> >
> > Jack
> >
> Hi Jack, unfortunately it's a headless, wifi-only system, which is why getting 
> openrc to behave is important!
> 
> BillK

The bit where specifying the SSID in conf.d/net doesn't work sounds like a bug 
to me, but one that may take a while to be fixed since I'm not sure how many 
people use netifrc with wireless.

If you're open to experimenting, NetworkManager will let you specify that 
connections may only be used with specific adapters.  While normally considered 
a GUI tool it does have nmcli and nmtui for configuring it on headless systems.
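
Pinning each connection to its adapter is one property per connection, roughly 
(connection names are whatever yours are called):

nmcli connection modify "AP-one" connection.interface-name wlan0
nmcli connection modify "AP-two" connection.interface-name wlan1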

LMP




RE: [gentoo-user] LVM and moving things around

2022-04-05 Thread Laurence Perkins

>-Original Message-
>From: Dale  
>Sent: Monday, April 4, 2022 4:37 PM
>To: gentoo-user@lists.gentoo.org
>Subject: Re: [gentoo-user] LVM and moving things around
>
>
>One thing that annoys me is when it tries to use swap.  I don't want to disable it 
>because on occasion Firefox goes nuts and starting hogging memory really bad.  
>I have swappiness set to like 5 or something which means it shouldn't use it 
>but when using rsync, it creeps some in.  When it does, that results in some 
>slowness.  I have a little script thing that clears all that but still, I may 
>set it to 3 or maybe 2 for a bit. Me ponders the thought. 
>
>I'm making progress.  Feel sorry for those hard drives tho. ;-)
>
>Dale
>
>:-)  :-) 
>
>

I'm told that you can use cgroups for dealing with that kind of thing such 
that, for example, only Firefox is allowed to be swapped.  I haven't had time 
to dig into it, but it seems like a useful tool.

Also, the compressed swap and zram swap devices with backing stores offer a 
fairly significant boost to the speed of swap so long as the data being swapped 
is compressible.
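
If you want to play with the zram route, here's a rough sketch (device names 
and sizes are placeholders; the kernel needs CONFIG_ZRAM, plus 
CONFIG_ZRAM_WRITEBACK for the backing store):

modprobe zram num_devices=1
echo /dev/sdb2 > /sys/block/zram0/backing_dev    # must be set before disksize
echo zstd > /sys/block/zram0/comp_algorithm      # if zstd is built in; lzo-rle otherwise
echo 8G > /sys/block/zram0/disksize
mkswap /dev/zram0
swapon -p 100 /dev/zram0                         # prefer it over the plain disk swap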

LMP


RE: [gentoo-user] LVM and moving things around

2022-04-04 Thread Laurence Perkins
> -Original Message-
> From: Dale  
> Sent: Monday, April 4, 2022 5:42 AM
> To: gentoo-user@lists.gentoo.org
> Subject: Re: [gentoo-user] LVM and moving things around
> 
> Bill Kenworthy wrote:
> > Rsync has a bwlimit argument which helps here. Note that rsync copies 
> > the whole file on what it considers local storage (which can be 
> > mounted network shares) ... this can cause a real slowdown.
> > BillK
> >
> >
> 
> I ended up just letting it do its thing.  I didn't want to slow it down by 
> much, just make my desktop able to respond better.  I used nice and ionice to 
> do this with emerge and it works great.  I just thought I was missing some 
> option for that command that google didn't help with.  I went and helped my 
> sis-n-law with some garden stuff.  That helped.  ;-)
> 
> As it stands now, I've copied enough over to get a free 8TB drive.  I set up 
> LUKS, which includes LVM, on the drive and am copying some more files onto 
> the newly encrypted drive.  Once everything is transferred, I'll then see if 
> I need the other drive added or not.  I may not at the moment.  Of course, 
> once fiber internet gets here, that may change pretty soon. 
> 
> If someone is really knowledgeable about LVM and LUKS and how to set up a 
> encrypted hard drive, not a whole install but just a data drive, a howto for 
> this would be really nice.  I had to use a LUKS howto and a LVM howto and 
> sort of merge commands until I figured out how to get the two together.  Even 
> tho I got it working, I'm still not real clear on how one part of it works.  
> I'm just not clear enough on it to write one myself.  A Gentoo wiki would be 
> nice.  There's one for the two separately but not together.  One posted 
> anywhere google can find it would be great tho.
> 
> Now to find something to do while rsync copies over some 6TBs of files. O_O
> 
> Dale
> 
> :-)  :-) 
> 
A little late to the party, but the other setting to look at when you're doing 
this kind of thing is "sysctl vm.dirty_ratio".
Basically it's the percentage of memory that's allowed to hold dirty (not yet 
written back) pages before the kernel forces processes doing writes to flush 
synchronously.

Setting it higher lets the system keep more data up-in-the-air while you're 
shuffling lots of stuff around.  Of course, it also risks losing more if the 
system crashes in the middle of it all, so use it judiciously.

Setting dirty_ratio, dirty_background_ratio, and dirty_writeback_centisecs 
appropriately when doing things with large amounts of data can significantly 
improve system responsiveness and, with rotational drives, throughput.
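
Something along these lines, for example (the numbers are only a starting 
point; defaults vary between kernels):

sysctl -w vm.dirty_background_ratio=5      # start background writeback early
sysctl -w vm.dirty_ratio=40                # let writers dirty more memory before they block
sysctl -w vm.dirty_writeback_centisecs=500 # how often the flusher threads wake up

# check what's currently in effect
sysctl vm.dirty_ratio vm.dirty_background_ratio vm.dirty_writeback_centisecs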

LMP


RE: [gentoo-user] Sound not sounding

2022-03-24 Thread Laurence Perkins


> -Original Message-
> From: Mark Knecht  
> Sent: Wednesday, March 23, 2022 3:14 PM
> To: Gentoo User 
> Subject: Re: [gentoo-user] Sound not sounding
> 
> On Wed, Mar 23, 2022 at 3:05 PM Wol  wrote:
> >
> > On 23/03/2022 20:31, Mark Knecht wrote:
> 
>  Yeah, ain't that the truth! I'd likely be a Gentoo user if anyone 
> built and maintained a 'standard', stable distro, something delivered in 
> primarily binary files. I just lost patience dealing with all the issues you 
> guys deal with daily.
> 

I believe that's called Sabayon.  Standard provided configurations and 
supported apps with a binary package server, just as if it were a binary 
distro, but if you really want to customize something it's still Gentoo under 
the fancy GUI.

I've not actually used the distro itself, but their overlay has been quite 
handy in the past.

ChromeOS is supposedly similar, at least it was at one point, but I don't trust 
them any farther than I can throw them, so I didn't investigate much and just 
ripped all their stuff out of my Chromebook and replaced it with real Gentoo.

LMP


RE: [gentoo-user] How to run X11 apps remotely?

2022-03-22 Thread Laurence Perkins
>-Original Message-
>From: Grant Edwards  
>Sent: Tuesday, March 22, 2022 9:42 AM
>To: gentoo-user@lists.gentoo.org
>Subject: [gentoo-user] How to run X11 apps remotely?
>
>How does one run "modern" X11 apps remotely?
>
>Using ssh -X or ssh -Y works fine for older applications, but not for things 
>that use "modern" toolkits. Modern tookit designers appear to have adopted a 
>life mission to maximize the number of client-server round-trips required for 
>even a trivial event like a keystroke in a text box.
>
>As a result, even with a 5-10Mbps remote connection, it takes several minutes 
>to enter a string of even a few characters. A mouseclick on a button can take 
>a minute or two to get processed. Resizing a window pretty much means it's 
>time for a cuppa.
>
>Opening chrome and loading a web page can take 10-15 minutes. No activity at 
>all on the screen, but the network connection to the remote machine is 
>saturated at 5Mbps for minutes at a time. WTF?
>
>Something like LibreOffice is completely unusable.
>
>Even something "lightweight" like atril is so slow it's barely usable.
>
>I do not want a "remote desktop". I just want to run a single application on a 
>remote machine and have its window show up locally.
>
>Back in the day, I used to run X11 apps remotely through dial-up connections, 
>and most of them were a little sluggish but still actually usable...
>
>X11 transparent network support was its killer feature, but for all practical 
>purposes, that feature seems to have been killed.
>
>--
>Grant
>

As you mentioned, it's a lot of extra round-trips, which means the limiting 
factor isn't primarily your bandwidth, it's the latency.

Unfortunately, the speed of light being what it is, there are practical limits 
to what you can do about latency depending on how far apart the systems in 
question are.

But do check for and mitigate any bufferbloat issues you may have; those will 
spike your latency quite a bit.
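
A quick-and-dirty way to check and mitigate, with placeholder host and 
interface names:

# watch latency while something else saturates the link;
# a big jump under load usually means bufferbloat
ping -c 30 your.gateway.or.remote.host

# fq_codel on the uplink usually tames the worst of it
tc qdisc replace dev eth0 root fq_codel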

The key back in the day was that people used X11 primitives directly.  But the 
X11 primitives are ugly, and there weren't any tools for making them pretty.  
So rather than add those mechanisms all the toolkit authors just did their own 
thing and now everything is just bitmaps and practically no processing can be 
done locally.

Some programs like gVim will detect that they're running over SSH and fall back 
to basic X11 primitives for speed.  Not sure which browsers, if any, do that.

LMP



RE: [gentoo-user] KDE, sddm etc security. Plus LVM question.

2022-03-22 Thread Laurence Perkins
> -Original Message-
> From: Rich Freeman  
> Sent: Monday, March 21, 2022 5:21 PM
> To: gentoo-user@lists.gentoo.org
> Subject: Re: [gentoo-user] KDE, sddm etc security. Plus LVM question.
> 
> On Mon, Mar 21, 2022 at 8:03 PM Laurence Perkins  wrote:
> >
> > The TPM in most computers doesn't dump the keys if someone tries to open 
> > the case to install hardware sniffers.
> >
> 
> That's a good point, though if somebody with the ability to sniff the RAM or 
> (to a lesser degree) GPU traffic is after you, then you probably want to be 
> on the lookout for rubber hose decryption.
> 
> If you're a big spender the AMD Secure Memory Encryption feature would 
> probably help there, assuming they ever get it working on Linux.
> 
> --
> Rich
> 
> 

There are a lot of circumstances where the attacker doesn't want you to know 
you've been compromised.  At least not right away.  Tamper detection can be 
useful for avoiding that.

Along similar lines there's a kernel module available, the project name slips 
my mind, which can be set to wipe keys, memory, etc. if the system is booted 
without a particular USB device attached, or if the device is removed.  Gives 
one a way to quickly "decommission" a system.

LMP


RE: [gentoo-user] KDE, sddm etc security. Plus LVM question.

2022-03-21 Thread Laurence Perkins
>-Original Message-
>From: Rich Freeman  
>Sent: Monday, March 21, 2022 1:08 PM
>To: gentoo-user@lists.gentoo.org
>Subject: Re: [gentoo-user] KDE, sddm etc security. Plus LVM question.
>
>On Mon, Mar 21, 2022 at 2:30 PM Laurence Perkins  wrote:
>>
>> Having it remain unlocked and capable of rebooting unless the accelerometer 
>> showed movement I think was an option since the TPM kept monitoring even if 
>> the mains power was interrupted.
>>
>
>Yeah, there might still be ways to accomplish it with features like this.
>
>>
>> Could probably do something similar these days with one of those $3 bluepill 
>> boards and one of those new 3d printers capable of embedding metal though.
>
>Or you could just use the TPM that is probably already in your computer...  :)
>
>-- 
>Rich
>
>
The TPM in most computers doesn't dump the keys if someone tries to open the 
case to install hardware sniffers.  

Not that it isn't sufficient for the average person's security needs, but it 
definitely still has a lot of attack vectors.

A case with an active mesh and a customizable TPM module with extra sensors and 
hardware overrides, one that could take standard-sized desktop components, might 
be useful for small-business applications where there's sensitive data but no 
budget for a whole secured data center.

LMP


RE: [gentoo-user] KDE, sddm etc security. Plus LVM question.

2022-03-21 Thread Laurence Perkins
>-Original Message-
>From: Rich Freeman  
>Sent: Monday, March 21, 2022 11:07 AM
>To: gentoo-user@lists.gentoo.org
>Subject: Re: [gentoo-user] KDE, sddm etc security. Plus LVM question.
>
>On Mon, Mar 21, 2022 at 12:17 PM Laurence Perkins  wrote:
>>
>> There was the ORWL project a few years ago.  Self-encrypting SSD drive with 
>> a TPM that would unlock it only in the presence of an encrypted RFID tag 
>> plus tapping in a code on the keypad, with all the sensitive bits wrapped in 
>> an active mesh system that would destroy the data if it detected any 
>> tampering.
>
>While I can see this being useful if for some reason you don't have support 
>for encryption on the software side, something like this seems like it 
>wouldn't actually solve the unattended boot problem, since you have to enter a 
>PIN.  If you don't require the PIN and leave the RFID tag sitting next to the 
>drive all the time, then anybody can walk in and take the drive and the tag 
>and then read the data off the drive bypassing the OS.  So it offers at best 
>the same protection as a LUKS passphrase entered at boot, and at worst no 
>protection at all.  It would have the advantage that you wouldn't be able to 
>attack the passphrase itself as no doubt the PIN only offers limited attempts 
>and would be very difficult to bypass.
>
>The advantage of the TPM in the computer is that you can do unattended 
>verified boot, so the disk can only be decrypted if the OS boots normally 
>without tampering.  Obviously you're still open to OS vulnerabilities, but the 
>drive itself cannot be accessed except via the OS.  The TPM chip can actually 
>supervise the boot process.
>
>Still an interesting product though.  I could see it being useful if you had 
>to run some specific OS that doesn't support disk encryption natively.
>
>--
>Rich
>
>

As I recall there were several security modes related to the battery-backed, 
ARM-based TPM (which also had source code available, so it could probably do 
anything you wanted).

I think having it remain unlocked and capable of rebooting unless the 
accelerometer showed movement was an option, since the TPM kept monitoring even 
if the mains power was interrupted.  

And there are still the standard BIOS-level passwords and Secure Boot stuff to 
keep someone from switching the OS, as long as you use your own keys and remove 
the Microsoft one, anyway.

Any attempt to open the case would disrupt the active mesh and cause it to dump 
the keys, so that takes care of attacking the drive itself or the system memory.

The main problem was that it was just too expensive.  You could put a standard 
computer in a safe for far less, and the group of people who need paranoid 
security on the move just wasn't big enough to support ongoing updates to the 
design.

Could probably do something similar these days with one of those $3 bluepill 
boards and one of those new 3d printers capable of embedding metal though.

LMP


RE: [gentoo-user] KDE, sddm etc security. Plus LVM question.

2022-03-21 Thread Laurence Perkins
>>
>>
>>-Original Message-
>>From: Rich Freeman  
>>Sent: Sunday, March 20, 2022 11:03 AM
>>To: gentoo-user@lists.gentoo.org
>>Subject: Re: [gentoo-user] KDE, sddm etc security. Plus LVM question.
>>
>>On Sun, Mar 20, 2022 at 12:05 PM Daniel Frey  wrote:
>>>
>>> They don't even need to defeat a password. If they have root, it's 
>>> trivial to unlock a locked session without knowing the password - just FYI.
>>> ...
>>> The screen locks in linux are security by obscurity, if something is 
>>> that sensitive, don't stay logged in all the time.
>>
>>If somebody has root access to your box, then they are going to be able to 
>>get at your data.  They don't have to unlock your session to do it - they 
>>have access to the memory of all your processes, everything on disk, and so 
>>on.  If you're using encryption at the account level and it is 
>>well-implemented then root probably can't get at your data while you aren't 
>>logged in, but they certainly can get it the next time you log in.
>>
>>It is true though that linux screensavers are often not well-implemented.  
>>Honestly, I'm not sure if any of them are - it seems to be more of an 
>>afterthought in the design layered on top.  I haven't made a study of them, 
>>so maybe there are some which are, but something like this really needs to be 
>>designed into the system to be secure, and some of that needs to be treated 
>>as security-critical code.
>>
>>Now, if you want to make an argument for leaving systems powered down except 
>>when needed if they contain sensitive data that would certainly reduce the 
>>opportunity for intrusion, but you still need the OS to keep people from 
>>gaining root in the first place.
>>
>>As others have mentioned at the start of the thread, if you're concerned with 
>>physical security then full disk encryption (or at least encryption of data 
>>combined with airtight authentication of the
>>OS) has to be part of the solution.  In 99% of linux-based solutions that 
>>requires entering a password at boot.  In theory the linux kernel has support 
>>for TPM verified boot, so you could implement something like Bitlocker/etc on 
>>Linux, but I'm not aware of any distros that have done so (unless you want to 
>>count something like ChromeOS).  For a desktop system a boot password isn't 
>>as much of a problem, but if you want an unattended server to be able to boot 
>>on power restoration then a TPM-based solution would be better.  It certainly 
>>is prettier on the desktop, and allows for more recovery options, which is 
>>why just about all corporate laptops I've seen do it this way.  Of course 
>>without a boot password you're only as secure as your OS, as any attacker can 
>>still boot the OS and attack it while it is running, which they can't do if 
>>the disk requires a password to decrypt it.
>>
>>If you're running Windows on a system with a TPM the simplest solution to all 
>>this stuff is to turn on Bitlocker, though this is not available on the Home 
>>edition of Win10.
>>
>>--
>>Rich
>>
>>

There was the ORWL project a few years ago.  Self-encrypting SSD drive with a 
TPM that would unlock it only in the presence of an encrypted RFID tag plus 
tapping in a code on the keypad, with all the sensitive bits wrapped in an 
active mesh system that would destroy the data if it detected any tampering.

Plus it could be set to lock down if the accelerometer went off, or after 
inactivity, disabling whichever set of external ports you had configured.

They don't seem to be made any more, which is a pity, but the design was 
getting to be a bit dated.  Schematics are theoretically still available 
somewhere if you want to build one.

LMP


RE: [gentoo-user] Any way to run multiple commands from single script in parallel?

2022-03-14 Thread Laurence Perkins
If you don't want to do thread management yourself in bash then you can use 
something like GNU Parallel (in the repo) to handle forking and collating 
processes for you.

Parallel in particular has the additional advantage that it's capable of 
shipping tasks off to other machines via SSH, so if you get to the point where 
you need a whole cluster to do your processing it's just a matter of adding a 
couple of arguments.

LMP

-Original Message-
From: Ramon Fischer  
Sent: Monday, March 14, 2022 3:37 AM
To: gentoo-user@lists.gentoo.org
Subject: Re: [gentoo-user] Any way to run multiple commands from single script 
in parallel?

Hello Joost,

I suppose that you are talking about Bash scripts.

If so, you may put each individual command in a subshell by using an ampersand 
("&") at the end of the line.

This example[1] shows it nicely.
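
Something like this, for instance (the command names are placeholders):

command_one &
command_two &
command_three &
wait    # block until all background jobs have finished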

-Ramon

[1] 3. Parallelize running commands by grabbing PIDs.: 
https://will-keleher.com/posts/5-Useful-Bash-Patterns.html

On 14/03/2022 11:13, J. Roeleveld wrote:
> Hi,
>
> I often put multiple commands into a single file/script to be run in sequence.
> (each line can be executed individually, there is no dependency)
>
> Is there a tool/method to execute multiple lines/commands 
> simultaneously? Like having 3 or 4 run together and when 1 is 
> finished, it will grab the next one in the list?
>
> I would prefer this over simply splitting the file as the different 
> lines/ commands will not take the same amount of time.
>
> Thanks,
>
> Joost
>
>
>

--
GPG public key: 5983 98DA 5F4D A464 38FD CF87 155B E264 13E6 99BF



RE: [gentoo-user] Re: Root can't write to files owned by others?

2022-03-11 Thread Laurence Perkins
>-Original Message-
>From: Neil Bothwick  
>Sent: Friday, March 11, 2022 6:59 AM
>To: gentoo-user@lists.gentoo.org
>Subject: Re: [gentoo-user] Re: Root can't write to files owned by others?
>
>On Fri, 11 Mar 2022 12:38:48 +0100, Dr Rainer Woitok wrote:
>
>> No.   My "/tmp/" directory  is not mounted at all,  it is just a genuine
>> directory in "/".   And that root CAN overwrite a file it doesn't own in
>> other directories, is due to most directories  not having the sticky 
>> bit set  (which is a  (wanted) particularity  of "/tmp/" and 
>> "/var/tmp/", in that it prevents normal users from (re)moving other people's 
>> files):
>
>It's not the sticky bit per se from what I've read, but the new default 
>prevents root from overwriting a file if the file and the directory containing 
>it have different owners. In most cases, the file has the same directory as 
>the owner so this does not happen, but the sticky bit allows users that don't 
>own the directory to create files in it.
>
I was just looking at the patch.  In at least one of its modes I think it's 
specifically looking for the sticky bit on the directory.  I didn't think to 
check which specific mode the new default setting ends up enabling.
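
For reference, the knob itself takes three values, and checking or setting it 
is just:

sysctl fs.protected_regular              # 0 = off, 1 = restrict in world-writable sticky dirs
sysctl -w fs.protected_regular=2         # 2 also covers group-writable sticky dirs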

LMP



RE: [gentoo-user] Re: Root can't write to files owned by others?

2022-03-10 Thread Laurence Perkins
>
>
>-Original Message-
>From: Dr Rainer Woitok  
>Sent: Thursday, March 10, 2022 9:51 AM
>To: gentoo-user@lists.gentoo.org; Nikos Chantziaras 
>Subject: [gentoo-user] Re: Root can't write to files owned by others?
>
>Nikos,
>
>On Thursday, 2022-03-10 12:21:36 +0200, you wrote:
>
>> ...
>> Are you sure that:
>> 
>> sysctl fs.protected_regular=0
>> 
>> does not help? I can reproduce it here on my system with kernel 
>> 5.15.27, and setting that sysctl to 0 fixes it immediately.
>
>No,  I'm not at all sure.   Since you mentioned  in your first mail that
>this is normal  when using  "systemd",  I did not pursue  this route any 
>further, because I'm using "openrc".
>
>I'll search the web for "fs.protected_regular"  to get a feeling for the 
>consequences and then perhaps set this when I'll again boot kernel version 
>5.15.26.
>
>Thanks for being persistent :-)
>
>Sincerely,
>  Rainer
>
>

Basically the idea is to keep other users from being able to trick root into 
writing sensitive data to something they control.
It's a "systemd thing" because, apparently, the systemd developers decided to 
have systemd enable it instead of leaving it in the bailiwick of the distros' 
configurations.
But if the default setting changed in a later kernel as well, that would 
potentially affect everyone, so a quick check of what it's set to wouldn't be 
amiss.
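
A quick way to check, and to pin an explicit choice so a changed default (from 
the kernel or from systemd) can't surprise you later:

sysctl fs.protected_regular                                      # what's in effect now
echo 'fs.protected_regular = 1' >> /etc/sysctl.d/99-local.conf   # or /etc/sysctl.conf
sysctl --system                                                  # reload all sysctl config files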

LMP



RE: [gentoo-user] Re: Root can't write to files owned by others?

2022-03-10 Thread Laurence Perkins
>On 09/03/2022 20:28, Dr Rainer Woitok wrote:
>> until recently my system behaves sort of strangely:
>> 
>> $ echo x | sudo tee /tmp/file
>> Password:
>> tee: /tmp/file: Permission denied
>> [...]
>> 
>> Since when can't root write to files  it doesn't own?   And not even, if
>> the file has write permission for everybody?
>
>This is normal, at least when using systemd. To disable this behavior, you 
>have to set:
>
>   sysctl fs.protected_regular=0
>
>But you should know what this means when it comes to security. See:
>
>   https://www.spinics.net/lists/fedora-devel/msg252452.html
>
>

And they chose to have systemd set that instead of putting it in sysctl.conf or 
the default kernel settings where it belongs?  Good grief!  

I guess if you're going to use systemd you need to subscribe to the Fedora 
mailing lists so you get at least a little notice before they break things.

LMP


RE: [gentoo-user] planning a new machine : comments welcome

2022-03-04 Thread Laurence Perkins
>
>
>-Original Message-
>From: Wols Lists  
>Sent: Friday, March 4, 2022 1:51 PM
>To: gentoo-user@lists.gentoo.org
>Subject: Re: [gentoo-user] planning a new machine : comments welcome
>
>On 04/03/2022 21:18, Laurence Perkins wrote:
>> I’d probably wait another year for all the bugs to work their way out 
>> of the system before trying it on purpose, but if you have 
>> non-critical systems to play with, well, the SMR drives are rather a 
>> lot cheaper…  I got a few for one of my server chassis because they 
>> can cram 5TB of storage into a 2.5” drive that fits in the bay for only $120.
>
>But are they?
>
>Okay, this is a 3.5", but I paid £180 I think for 8TB in a decent CMR package 
>(Toshiba N300). It was a backup drive, I was open to SMR, but didn't think 
>they were value for money.
>
>Cheers,
>Wol
>
>

3.5" gives a lot more platter area. Pi-r-squared and all that.  I couldn't find 
anything larger than 1TB CMR in a 2.5" package from anyone, and those were old 
stock.  Seagate doesn't make *any* CMR 2.5" anymore, not even the 500GB ones.
Even flash devices are going zoned it seems.  Simplifies the controller and 
lets you cram more storage onto the chips.

Unfortunately the chassis in question only takes 2.5" or smaller.  Otherwise I 
would definitely have preferred some 3.5", 12TB CMR surveillance-grade drives.

LMP


RE: [gentoo-user] planning a new machine : comments welcome

2022-03-04 Thread Laurence Perkins
There’s nothing actually wrong conceptually with SMR drives in RAID.  The write 
order used by the RAID system simply needs to be appropriate for such a drive.  
The early SMR drives tried to hide what they were, and simply didn’t have 
sufficient cache area for non-sequential workloads in any volume.  This is 
frankly a QA failure on the part of WD, Seagate, and Toshiba.  They assumed 
that most RAID systems were for bulk storage, SMR drives are great for getting 
more bulk storage in a smaller space, why wouldn’t that be awesome?  They 
obviously never did more than cursory testing.  The fact that some of these 
drive-managed units also didn’t actually succeed at hiding what they were and 
would return strange errors under certain circumstances probably didn’t help 
things.  Basically alpha technology pushed out in secret and it failed to 
perform.  There’s been at least one lawsuit about it along false advertising 
lines.

However, over the last year, there has been a lot of work done on updating 
filesystems, raid controllers, kernels, and everything else to handle zoned 
storage, I’ve seen firmware updates going past for a lot of hardware RAID 
controllers, along with updates for mdraid, BTRFS, ZFS, etc.  So as long as you 
get drives that are either host-aware or host-managed and have the very latest 
software for your setup they should no longer crash and burn on a RAID rebuild.

I’d probably wait another year for all the bugs to work their way out of the 
system before trying it on purpose, but if you have non-critical systems to 
play with, well, the SMR drives are rather a lot cheaper…  I got a few for one 
of my server chassis because they can cram 5TB of storage into a 2.5” drive 
that fits in the bay for only $120.
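
For anyone wanting to check what they've actually got, the kernel reports the 
zone model directly (device name is a placeholder):

lsblk -o NAME,SIZE,ZONED                # "none", "host-aware" or "host-managed"
cat /sys/block/sdb/queue/zoned          # same answer for a single drive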

LMP

From: Julien Roy 
Sent: Thursday, March 3, 2022 2:55 PM
To: Gentoo User 
Subject: Re: [gentoo-user] planning a new machine : comments welcome

Even on WD's official page for WD Reds, they are still advertised for RAID: 
https://www.westerndigital.com/products/internal-drives/wd-red-sata-hdd#WD20EFAX

For instance :
●  Reliability: The always-on environment of a NAS or RAID is a hot one, and 
desktop drives aren’t typically designed and tested under those conditions like 
WD Red™ is.
●  Error recovery controls: WD Red™ NAS hard drives are specifically designed 
with RAID error recovery control to help reduce failures within the NAS system.

Or:

Designed with SMR technology for workloads associated with personal and home 
office, such as storing, archiving and sharing, in RAID-optimized NAS systems 
with up to 8 bays.
This isn't just NewEgg or Amazon putting outdated information on the product 
page, it is WD still advertising today that the SMR WD Reds are appropriate for 
RAIDs, while they are, in fact, not.

Regarding the blogpost: I don't think a blogpost is the appropriate place for a 
company to correct information about a defective product, when the advertising 
for said product remains uncorrected.
Nevertheless, even in their blogpost, they aren't being entirely honest, since 
they still claim that WD Reds are appropriate for RAIDs:

"In a RAID rebuild scenario using a typical Synology or QNAP (non-ZFS) 
platform, WD Red DMSMR drives perform as well as CMR drives or show slightly 
longer RAID rebuild times, depending on the condition of the drive and extent 
of rebuild required."
They double down on this further in the blogpost.

From some quick research, this is only true in a RAID1 array. In any other type 
of array, it is sometimes impossible to expand or rebuild the array with SMR 
drives.
See: https://raid.wiki.kernel.org/index.php/Timeout_Mismatch


Julien



Mar 3, 2022, 16:27 by markkne...@gmail.com:
On Thu, Mar 3, 2022 at 2:11 PM Julien Roy <jul...@jroy.ca> wrote:

The WD Reds are still marketed as RAID compatible to this day, despite the 
fact that they are SMRs.

Julien

18 months ago WD put out this statement:

https://blog.westerndigital.com/wd-red-nas-drives/

If by 'marketed' you mean Amazon or Newegg or some other seller is telling
people that the SMR is a good RAID solution I wouldn't say that's on WD
but rather the vendor.

Just my 2 cents,
Mark



RE: [gentoo-user] planning a new machine : comments welcome

2022-03-03 Thread Laurence Perkins
> 
> 
> -Original Message-
> From: Dale  
> Sent: Wednesday, March 2, 2022 5:08 PM
> To: gentoo-user@lists.gentoo.org
> Subject: Re: [gentoo-user] planning a new machine : comments welcome
> 
> Laurence Perkins wrote:
> > With regard to SMR drives, note that there are three basic types:
> >
> > Some completely hide the fact that they are SMR.  These suck, hands down.  
> > Performance is unpredictable and random.
> > Some at least advertise that they're SMR, and expose basic counters about 
> > where they are in their maintenance cycles.  These still suck, but at least 
> > you can kind of predict when they're about to get really slow.
> >
> > The best ones actually advertise what the shingled ranges are, at which 
> > point a new enough kernel and filesystem can keep the writes to those 
> > ranges as sequential as possible, and you can use the big, cheap drives 
> > with very little performance loss.
> >
> > There are a couple articles explaining how to determine what you've got and 
> > optimize it.  I don't have my bookmarks to hand, but it was in a discussion 
> > on this list a few months ago.
> >
> > LMP
> >
> 
> That's some new info.  I tend to follow threads, even started one ages ago 
> about my hard drive doing a bumpy thing for a long time after I updated my 
> backups.  Rich plus others informed me I unknowingly bought a SMR drive.  I 
> think mine has about a 15 or 20GB CMR section.  I've noticed if my updates go 
> to about that much or more, it gets slow. Either way, it does the bumpy thing 
> for a good while after my backups are done and I've unmounted the drive.  I 
> just let it sit there until it gets done.  If I don't, it just slows down 
> faster the next time because it starts out behind on moving the files from 
> CMR to SMR and doing its rewrite thing. 
> 
> My biggest point for the OP, look at its use and pick what works as expected. 
>  I've read, and Wol seems to confirm this, that RAID and SMR do not go 
> together well.  I've read some have hosed RAID thingys when they put in a SMR 
> drive and didn't know it. 
> 
> The biggest problem I have is when they don't let us know when a drive is 
> SMR.  I don't like a company that sells me something that isn't as good 
> without telling me.  It sort of rubs me the wrong way. 
> 
> To the OP tho, research first, then buy.  Know what you getting and that it 
> will work for your needs.  As I said, for the most part, my backup drive 
> being SMR is mostly a little annoying.  It does work.  I just won't do it 
> again tho.
> 
> Dale
> 
> :-)  :-) 
> 
> 

Yeah, a lot of RAID controllers will see the bumpy performance and error out.  
Some vendors are starting to update their firmware though.
You're not alone in being upset about manufacturers trying to sneak SMR in 
without telling anyone.  The CCTV industry kind of took them to the woodshed 
over it.
BTRFS got patches for detecting and optimizing zoned drives in 2020.  
EXT4 has a format-time option for it at this point.  
F2FS and NILFS2 are both designed to write a disk end-to-end sequentially for 
wear-levelling purposes, and while they're not the fastest, they also don't see 
significant performance degradation from SMR except during schedulable 
maintenance operations.
https://zonedstorage.io/ has a good list of reference material for figuring out 
what you've got and getting it configured in the best way.
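
A couple of starting points, with a placeholder device name:

# list the zones and their write pointers (blkzone is part of util-linux)
blkzone report /dev/sdb | head

# f2fs has an explicit switch for zoned devices, if memory serves
mkfs.f2fs -m /dev/sdb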

LMP

