Re: [gentoo-user] Gentoo on a cell?

2020-02-29 Thread antlists

On 29/02/2020 17:40, james wrote:
is if the US government returns to the fundamental christian value 
system, that made our country great. Greed, un-bridled, is changing 
the quality of our lives, regardless of your personal belief systems. 


Our country? I think you mean YOUR country. And seen from outside, that 
greed has been there pretty much from its birth ...


I'll agree about greed being the problem, though. But it's pretty 
difficult to return to a place you've never been, imho ...



Cheers,

Wol




Re: [gentoo-user] ...recreating exactly the same applications on a new harddisc?

2020-04-07 Thread antlists

On 07/04/2020 00:38, Michael wrote:

Perhaps older UEFI specifications allowed Mac-backed filesystems, or perhaps
Apple were/are doing their own thing.  The current UEFI specification
*requires*  a FAT 12/16/32 filesystem type on an ESP partition to boot an OS
image/bootloader from - see section '13.3 File System Format':


Reading the spec, it says the firmware must *support* FAT, not that the ESP must *use* it.

What I was told - by someone I see no reason to disbelieve - was that if 
a vendor wants to support a different filesystem *in addition*, provided 
it supports all the calls then there's no problem.


(Incidentally, if that's the final spec, I think I've spotted a mistake 
in it - it clearly doesn't actually mean what it says in at least one 
place ...)


Cheers,
Wol



Re: [gentoo-user] ...recreating exactly the same applications on a new harddisc?

2020-04-06 Thread antlists

On 05/04/2020 13:52, Neil Bothwick wrote:

This isn't strictly true, the ESP must be vfat, but you can still have an
ext? /boot.


This isn't true at all - you've got the cart before the horse. The 
original (U)EFI spec comes from Sun, I believe, with no vfat in sight.


A standards-compliant factory-fresh Mac boots using UEFI with no vfat in 
sight.


A standards-compliant UEFI firmware MUST BE CAPABLE of booting from (a 
certain version of) vfat. So if you use a vfat partition the spec says 
it must work. It doesn't demand you use vfat, so Macs use HFS+ or 
whatever it is, and there is no reason why eg System7 couldn't write 
their own firmware to use ext2 or whatever.


Cheers,
Wol



Re: [gentoo-user] Understanding fstrim...

2020-04-13 Thread antlists

On 13/04/2020 17:05, Rich Freeman wrote:

And what takes time when doing a "large" TRIM is transmitting a
_large_  list of blocks to the SSD via the TRIM command. That's why
e.g. those ~6-7GiB trims I did just before (see my other mail) took a
couple of seconds for 13GiB ~ 25M LBAs ~ a whole effin bunch of TRIM
commands (no idea... wait, 1-4kB per TRIM and 4B/LBA is max. 1k
LBAs/TRIM and for 25M LBAs you'll need minimum 25-100k TRIM
commands... go figure;)  no wonder it takes a second or few;)



There is no reason that 100k TRIM commands need to take much time.
Transmitting the commands is happening at SATA speeds at least.  I'm
not sure what the length of the data in a trim instruction is, but
even if it were 10-20 bytes you could send 100k of those in 1MB, which
takes <10ms to transfer depending on the SATA generation.


Dare I say it ... buffer bloat? poor implementation?

aiui, the spec says you can send a command "trim 1GB starting at block 
X". Snag is, the linux block size of 4KB means that it gets split into 
loads of trim commands, which then clogs up all the buffers ...


Plus all too often the trim command is synchronous, so although it is 
pretty quick, the drive won't accept the next command until the previous 
one has completed.
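For scale, here's a rough back-of-the-envelope sketch (the per-command limits are assumptions about a typical ATA DSM TRIM payload, not figures from the thread): if discard ranges were perfectly coalesced, even 13 GiB would need only a handful of commands - which supports the point that the slowness comes from how the requests get split up, not from raw command count.

```python
# Rough estimate of the minimum number of ATA TRIM (DATA SET MANAGEMENT)
# commands for a given discard. Assumed parameters: each 512-byte payload
# block carries 64 range entries, each entry covering at most 65535
# sectors (these limits are assumptions, and vary per drive).
SECTOR = 512
ENTRIES_PER_BLOCK = 64
MAX_SECTORS_PER_ENTRY = 65535

def trim_commands(total_bytes, payload_blocks=1):
    """Minimum TRIM commands if every range entry is maximally packed."""
    sectors = -(-total_bytes // SECTOR)              # ceiling division
    entries = -(-sectors // MAX_SECTORS_PER_ENTRY)
    per_cmd = ENTRIES_PER_BLOCK * payload_blocks
    return -(-entries // per_cmd)

# 13 GiB, as in the example quoted above:
print(trim_commands(13 * 2**30))   # 7
```

Seven commands versus tens of thousands: the difference is whether the ranges arrive coalesced or as one tiny discard per filesystem extent.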


Cheers,
Wol



Re: [gentoo-user] Alternate Incoming Mail Server

2020-04-11 Thread antlists

On 06/04/2020 14:08, Ashley Dixon wrote:

After my thankfully-brief experience with the likes of Microsoft and their
Exchange program, I always question how much impact the content of an R.F.C.
actually has on an implementation.


:-)

Okay, it was a long time ago, and it was MS-Mail (Exchange's 
predecessor, for those who can remember back that far), but I had an 
argument with my boss. He was well annoyed with our ISP for complying 
with RFC's because they switched to ESMTP and MS-Mail promptly broke. 
The *ONLY* acceptable reason for terminating a connection is when you 
receive the command "QUIT", so when Pipex sent us the command EHLO, 
MS-Mail promptly dropped the connection ...


Pipex, and I suspect other ISPs, had to implement an extended black list 
of customers who couldn't cope with ESMTP.


Cheers,
Wol



Re: [gentoo-user] Alternate Incoming Mail Server

2020-04-11 Thread antlists

On 07/04/2020 11:53, Ashley Dixon wrote:

Grant's mail server, I assume, is configured with the highest security in mind,
so I can see how a mail server with a dynamic I.P. could cause issues in some
contexts. I just wish my I.S.P. offered _any_ sort of static I.P. package, but
given that I live in a remote area in the north of England, I.S.P.s aren't
exactly plentiful.


https://www.aa.net.uk/

Andrews and Arnold. From what I know of them, a fair few people who know 
what they're talking about say they're good. Sounds like they're what 
Demon were before Clueless and Witless took them over.


Cheers,
Wol



Re: [gentoo-user] Alternate Incoming Mail Server

2020-04-11 Thread antlists

On 11/04/2020 21:33, Grant Taylor wrote:

On 4/11/20 2:08 PM, antlists wrote:
Okay, it was a long time ago, and it was MS-Mail (Exchange's 
predecessor, for those who can remember back that far), but I had an 
argument with my boss. He was well annoyed with our ISP for complying 
with RFC's because they switched to ESMTP and MS-Mail promptly broke.


I don't recall any RFC (from the time) stating that ESMTP was REQUIRED. 
It may have been a SHOULD.


The ISP chose to make the change that resulted in ESMTP.


Yes. But as per spec ESMTP was/is compatible with SMTP.


Also, I'm fairly sure that MS-Mail didn't include SMTP in any capacity. 
It required an external MS-Mail SMTP gateway, which Microsoft did 
sell, at an additional cost.


Yes, it is the gateway I'm talking about ...

Which was also a pain in the neck because it was single-threaded - if 
the ISP tried to send an incoming email at the same time the gateway 
tried to send, the gateway hung. You could pretty much guarantee most 
mornings I'd be in the server room deleting a bunch of private emails 
from the outgoing queue, and repeatedly rebooting until the queues in 
both directions managed to clear.


The *ONLY* acceptable reason for terminating a connection is when you 
receive the command "QUIT", so when Pipex sent us the command EHLO, 
MS-Mail promptly dropped the connection ...


I'll agree that what you're describing is per the (then) SMTP state 
machine.  We have since seen a LOT of discussion about when it is proper 
or not proper to close the SMTP connection.


The point is that when the sending server issues EHLO, it is *not* a 
*permitted* response for the receiving end to drop the connection.


If the MS-Mail SMTP gateway had sent a 5xy error response, I could see how 
it could subsequently close the connection per the protocol state machine.


Pipex, and I suspect other ISPs, had to implement an extended black 
list of customers who couldn't cope with ESMTP.


If the MS-Mail SMTP gateway hadn't closed the connection and instead 
just returned an error for the command being unknown / unsupported, 
Pipex would have quite likely tried a standard HELO immediately after 
getting the response.


That was the specification for ESMTP - the receiving server should reject 
EHLO with an error response, the sender then tries again with HELO, and 
things (supposedly) proceed as they should. Which they can't, if the 
receiving end improperly kills the connection instead.
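That fallback is simple enough to sketch (a toy illustration, not real smtplib or Courier code; `send` stands in for one SMTP command/response round trip, and `legacy_server` is an invented stand-in for a pre-ESMTP implementation):

```python
# ESMTP greeting with graceful fallback: try EHLO first; if the server
# rejects it with a 5xx "unrecognised command" style error, retry with
# plain HELO instead of anyone dropping the connection.
def greet(send, hostname):
    code, _ = send("EHLO " + hostname)
    if 500 <= code < 600:                  # old server: fall back to HELO
        code, _ = send("HELO " + hostname)
    if code != 250:
        raise RuntimeError("greeting rejected: %d" % code)
    return code

# A fake legacy server that only understands HELO:
def legacy_server(cmd):
    if cmd.startswith("HELO"):
        return (250, "ok")
    return (500, "command unrecognized")

print(greet(legacy_server, "mail.example.com"))   # 250, via the HELO retry
```

The MS-Mail behaviour described above is what happens when the receiving side skips the 5xx response and slams the connection shut instead.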


Also, we're talking about the late '90s during the introduction of 
ESMTP, which was a wild west time for SMTP.


Which shouldn't have been a problem. ESMTP was designed to fall back 
gracefully to SMTP. But if clients don't behave correctly as per the 
SMTP spec, how can the server degrade gracefully?


Cheers,
Wol



Re: [gentoo-user] Re: SDD, what features to look for and what to avoid.

2020-04-02 Thread antlists

On 02/04/2020 13:37, J. Roeleveld wrote:

Same here, the colour cartridges have been saying they're "critically low" for
the past couple of months. As they don't expire, I did order a new set when
they got low. Those are still sealed in storage.


Yup. I ordered a set when they hit critically low. And replaced the 
yellow when it started looking naff. The other colours still look fine. 
The printer has been registered for its "free" 3-yr warranty, but it's a 
requirement that you must use only genuine HP cartridges. So I'm 
reckoning that once this first set of replacement cartridges runs out, 
the printer will be over three years old. The second set of cartridges 
will be compatibles, and if I pay £20 instead of £100 per cartridge, 
that'll save me enough to buy a new printer if the old one breaks. I 
think the set of four XL cartridges cost more than the printer did!


Printers don't seem to last nowadays, anyway. I remember somebody asking 
about "old printers", and there were no "medium age" printers - they 
were all either less than five years old, or ancient LaserJet 4 era 
printers. Anything in-between had died (or, like one of my old printers, 
been sunk by production of consumables stopping).


Cheers,
Wol



Re: [gentoo-user] how do you monitor your pc?

2020-04-02 Thread antlists

On 02/04/2020 12:29, Caveman Al Toraboran wrote:

[question 1] i wonder how do you monitor your pc?


While it doesn't look for problems, I always have xosview running on my 
desktop. It tells me when the system is struggling, and it supposedly 
monitors things like raid.


Cheers,
Wol



Re: [gentoo-user] Still questions concerning a reasonable setup of a new system: UEFI &&/|| MTBR

2020-03-28 Thread antlists

On 28/03/2020 06:19, tu...@posteo.de wrote:

 From what I read on the internet:
Everything bigger than 2TB needs to be GPT-formatted.
Is there anything better than gparted for that job?


gdisk? fdisk?

Basically, I think pretty much all of the popular linux utilities have 
been updated.


You could format it with an MBR, but that would only allow you to use 2TB of it.
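That limit falls straight out of the MBR's on-disk format - partition entries store start and length as 32-bit sector counts, so with 512-byte sectors:

```python
# Why MBR tops out around 2 TiB: a partition entry's sector count is a
# 32-bit field, and legacy sectors are 512 bytes.
max_bytes = (2**32 - 1) * 512
print(max_bytes / 2**40)   # just under 2 TiB
```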

Cheers,
Wol



Re: [gentoo-user] Re: SDD, what features to look for and what to avoid.

2020-04-01 Thread antlists

On 01/04/2020 22:46, Dale wrote:

I still haven't bought it yet.  I ordered some toner cartridges a while
back for my printer.  The site said that the ones I ordered fit my
printer.  Well, it appears they found out that was an error because they
removed that page and relisted it but did not include my printer model.
So, I had to order a whole new set, at about $100.00 each for high
yield.  Needless to say, I'll be paying on that for a while.  I'll try
to sell the wrong ones later.  I only opened one color so the others are
still sealed.


I know you're not UK, but under European rules you'd be able to return 
that opened cartridge at their expense - "not as described" and it's 
their problem even if you had to open it to find out.


BTW, next time I'll find a printer that allows refilling and such too.
I don't like that chip thing.  It counts against my page count on color
even if I print a black and white page.  Still, printer does a awesome
job.  Beats those ink jet thingys by a country mile.


My HP laser is like that. Mind you, my colour cartridges "ran out" at 
about 1900 pages. They're still going fine (well, the yellow got 
replaced) at 4000 pages. But HP do say that a cartridge running empty 
won't damage the printer - all the vulnerable parts are in the cartridge.


Cheers,
Wol



Re: [gentoo-user] "Amount" of fstrim? (curiosity driven, no paranoia :)

2020-04-27 Thread antlists

On 27/04/2020 17:59, Rich Freeman wrote:

Really though a better solution than any of this is for the filesystem
to be more SSD-aware and just only perform writes on entire erase
regions at one time.  If the drive is told to write blocks 1-32 then
it can just blindly erase their contents first because it knows
everything there is getting overwritten anyway.  Likewise a filesystem
could do its own wear-leveling also, especially on something like
flash where the cost of fragmentation is not high.  I'm not sure how
well either zfs or ext4 perform in these roles.  Obviously a solution
like f2fs designed for flash storage is going to excel here.


The problem here is "how big is an erase region". I've heard comments 
that it is several megs. Trying to consolidate writes into megabyte 
blocks is going to be tricky, to say the least, unless you're dealing 
with video files or hi-res photos - I think the files my camera chucks 
out are in the 10MB region ... (24MP raw...)


Cheers,
Wol



Re: [gentoo-user] Trouble with backup harddisks

2020-04-30 Thread antlists

On 30/04/2020 18:04, tu...@posteo.de wrote:

I copied the first 230GB of that disk to an empty partition of my new
system and ran "testdisk" on it ... after the analysis it came back
with "this partition cannot be recovered" but did not say whether the
reason is a partition table which is broken beyond repair, or simply
the incomplete image file...


Just come up with another idea that will hopefully give us some clues...

https://raid.wiki.kernel.org/index.php/Asking_for_help#lsdrv

Ignore the fact that it's the raid wiki - this tool basically delves 
into the drive and tries to find mbr, gpt, superblock, lvm and raid 
signatures, everything ...


So run that and see what it tells us ...

Cheers,
Wol



Re: [gentoo-user] Trouble with backup harddisks

2020-05-01 Thread antlists

On 01/05/2020 09:03, tu...@posteo.de wrote:

Hi Wol,

data copied !:)

I did a

 mdadm --examine /dev/sdb


Except I pointed you at a utility called lsdrv, not mdadm ... :-)

Cheers,
Wol



Re: [gentoo-user] which linux RAID setup to choose?

2020-05-03 Thread antlists

On 03/05/2020 18:55, Caveman Al Toraboran wrote:

On Sunday, May 3, 2020 1:23 PM, Wols Lists  wrote:


For anything above raid 1, MAKE SURE your drives support SCT/ERC. For
example, Seagate Barracudas are very popular desktop drives, but I guess
maybe HALF of the emails asking for help recovering an array on the raid
list involve them dying ...

(I've got two :-( but my new system - when I get it running - has
ironwolves instead.)


that's very scary.

just to double check:  are those help emails about
linux's software RAID?  or is it about hardware
RAIDs?


They are about linux software raid. Hardware raid won't be any better.


the reason i ask about software vs. hardware, is
because of this wiki article [1] which seems to
suggest that mdadm handles error recovery by
waiting for up to 30 seconds (set in
/sys/block/sd*/device/timeout) after which the
device is reset.


Which, if your drive does not support SCT/ERC, goes *badly* wrong.


am i missing something? 


Yes ...


to me it seems that [1]
seems to suggest that linux software raid has a
reliable way to handle the issue? 


Well, if the paragraph below were true, it would.


since i guess all disks support resetting well?


That's the point. THEY DON'T! That's why you need SCT/ERC ...


[1] https://en.wikipedia.org/wiki/Error_recovery_control#Software_RAID


https://raid.wiki.kernel.org/index.php/Choosing_your_hardware,_and_what_is_a_device%3F#Desktop_and_Enterprise_drives

https://raid.wiki.kernel.org/index.php/Timeout_Mismatch
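In practical terms, the timeout-mismatch check and fix look something like this (a sketch only - it needs a real drive and smartmontools, and `/dev/sda` is a placeholder):

```shell
# Check whether the drive supports SCT ERC and what it is set to:
smartctl -l scterc /dev/sda

# If supported, cap error recovery at 7 seconds (70 deciseconds) so the
# drive gives up before the kernel's default 30 s command timeout:
smartctl -l scterc,70,70 /dev/sda

# If the drive does NOT support SCT ERC, raise the kernel timeout
# instead (the raid wiki suggests 180 s); this resets on every boot:
echo 180 > /sys/block/sda/device/timeout
```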

Cheers,
Wol



Re: [gentoo-user] which linux RAID setup to choose?

2020-05-03 Thread antlists

On 03/05/2020 22:46, Caveman Al Toraboran wrote:

On Sunday, May 3, 2020 6:27 PM, Jack  wrote:


curious.  how do people look at --layout=n2 in the
storage industry?  e.g. do they ignore the
optimistic case where 2 disk failures can be
recovered, and only assume that it protects for 1
disk failure?


You CANNOT afford to be optimistic ... Murphy's law says you will lose 
the wrong second disk.


i see why gambling is not worth it here, but at
the same time, i see no reason to ignore reality
(that a 2 disk failure can be saved).


Don't ignore that some 2-disk failures CAN'T be saved ...
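For the 4-disk n2 case this is easy to enumerate (a quick sketch; it assumes the n2 layout mirrors adjacent disks, so the mirror pairs are {0,1} and {2,3}):

```python
# Enumerate which two-disk failures a 4-disk RAID10 --layout=n2 survives.
from itertools import combinations

pairs = [{0, 1}, {2, 3}]   # assumed n2 mirror pairing

def survives(failed):
    # The array survives iff no mirror pair has lost BOTH copies.
    return not any(p <= set(failed) for p in pairs)

two_disk = list(combinations(range(4), 2))
saved = [f for f in two_disk if survives(f)]
print(len(saved), "of", len(two_disk))   # 4 of 6
```

Losing both members of a mirror pair - 2 of the 6 cases - is unrecoverable, which is exactly the "wrong second disk" Murphy delivers.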


e.g. a 4-disk RAID10 with -layout=n2 gives

 1*4/10 + 2*4/10 = 1.2

expected recoverable disk failures.  details are
below:


now, if we do a 5-disk --layout=n2, we get:

 1(1)2(2)3
(3)4(4)5(5)
 6(6)7(7)8
(8)9(9)10   (10)
 11   (11)   12   (12)   13
(13) ...

obviously, there are 5 possible ways a single disk
may fail, out of which all of the 5 will be
recovered.


Don't forget a 4+spare layout, which *should* survive a 2-disk failure.


there are nchoosek(5,2) = 10 possible ways a 2
disk failure could happen, out of which 5
will be recovered:


so, by transforming a 4-disk RAID10 into a 5-disk
one, we increase total storage capacity by a 0.5
disk's worth of storage, while losing the ability
to recover 0.2 disks.

but if we extended the 4-disk RAID10 into a
6-disk --layout=n2, we will have:

= 1 * 6/(6 + nchoosek(6,2)) + 2 * (nchoosek(6,2) - 3)/(6 + nchoosek(6,2))

= 6/21 + 2 * 12/21

= 1.4286 expected recoverable failing disks.

and, given that a 2 disk failure has happened, 12 of the
nchoosek(6,2) = 15 cases are recoverable, i.e. there is an
80% chance of surviving a 2 disk failure.

so, i wonder, is it a bad decision to go with an
even number disks with a RAID10?  what is the
right way to think to find an answer to this
question?

i guess the ultimate answer needs knowledge of
these:

 * F1: probability of having 1 disks fail within
   the repair window.
 * F2: probability of having 2 disks fail within
   the repair window.
 * F3: probability of having 3 disks fail within
   .   the repair window.
   .
   .
 * Fn: probability of having n disks fail within
   the repair window.

 * R1: probability of surviving 1 disks failure.
   equals 1 with all related cases.
 * R2: probability of surviving 2 disks failure.
   equals 1/3 with 5-disk RAID10
   equals 0.8 with a 6-disk RAID10.
 * R3: probability of surviving 3 disks failure.
   equals 0 with all related cases.
   .
   .
   .
 * Rn: probability of surviving n disks failure.
   equals 0 with all related cases.

 * L : expected cost of losing data on an array.
 * D : price of a disk.


Don't forget, if you have a spare disk, the repair window is the length 
of time it takes to fail-over ...


this way, the absolute expected cost when adopting
a 6-disk RAID10 is:

= 6D + F1*(1-R1)*L + F2*(1-R2)*L + F3*(1-R3)*L + ...
= 6D + F1*(1-1)*L + F2*(1-0.8)*L + F3*(1-0)*L + ...
= 6D + 0  + F2*(0.2)*L   + F3*(1-0)*L + ...

and the absolute cost for a 5-disk RAID10 is:

= 5D + F1*(1-1)*L + F2*(1-0.3333)*L + F3*(1-0)*L + ...
= 5D + 0  + F2*(0.6667)*L   + F3*(1-0)*L + ...

canceling identical terms, the difference cost is:

6-disk ===> 6D + 0.2*F2*L
5-disk ===> 5D + 0.6667*F2*L

from here [1] we know that a 1TB disk costs
$35.85, so:

6-disk ===> 6*35.85 + 0.2*F2*L
5-disk ===> 5*35.85 + 0.6667*F2*L

now, at which point is a 5-disk array a better
economical decision than a 6-disk one?  for
simplicity, let LOL = F2*L:

5*35.85 + 0.6667 * LOL  <   6*35.85 + 0.2 * LOL
0.6667*LOL - 0.2 * LOL  <   6*35.85 - 5*35.85
LOL * (0.6667 - 0.2)<   6*35.85 - 5*35.85

         6*35.85 - 5*35.85
LOL  <   -----------------
           0.6667 - 0.2

LOL  <   76.816
F2*L <   76.816

so, a 5-disk RAID10 is better than a 6-disk RAID10
only if:

 F2*L  <  76.816 bucks.

this site [2] says that 76% of seagate disks fail
per year (:D).  and since disks fail independent
of each other mostly, then, the probabilty of
having 2 disks fail in a year is:

76% seems incredibly high. And no, disks do not fail independently of 
each other. If you buy a bunch of identical disks, at the same time, and 
stick them all in the same raid array, the chances of them all wearing 
out at the same time are rather higher than random chance would suggest.


Which is why, if a raid disk fails, the advice is always to replace it 
asap. And if possible, to recover the failed drive to try and copy that 
rather than hammer the rest of the raid.


Bear in mind that, it doesn't matter how many drives a raid-10 has, if 
you're recovering on to a new drive, the data is stored on just two of 
the drives.

Re: [gentoo-user] which linux RAID setup to choose?

2020-05-03 Thread antlists

On 03/05/2020 21:07, Rich Freeman wrote:

I don't think you should focus so much on whether read=write in your
RAID.  I'd focus more on whether read and write both meet your
requirements.


If you think about it, it's obvious that raid-1 will read faster than it 
writes - it has to write two copies while it only reads one.


Likewise, raids 5 and 6 will be slower writing than reading - for a 
normal read it only reads the data disks, but when writing it has to 
write (and calculate!) parity as well.


A raid 1 should read data faster than a lone disk. A raid 5 or 6 should 
read noticeably faster because it's reading across more than one disk.
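The parity cost is easy to see with raid-5's XOR scheme (a toy sketch over byte strings - real arrays work on whole stripes, of course):

```python
# RAID-5 style XOR parity: every write must also update parity, but any
# one lost block can be rebuilt from the survivors.
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"
parity = xor(xor(d1, d2), d3)      # extra work on every write

# If d2 is lost, rebuild it from the remaining data plus parity:
rebuilt = xor(xor(d1, d3), parity)
print(rebuilt == d2)   # True
```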


If you're worried about write speeds, add a cache.

Cheers,
Wol



Re: [gentoo-user] Python:2.7 and removing it early

2020-05-04 Thread antlists

On 04/05/2020 20:57, Dale wrote:

Alessandro Barbieri wrote:

At least
gimp-help
scribus
nut
fbpanel
are Python2 only, didn't check stuff from overlays



That makes sense.  I can see where some can work with old and new python
but some appeared to be still stuck on the old 2.7.  Guess I'll have to
wait since I use those.  Maybe they will be updated soon.

Another app that's 2.7 only is the current version of lilypond. The new 
dev version I think can run without python2, but certainly building the 
stable version demands it. I *think* if you get the pre-compiled binary 
the current version can run (crippled) without python2.


Cheers,
Wol



Re: [gentoo-user] Seagate ST8000NM0065 PMR or SMR plus NAS SAS SATA question

2020-05-10 Thread antlists

On 10/05/2020 20:11, Rich Freeman wrote:

I did find a WD Red 8TB drive.  It costs a good bit more.  It's a good
deal but still costs more.  I'm going to keep looking.  Eventually I'll
either spend the money on the drive or find a really good deal.  My home
directory is at 69% so I got some time left.  Of course, my collection
is still growing. o_O



In theory the 8TB reds are SMR-free.


I thought I first found it on this list - wasn't it reported that the 
1TB and 8TB were still CMR but everything between was now SMR? Pretty 
naff since the SMR drives all refuse to add to a raid array, despite 
being advertised as "for NAS and RAID". Under UK law that would be a 
slam dunk RMA as "unfit for purpose".


Try the "Red Pro", which apparently are still all CMR. To the best of my 
knowledge the Seagate Ironwolves are still SMR-free, and there's also 
the Ironwolf Pros.


I've got two Ironwolves, but they're 2018-vintage. I think they're Red 
equivalents.


Cheers,
Wol



Re: [gentoo-user] New system, systemd, and dm-integrity

2020-05-15 Thread antlists

On 15/05/2020 11:20, Neil Bothwick wrote:

On Fri, 15 May 2020 11:19:06 +0100, Neil Bothwick wrote:


How are you generating the initramfs? If you use dracut, there are
options you can add to it's config directory, such as install_items to
make sure your service files are included.



I presume I'll be using dracut ...


Or you can create a custom module, they are just shell scripts. I recall
reading a blog post by Rich on how to do this a few years ago.


My custom module calls a shell script, so it shouldn't be that hard from 
what you say. I then need to make sure the program it invokes 
(integritysetup) is in the initramfs?


Cheers,
Wol



Re: [gentoo-user] New system, systemd, and dm-integrity

2020-05-15 Thread antlists

On 15/05/2020 12:30, Rich Freeman wrote:

On Fri, May 15, 2020 at 7:16 AM antlists  wrote:


On 15/05/2020 11:20, Neil Bothwick wrote:


Or you can create a custom module, they are just shell scripts. I recall
reading a blog post by Rich on how to do this a few years ago.


My custom module calls a shell script, so it shouldn't be that hard from
what you say. I then need to make sure the program it invokes
(integritysetup) is in the initramfs?


The actual problem that this module solves is no-doubt long solved
upstream, but here is the blog post on dracut modules (which is fairly
well-documented in the official docs as well):
https://rich0gentoo.wordpress.com/2012/01/21/a-quick-dracut-module/


I don't think it is ... certainly I'm not aware of anything other than 
LUKS that uses dm-integrity, and LUKS sets it up itself.


Basically you have a shell script that tells dracut when building the
initramfs to include in it whatever you need.  Then you have the phase
hooks that actually run whatever you need to run at the appropriate
time during boot (presumably before the mdadm stuff runs).

My example doesn't install any external programs, but there is a
simple syntax for that.

If your module is reasonably generic you could probably get upstream
to merge it as well.


No. Like LUKS, I intend to merge the code into mdadm and let the raid 
side handle it. If mdadm detects a dm-integrity/raid setup, it'll set up 
dm-integrity and then recurse to set up raid.


Good luck with it, and I'm curious as to how you like this setup vs
something more "conventional" like zfs/btrfs.  I'm using single-volume
zfs for integrity for my lizardfs chunkservers and it strikes me that
maybe dm-integrity could accomplish the same goal with perhaps better
performance (and less kernel fuss).  I'm not sure I'd want to replace
more general-purpose zfs with this, though the flexibility of
lvm+mdadm is certainly attractive.

openSUSE is my only experience of btrfs. And it hasn't been nice. When 
it goes wrong it's nasty. Plus only raid 1 really works - I've heard 
that 5 and 6 have design flaws which mean it will be very hard to get 
them to work properly. I've never met zfs.


As the linux raid wiki says (I wrote it :-) do you want the complexity 
of a "do it all" filesystem, or the abstraction of dedicated layers?


The big problem that md-raid has is that it has no way of detecting or 
dealing with corruption underneath. Hence me wanting to put dm-integrity 
underneath, because that's dedicated to detecting corruption. So if 
something goes wrong, the raid gets a read error and sorts it out.
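Until that mdadm integration exists, the manual layering looks something like this (a sketch only - these commands are destructive and the device names are placeholders):

```shell
# dm-integrity under each leg, then md raid-1 on top:
integritysetup format /dev/sdX1
integritysetup open   /dev/sdX1 int-a
integritysetup format /dev/sdY1
integritysetup open   /dev/sdY1 int-b

# A checksum failure below surfaces to md as a read error,
# and md repairs it from the good mirror:
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      /dev/mapper/int-a /dev/mapper/int-b
```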


Then lvm provides the snap-shotting and sort-of-backups etc.

But like all these things, it's learning that's the big problem. With my 
main system, I don't want to experiment. My first gentoo system was an 
Athlon Thunderbird (K7) on ext. The next one is my current Athlon II X3 
mirrored across two 3TB drives. Now I'm throwing dm-integrity and lvm 
into the mix with two 4TB drives. So I'm going to try and learn KVM ... :-)


Cheers,
Wol



Re: [gentoo-user] SDD strategies...

2020-03-18 Thread antlists

On 17/03/2020 11:54, madscientistatlarge wrote:

The issue is not usually end of trusted life, but rather random failure.  I've 
barely managed to recover failed hard drives; recovery is less likely to work 
on an SSD, though failure itself is possibly less likely to happen.


The drive may be less likely to fail, but I'd say raid or backups are a 
necessity.


From what I've heard, SSDs tend to go read-only when they fail (that's 
fine), BUT SELF-DESTRUCT ON A POWER CYCLE!!!


So don't bank on being able to access a failed SSD after a reboot.

Cheers,
Wol



Re: [gentoo-user] Re: SDD strategies...

2020-03-18 Thread antlists

On 17/03/2020 14:29, Grant Edwards wrote:

On 2020-03-17, Neil Bothwick  wrote:


Same here. The main advantage of spinning HDs is that they are cheaper
to replace when they fail. I only use them when I need lots of space.


Me too. If I didn't have my desktop set up as a DVR with 5TB of
recording space, I wouldn't have any spinning drives at all.  My
personal experience so far indicates that SSDs are far more reliable
and long-lived than spinning HDs.  I would guess that about half of my
spinning HDs fail in under 5 years.  But then again, I tend to buy
pretty cheap models.

If you rely on raid, and use spinning rust, DON'T buy cheap drives. I 
like Seagate, and bought myself Barracudas. Big mistake. Next time 
round, I bought Ironwolves. Hopefully that system will soon be up and 
running, and I'll see whether that was a good choice :-)


Cheers,
Wol



Re: [gentoo-user] SDD strategies...

2020-03-18 Thread antlists

On 17/03/2020 05:59, tu...@posteo.de wrote:

Hi,

currently I am setting up a new PC to replace my 12-year-old one,
which has reached the limits of its "computational power" :)

SSDs are a common replacement for HDs nowadays -- but I still trust my
HDs more than these "flashy" things... call me retro or oldschool, but
that is my current "Bauchgefühl" (gut feeling).


Can't remember where it was - some mag ran a stress-test on a bunch of 
SSDs and they massively outlived their rated lives ... I think even the 
first to fail survived about 18 months of continuous hammering - and I 
mean hammering!


To reduce write cycles to the SSD, which are quite a lot when using
UNIX/Linux (logging etc) and especially GENTOO (compiling sources
instead of using binary packages -- which is GOOD!), I am planning
the following setup:

The system will boot from SSD.

The HD will contain the whole system including the complete root
filesystem. Updating and installing via Gentoo tools will run using
the HD. When that process has ended, I will rsync the HD-based root
filesystem to the SSD.


Whatever for?


Folders which will be written to by the system while running will
be symlinked to the HD.

This should work...?

Or is there another idea to setup a system which will benefit from
the advantages of a SSD by avoiding its disadvantages?


If you've got both an SSD and an HD, just use the HD for swap, /tmp, 
/var/tmp/portage (possibly the whole of /var/tmp), and any other area 
where you consider files to be temporary.
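As a sketch, that's just a few fstab lines (devices and paths are illustrative - adjust to your own partitioning):

```
# HD holds swap plus a work area; high-churn paths bind-mounted from it
/dev/sdb2        none       swap    sw       0 0
/dev/sdb1        /mnt/hd    ext4    defaults 0 2
/mnt/hd/tmp      /tmp       none    bind     0 0
/mnt/hd/var-tmp  /var/tmp   none    bind     0 0
```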


Background: I normally use a PC for a long time and try to avoid
buying things just for the sake of being more modern or newer.

Any idea for setting up such a system is heartily welcome -- thank you
very much in advance!

Why waste time and effort on a complex setup when it's going to gain 
you bugger all?


The only thing I would really advise is that you think about some form 
of snapshotting - LVM or btrfs - for your root file-system to 
protect against a messed up upgrade - take a snapshot, upgrade, and if 
anything goes wrong it's an easy roll-back.


Likewise, do the same for the rotating rust, and use that to back up 
/home - you can use some option to rsync that only over-writes what's 
changed, so you do a "snapshot then back up" and have loads of backups 
going back however far ...
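The rsync trick being alluded to is --link-dest: each run produces what looks like a full copy, but unchanged files are hard-linked against the previous snapshot, so only changed files take new space (paths below are illustrative):

```shell
# "Snapshot then back up" with plain rsync and hard links.
mkdir -p /tmp/demo/src /tmp/demo/backups
echo "hello" > /tmp/demo/src/file.txt

# First backup is a full copy:
rsync -a /tmp/demo/src/ /tmp/demo/backups/monday/

# Second backup hard-links anything unchanged since monday:
rsync -a --link-dest=/tmp/demo/backups/monday \
      /tmp/demo/src/ /tmp/demo/backups/tuesday/
```

Each dated directory is browsable as a complete backup, going back however many runs you keep.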


Cheers,
Wol



Re: [gentoo-user] Courier Sub-addressing

2020-05-22 Thread antlists

On 21/05/2020 21:14, Ashley Dixon wrote:

Hello,

I am attempting to set up sub-addressing on my  Courier  mail  server,  allowing
senders to directly deliver messages to a particular folder in my mailbox.   For
example, I want to provide my University with the address
`ash-academicmatt...@suugaku.co.uk` to force all their messages into the
"AcademicMatters" subdirectory.

Unfortunately,  I  can't  find  any  official  Courier  documentation  regarding
sub-addressing.  I have found [1], however I'm not sure it will apply  as  I  am
using virtual mailboxes.


If I understand what you are attempting correctly (not a given!) then 
what you are trying won't work. You're confusing multiple *folders* with 
multiple *users*.


I'm probably not describing this right, but let's say you've got a small 
business, with a POP3 email account of "busin...@isp.co.uk". However, 
you've set up a central server with each user having their own account 
eg John, Mary & Sue.


So you configure Sue's mail client to have an address of "Sue 
<sue+business@isp.co.uk>". Out in the internet, smtp servers look at the 
@isp.co.uk bit to deliver it to the right mailserver. Your ISP sees 
"sue+business", *ignores* the bit in front of the plus, and puts it in 
the "business" pop account. Your local mailserver now pulls down the 
email, ignores the bit *after* the +, and shoves it in Sue's email.


This is, I believe, an RFC so Courier is simply implementing the spec. 
That's probably why there is precious little Courier reference material, 
it assumes you have the RFC to hand ...
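The split Wol describes can be sketched with plain shell parameter expansion (the address is the hypothetical one from the example above):

```shell
addr="sue+business@isp.co.uk"
local_part="${addr%@*}"    # everything before the @  -> sue+business
domain="${addr#*@}"        # everything after the @   -> isp.co.uk
user="${local_part%%+*}"   # bit before the + (the real account)
folder="${local_part#*+}"  # bit after the + (the folder/POP-account hint)
echo "$user / $folder @ $domain"
```

Which end of the plus gets ignored depends on who is looking: the ISP's server keys on one half, the local server on the other.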


I don't know what happens with your "-" example, but it just looks wrong 
to me. It should be looking for an AcademicMatters POP account, and then 
delivering the mail to a user account called ash on the server called 
AcademicMatters. Internet email addresses and domains are read 
right-to-left (Janet used to be left-to-right, but the Americans won, as 
usual).


Cheers,
Wol



Re: [gentoo-user] Errors in nonexistent partitions

2020-09-14 Thread antlists

On 14/09/2020 08:48, Peter Humphrey wrote:

Just before this started, I booted Win-10 on /dev/sdb and ran its update
process. I don't use it for anything at the moment, just keeping it up to date
in case I ever do. I do this most weeks, but is it possible that Win-10
tampered in some way that it hasn't before? I'm seeing these errors on 
/dev/sda (which does have an NTFS partition) and /dev/nvme0n1 (which does not), but
not on /dev/sdb.


I know Windows has hidden partitions and things, but it shouldn't be 
tampering with the partition table. What sector does sda1 start on? It 
should be something like 2048. I don't play with that enough to really 
know what's going on, but if that number is single digits then that 
could be the problem ...
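A quick way to sanity-check that, as a sketch - the sample line below is canned output, on the real machine you'd feed it `fdisk -l /dev/sda` (which needs root):

```shell
# Pull the start sector of sda1 out of fdisk-style output.
# Canned sample stands in for real `fdisk -l /dev/sda` output here.
sample='/dev/sda1   2048 204802047 204800000 97.7G 83 Linux'
start=$(printf '%s\n' "$sample" | awk '$1 == "/dev/sda1" { print $2 }')
echo "sda1 starts at sector $start"
# A single-digit start sector would mean something has scribbled over
# the boot gap / partition table.
[ "$start" -ge 2048 ] && echo "start sector looks sane"
```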


Cheers,
Wol



Re: [gentoo-user] Errors in nonexistent partitions

2020-09-13 Thread antlists

On 13/09/2020 11:17, Peter Humphrey wrote:

Morning all,

My ~amd64 system uses partitions 1 to 18 on /dev/nvme0n1, and it has two SATA
disks as well, for various purposes. Today, after I'd taken the system down
for its weekly backup (I tar all the partitions to a USB disk) and started up
again, invoking gparted to look around, libparted spat out a list of
partitions from 19 to 128 which, it said, "have been written but we have been
unable to inform the kernel of the change..."

I remerged gparted, parted, libparted and udisks, then booted another system
and ran fsck -f on all the partitions from 4 to 18 - those that this system
uses - and rebooted. No change - the same complaint from libparted.

I get a similar complaint about /dev/sda.

Those errors are repeated once.

Is this a terminal condition? I could repartition and restore from backup, but
I hope someone can offer a clue before I resort to that.

You're using the wrong tool to try and fix it. There's clearly something 
wrong with your partition TABLE, and you're using a tool that fixes the 
partition CONTENTS.


Use gparted (or gdisk) on the DISK, and that should sort things out. 
Check whether it thinks those partitions exist or not, and then get it 
to write a new partition table to clean things up.


Cheers,
Wol



Re: [gentoo-user] Re: How to switch from rust to rust-bin?

2020-09-09 Thread antlists

On 09/09/2020 00:55, Neil Bothwick wrote:

That's the sort of thing I was thinking of. It could cause issues for
those that run emerge without checking what it is going to do, but they
are going to hit problems sometime anyway so they may as well learn their
lesson sooner;-)

Hmm, I hadn't realised there were so many virtuals, I just looked and
saw 186 of them.


Bear in mind --ask is interactive by its very nature, maybe it should 
ask "3 packages satisfy this virtual, please select ..."


Defaulting to package 1, of course.

Cheers,
Wol



Re: [gentoo-user] [OT] rsync rules question

2020-10-14 Thread antlists

On 14/10/2020 20:22, Neil Bothwick wrote:

Alternatively, for more instant updates, you could look at
net-p2p/syncthing.


Not sure what you use for it, but there are mirrored filesystems that 
run over a network. So the two systems will update in sync if they're 
both switched on, or will defer updates and sync as soon as the second 
system powers up.


Cheers,
Wol



Re: [gentoo-user] Re: tried desktop profile

2020-10-14 Thread antlists

On 14/10/2020 19:58, Grant Edwards wrote:

On 2020-10-14, antlists  wrote:


Does your mobo support NVMe drives? Just be aware my mobo is crap in
that it says it supports two graphics cards, NVMe, etc, but if you stick
an NVMe in the second graphics card is disabled, or if you use both the
NVMe slots you lose a couple of SATA ports, or whatever. Bit of a PoS in
that regard.


I think that's pretty common. NVMe uses PCI-express channels that are
often shared with one of the PCI-express "slots" on the motherboard.
As a result you can't use both at the same time.


Is that the sign of a cheap/rubbishy mobo?

Which is why I'm somewhat pissed off with that motherboard and the shop...

I took in a home-built system - with a £90 Gigabyte mobo - and said "it 
won't boot, I suspect the BIOS needs upgrading". When I went to fetch 
it, they said "it won't boot, the mobo's duff, so we've replaced it". 
They gave me the old mobo back, which I discovered was still under 
warranty so I sent it off ... came back "nothing wrong, we've updated 
the BIOS and it works fine" !!!


Bearing in mind (a) I need SATA ports for RAID, and (b) I want to run 
double-headed so I need two graphics cards, I'm well pissed that they 
charged me about £150 for the mobo (because they had difficulty finding 
one with loads of SATA ports), but as soon as I stick the second 
graphics card in some of the SATA ports stop working!


As far as I can tell the original Gigabyte has sufficient channels to 
drive everything no problem! I won't be using THAT shop again ...


Still. The old mobo will be going in a new case (it wouldn't fit it my 
spare) which will be a Coolermaster with room for plenty of 3.5 drives. 
That'll become my testbed/dev machine, which will probably "break" at 
regular intervals as it stress-tests the raid code.


I've bought a 2/4 port add-in SATA card for the system that's going to 
be our main desktop. 2/4 because it's only got two SATA channels, but 
they can go to either a SATA or eSATA port.


Cheers,
Wol



Re: [gentoo-user] tried desktop profile

2020-10-13 Thread antlists

On 13/10/2020 17:52, Jude DaShiell wrote:

Let's see if this is a correct bottom post.  I've never seen anything in
this life.  Eyes never developed enough for me to see.


Perfect, great!

Cheers,
Wol



Re: [gentoo-user] Re: tried desktop profile

2020-10-15 Thread antlists

On 15/10/2020 17:31, Jack wrote:

On 2020.10.15 00:03, Wols Lists wrote:

On 14/10/20 22:37, Jack wrote:
Why do you need two graphics cards?  I've been driving two monitors 
off each of the last several graphics cards I've used - both nVidia 
and ATI, from simple PCI to PCIE needing the extra power connector.


Because I'm not driving two monitors. I'm running a two-user system. 
AIUI, each session needs its own graphics card (which could drive two 
monitors for that user).


Now that would be an interesting project - a single instance of X, but 
split between two different users, each on a separate monitor.  I agree 
it's not feasible now, as xorg runs as the owning user.  I would expect 
Wayland has the same restriction.  Actually, I would find it interesting 
to have a separate session on each monitor just in plain text mode.


Was the second graphics card you tried using also PCIE?  If so, I wonder 
if you might get it to work with a PCI card as the second graphics card.


I haven't done it yet, it's the new system I'm building. And it's going 
to be an interesting experience :-)


But from everything I've found, it seems like there's no problem binding 
one instance of X to one card/keyboard/mouse, and a different instance 
to the other card/keyboard/mouse (or Wayland).


Cheers,
Wol



Re: [gentoo-user] Re: tried desktop profile

2020-10-15 Thread antlists

On 15/10/2020 21:07, Jack wrote:
However, and I have no experience here, although some systems can boot 
in BIOS mode to a GPT partitioned disk, there may be extra hoops to jump 
through.  I'm sure others will chime in with additional information.


I don't think my current system has EFI, and it boots quite happily from 
a GPT-formatted 3TB disk. In case it matters, however, the last 2TB are 
my /home partition, so everything else - /, swap, grub etc are all in 
the first TB.


Bear in mind that, in BIOS mode, grub installs itself in the "empty" 
first 2048 sectors.


Cheers,
Wol



Re: [gentoo-user] RAID: new drive on aac raid

2020-10-06 Thread antlists

On 05/10/2020 17:01, Stefan G. Weichinger wrote:

Am 05.10.20 um 17:19 schrieb Stefan G. Weichinger:


So my issue seems to be: non-working arcconf doesn't let me "enable"
that one drive.


Some kind of progress.

Searched for more and older releases of arcconf, found Version 1.2 that
doesn't crash here.

This lets me view the physical device(s), but the new disk is marked as
"Failed".

Does it think the disk is a negative size? I looked, your Tosh is 2TB, 
and the other I looked at was 700GB. The raid website says a lot of 
older controllers can't cope with 2TB or larger disks ...


Actually, the device information seems to confirm that - Total Size 0 MB ???


# ./arcconf GETCONFIG 1 PD  | more
Controllers found: 1
--
Physical Device information
--
   Device #0
  Device is a Hard drive
  State  : Failed
  Block Size : Unknown
  Supported  : Yes
  Transfer Speed : Failed
  Reported Channel,Device(T:L)   : 0,0(0:0)
  Reported Location  : Connector 0, Device 0
  Vendor : TOSHIBA
  Model  : MG04SCA20EE
  Firmware   : 0104
  Serial number  : 30A0A00UFX2B
  World-wide name: 539A08327484
  Total Size : 0 MB
  Write Cache: Unknown
  FRU: None
  S.M.A.R.T. : No
  S.M.A.R.T. warnings: 0
  SSD: No






Re: [gentoo-user] dhcpd versus fixed IP addresses

2020-10-04 Thread antlists

On 04/10/2020 22:17, Michael wrote:

Random (guest) devices connected to the router will still be allocated
dynamically some IP address by its dhcp server, typically starting from 2 and
incremented from there.  Since most of your devices IP addresses start from
the top, it's unlikely there'll be a clash, because any dynamically allocated IP
address leases will soon expire.


Some routers can be told "only allocate these addresses" such as 1-100. 
That way you've reserved a block (101-254) that you can assign 
statically knowing the router won't collide with them.
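If the router happens to run dnsmasq, that restriction looks like this (addresses and lease time are illustrative):

```conf
# Hand out dynamic leases only from .2 to .100; .101-.254 stays free
# for statically configured hosts.
dhcp-range=192.168.1.2,192.168.1.100,12h

# Optionally pin a known machine to a fixed address via its MAC:
dhcp-host=11:22:33:44:55:66,192.168.1.150
```

Consumer router firmware usually exposes the same idea as a "DHCP pool start/end" setting.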


Cheers,
Wol



Re: [gentoo-user] What happened to my emerge -u?

2020-10-11 Thread antlists

On 11/10/2020 21:55, n952162 wrote:


I ran into this:

https://forums.gentoo.org/viewtopic-t-1108636-start-0.html

but I really don't understand anything about these alternative 
instruction sets and would think my CPU is pretty vanilla.


I mean, I hope to avoid trial-and-error approaches to solving this ...


My first reaction was "have you specified x86 in make.conf?". As others 
have said, you need to make sure it's either x86_64, or AMD64, whatever 
is appropriate.


I think SSE2 first appeared with the Pentium 4, back around 2000, so any 
half-way-modern cpu will have it.


Dunno whether these musings will be any help, but we'll see ...

Cheers,
Wol



Re: [gentoo-user] Errors in nonexistent partitions - FIXED

2020-10-19 Thread antlists

On 19/10/2020 12:33, Peter Humphrey wrote:

Mystery solved. It was a disk failure: a 256GB NVMe drive. It was 4.5 years
old, which doesn't seem a long life to me.


Doesn't sound old, but if it breaks in the fault-tolerance-management 
area, then you're stuffed. Bit like old MFM (pre-IDE) drives had the 
bad-block-management area, and if the first (boot/partition) sector 
went, or said bad-block area, the drive was scrap.


Cheers,
Wol



Re: [gentoo-user]

2020-10-09 Thread antlists

On 09/10/2020 20:19, Jude DaShiell wrote:

available profiles
eselect profile list
returns three profiles I might use all in the 17.0 version number.
stable, desktop, and desktop-gnome.  I suspect stable would return a
console environment, desktop-gnome would get a gnome desktop, but how is
desktop different from desktop-gnome?
Maybe even better could eselect give this information if given the correct
parameters?


Well, Gnome is not the only desktop ...

I'm not sure what profiles are there, but I'd use desktop-kde-systemd, 
and add -gnome -gtk to my USE list. I can't stand gnome.


Cheers,
Wol



Re: [gentoo-user] [OT] rsync rules question

2020-10-14 Thread antlists

On 14/10/2020 14:58, Walter Dnes wrote:

  I'd like to keep my "hot backup" machine more up-to-date.  The most
important directory is my home directory.  So far, I've been doing
occasional tarballs of my home directory and pushing them over.  I
exclude some directories for "reasons".  I'd like to switch to rsync
and run it more often.  I've done the RTFM, but I'd more eyes to check
this out before the first run.


I haven't yet set up my system, but a couple of tweaks I'm planning to 
add ...


Either use btrfs, or an lvm partition, so you can take snapshots.

Do an in-place rsync (ie if part of a file has changed, it only writes 
the bit that's changed).


That way, you get a full backup for the price of an incremental.
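A minimal sketch of that workflow, assuming the backup lives on a btrfs subvolume (the paths and snapshot layout are illustrative, and since the real commands need root and a btrfs volume they are only printed here):

```shell
SRC="/home/"
DEST="/mnt/backup/home"
SNAP="/mnt/backup/snapshots/home-$(date +%Y-%m-%d)"

# 1. Freeze the previous backup state as a read-only snapshot (cheap, CoW).
echo btrfs subvolume snapshot -r "$DEST" "$SNAP"

# 2. Update the live copy in place: --inplace rewrites only the changed
#    blocks of each file, so unchanged data stays shared with the snapshots.
echo rsync -aHAX --inplace --delete "$SRC" "$DEST/"
```

Run that from cron and each day's snapshot costs only the space of what actually changed.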

The main change from your approach is that this will keep both an old 
and new copy of any file that has changed.


The big change you could make from your approach is that you CAN delete 
files from the backup if they've been deleted on the live system, 
without losing them.


Horses for courses, but if you are planning to keep your backup 
long-term, this could do a good job, provided you remember when you 
deleted that lost file from your live system :-)


I'm planning to back up 6TB of live data onto a 12TB shingled disk, 
which shouldn't be a problem, and given that not much changes (apart 
from adding new photos), each backup will probably use a few gigs at 
most. Dunno how long the disk will last, but it should be ages.


Cheers,
Wol



Re: [gentoo-user] tried desktop profile

2020-10-14 Thread antlists

On 14/10/2020 18:38, Jude DaShiell wrote:

Let's see, yes that was a typo, those ssd disks are each 120GB and
unfortunately the Alien ATX case used only has room for one of them.
However external ssd drives are on the market.


Does your mobo support NVMe drives? Just be aware my mobo is crap in 
that it says it supports two graphics cards, NVMe, etc, but if you stick 
an NVMe in the second graphics card is disabled, or if you use both the 
NVMe slots you lose a couple of SATA ports, or whatever. Bit of a PoS in 
that regard.


The other possibility is do you have spare 3.5 drive bays? You can get a 
converter to put 2.5 drives in them.


And can you stick an expansion card in? You might want to buy a card 
with eSATA ports.


Cheers,
Wol



Re: [gentoo-user]

2020-08-17 Thread antlists

On 17/08/2020 13:11, Dale wrote:

We put all the keywords in emails with white letters and a white background.


And how do I do that when I use plain latin-1 (or unicode) emails ... ?

Cheers,
Wol



Re: [gentoo-user] tips on running a mail server in a cheap vps provider run but not-so-trusty admins?

2020-08-28 Thread antlists

On 26/08/2020 19:51, Grant Taylor wrote:


Just because it's possible to force something to use HTTP(S) does not 
mean that it's a good idea to do so.


The main reason other applications use "TCP over HTTP(S)" is because 
stupid network operators block everything else!


Cheers,
Wol



Re: [gentoo-user] tips on running a mail server in a cheap vps provider run but not-so-trusty admins?

2020-08-28 Thread antlists

On 26/08/2020 18:40, Grant Taylor wrote:

On 8/21/20 10:15 PM, Caveman Al Toraboran wrote:
just to double check i got you right.  due to flushing the buffer to 
disk, this would mean that mail's throughput is limited by disk i/o?


Yes.

This speed limitation is viewed as a necessary limitation for the safety 
of email passing through the system.


Nothing states that it must be a single disk (block device).  It's 
entirely possible that a fancy MTA can rotate through many disks (block 
devices), using a different one for each SMTP connection.  Thus in 
theory allowing some to operate in close lock step with each other 
without depending on / being blocked by any given disk (block device).


Thank you for the detailed explanation Ashley.


Or think back to the old days - network was slow and disks were 
relatively fast. The disk was more than capable of keeping up with the 
network, and simple winchesters didn't lie about saving to the rotating 
rust ...


As Ashley explained, some MTAs trust the kernel.  I've heard of others 
issuing a sync after the write.  But that is up to each MTA's 
developers.  They have all taken reasonable steps to ensure the safety 
of email.  Some have taken more-than-reasonable steps.


Depends on the filesystem. "sync after write" was an EXTREMELY daft idea 
on ext4 in the early days - that would really have killed system response.


Nowadays you could stick an SSD cache in front of a raid array ...

Cheers,
Wol



Re: [gentoo-user] tips on running a mail server in a cheap vps provider run but not-so-trusty admins?

2020-08-28 Thread antlists

On 26/08/2020 21:21, Grant Taylor wrote:
so basically total expected number of protocols/layers used in the 
universe, per second, will be much less if we, on planet earth, use a 
mail system that uses HTTP* instead of RESXCH_*.


I obviously disagree.


Exactly. You now need a protocol/layer that says you're running "mail 
over http" as opposed to "web". HTTP is tcp/80 that *means* web. As soon 
as you start using it for something (anything) else you've just added 
another protocol/layer.


I get the distinct impression that Grant doesn't actually understand 
what TCP is ... (hint - it has port numbers that are meant (if they're 
not abused) to indicate what is going over the connection, like SMTP, or 
HTTP, or POP, or IMAP, etc etc).


Cheers,
Wol



Re: [gentoo-user] tips on running a mail server in a cheap vps provider run but not-so-trusty admins?

2020-08-28 Thread antlists

On 28/08/2020 20:34, J. Roeleveld wrote:

Cheers,
Wol

I think you meant that Caveman doesn't understand what TCP (and UDP) actually 
is.

Grant does seem to know what he is talking about.


Sorry yes I did. I got rather confused ... not surprising really :-)

Cheers,
Wol



Re: [gentoo-user] Gentoo Council vs Umbrella Corp ?

2020-08-28 Thread antlists

On 28/08/2020 19:10, james wrote:


A council member, from say England, could manage how 1/2 of what they 
raise is spent. It could even be "english centric" but must comply with USA 
IRS standards. Our council could be expanded to many members, from other 
countries, with a centric goal of spending Gentoo funds


WHY? As a Brit I wouldn't want to touch the American Legal System with a 
barge pole!!!


Truly, there is no other globally recognized tax system
like the USA-IRS (bad ass && world class open). That's why in times of 
trouble, entrepreneurs world wide flock to the "dollar". Also, being in 
elite standing with the USA-IRS opens many doors to enhance and 
promote and deploy GENTOO globally.


And as a Brit, while HMRC may be a pain, I have precious little to do 
with them. My employer deals with them, and at the end of the year I 
occasionally get a letter saying I've overpaid my taxes and here's some 
money back. Do I REALLY want to get involved with some foreign system 
that's WAY more complicated?


Get rid of your rose-tinted spectacles. For the MAJORITY of Brits, our 
tax system is way less complicated than yours. You'd be better off 
moving the foundation to somewhere that doesn't have your insane mix of 
state and federal taxes, and doesn't offload the responsibility onto 
people who don't understand the system.


Cheers,
Wol



Re: [gentoo-user] Time to switch back to AMD?

2020-08-19 Thread antlists

On 19/08/2020 04:44, Grant Edwards wrote:

How are the AMD "Wraith Stealth" fans?  I've been using the fan that
came with the old Core-i3, and it gets a little annoying when it's
time to compile chromium (or when flying planes/helicopters).


If you're talking about what I think you are, I'm building a new system 
and you can't hear the the thing. I did an emerge -e world, and never 
noticed it ...


Cheers,
Wol



Re: [gentoo-user] tips on running a mail server in a cheap vps provider run but not-so-trusty admins?

2020-08-20 Thread antlists

On 19/08/2020 16:19, Caveman Al Toraboran wrote:

‐‐‐ Original Message ‐‐‐
On Wednesday, August 19, 2020 7:10 PM, Grant Taylor 
 wrote:


Per protocol specification, SMTP is EXTREMELY robust.

It will retry delivery, nominally once an hour, for up to five (or
seven) days. That's 120-168 delivery attempts.

Further, SMTP implementations MUST (RFC sense of the word) deliver a
notification back to the sender if the implementation was unable to
deliver a message.


this queue re-transmission, and failure
notification, can be done with a small python
script.


Will that python script allow for the situation that the message is 
received, but the message was NOT safely stored for onwards transmission 
before the receiver crashed, and as such the message has not been 
SUCCESSFULLY received?


SMTP has lots of things specifically meant to ensure messages survive 
the internet jungle on their journey ...


Cheers,
Wol



Re: [gentoo-user] Getting past captchas with vision issues.

2020-09-29 Thread antlists

On 29/09/2020 09:02, Dale wrote:
If Seamonkey dies, I have no idea what I'm going to do for emails.  I 
got emails going back to when I first started using the internet.  Heck, 
I may have the first email I ever sent.  o_O  Whatever I use, I hope I 
can import or just copy them over.  Otherwise, I'm going to be unhappy.  
Makes me think back to the hal days.  O_O


iirc, Thunderbird (and I guess Seamonkey) use mbox format as their 
internal format. Pretty much all Unix mail clients use mbox, so 
salvaging your old email shouldn't be a problem. It's nasty things like 
Outlook where I've lost a large stash of old emails :-(
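mbox is just plain text: the messages are concatenated, each introduced by a "From " separator line, which is why almost anything can read it. A quick illustration (the addresses are made up):

```shell
# Build a tiny two-message mbox and count the messages in it.
mbox=$(mktemp)
cat > "$mbox" <<'EOF'
From wol@example.invalid Tue Sep 29 10:00:00 2020
Subject: first

hello
From dale@example.invalid Tue Sep 29 10:05:00 2020
Subject: second

world
EOF
count=$(grep -c '^From ' "$mbox")
echo "$count messages"
rm -f "$mbox"
```

That same `grep -c '^From '` trick is a handy way to count messages in a salvaged Thunderbird/Seamonkey folder file.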


The other thing is, I've got Courier-Imap on my system, so it doesn't 
mattter which client I use, so long as it supports imap. My emails 
aren't stored in Thunderbird ... (makes it nice, because that's on my 
main system, and all my laptops point to it so mail is always sync'd)


Cheers,
Wol



Re: [gentoo-user] Re: Is there any Gentoo User webinar? or something like that?

2020-09-30 Thread antlists

On 30/09/2020 15:57, Grant Edwards wrote:

If you wait many months between updates, things tend to get more
difficult, and you may have to futz with things to get updated
(e.g. remove a package or two, update, then reinstall the removed
packages). It's sometimes not obvious how to proceed.


I don't think my main system has been updated for several years ... (but 
that's because the old system that worked was decommissioned, and the 
main system was pretty much guaranteed to crash when emerging...). So 
I'm now building a new machine from scratch.


That said, if I haven't updated in a while I use one of two tricks ... 
(1) when "emerge world" falls over I just emerge everything explicitly 
that looks like it will update. After a couple of attempts, world often 
runs to completion.
(2) if stuff won't emerge (and it doesn't look like something critical) I 
just "emerge -C1". As soon as I've removed enough to fix the blockage, 
whatever I've deleted will just re-appear.


Cheers,
Wol



Re: [gentoo-user] Re: Gentoo RPi boot to ram or read-only FS?

2020-05-27 Thread antlists

On 27/05/2020 01:44, William Kenworthy wrote:

I have a few different pi's and similar Odroid arm systems running
Gentoo on sdcards - the failure rate is a real and constant problem (and
seems worse on pi's no matter what brand/type of sdcard so keep an up to
date spare+backups) and I am thinking of doing a disk-less NFS using the
a minimal sdcard image. Has advantages in centralised management and
using small cheap sdcards with possibly better performance.


Hmmm...

The trouble from my point of view is it seems micro-SDs are unreliable. 
I've never had a full-size SD card fail on me, but I've binned several 
of the micro version. But apart from big hefty DSLRs, not much takes the 
full-size cards any more ...


Cheers,
Wol



Re: [gentoo-user] Re: Gentoo RPi boot to ram or read-only FS?

2020-05-26 Thread antlists

On 26/05/2020 19:27, Neil Bothwick wrote:

On Tue, 26 May 2020 19:14:18 +0100, antlists wrote:


That's the Gentoo version that I'm using. But I'm looking for a way
to make it bullet-proof to having the plug pulled.



Don't use an SD card? Seriously, pulling the power on an SD card has
been known to corrupt it beyond recovery. BUT.



Mounting the card with sync will significantly reduce the likelihood of
corruption, at a cost of reduced life.


Well, compared to a dead card, a reduced life is a small price to pay :-)

I think you're talking about a corrupted filesystem, I'm talking about a 
corrupt/dead card ...


Cheers,
Wol



Re: [gentoo-user] Re: Gentoo RPi boot to ram or read-only FS?

2020-05-26 Thread antlists

On 26/05/2020 19:27, Neil Bothwick wrote:

This will mitigate the reduced life as you are hardly writing to the
card. Booting from a read-only / has caused problems for me in the past,
because of the inability to write to /etc.


Well, if we can get a loopback into the boot sequence before you write 
to /etc (why did it want to write to it?), then it won't realise that it 
can't. You just have to accept that all writes will get lost on power-down.


Cheers,
Wol



Re: [gentoo-user] Re: Gentoo RPi boot to ram or read-only FS?

2020-05-26 Thread antlists

On 26/05/2020 18:28, Frank Tarczynski wrote:
That's the Gentoo version that I'm using. But I'm looking for a way to 
make it bullet-proof to having the plug pulled.


Don't use an SD card? Seriously, pulling the power on an SD card has 
been known to corrupt it beyond recovery. BUT.


Is the big worry that the home directory will get corrupted etc etc? I 
don't know if you can partition an SD card, but look at doing a 
kiosk-style install with the OS protected and read-only. Then look at 
sticking a loopback device on top of home, so that any changes exist 
only in ram, and are lost on shutdown. Hopefully, that means you now 
have a system that can boot and run off a write-protected SD card :-)


Look at the raid wiki site

https://raid.wiki.kernel.org/index.php/Linux_Raid#When_Things_Go_Wrogn

and especially the stuff on recovering a damaged raid for info about how 
to set up loopback.
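The raid-wiki trick is, roughly, a copy-on-write overlay: a sparse file catches every write while the underlying device stays untouched. A sketch (device names are illustrative; the losetup/dmsetup steps need root, so they're only shown as comments):

```shell
# A sparse overlay file big enough to catch writes - allocates almost nothing.
overlay=$(mktemp)
truncate -s 16M "$overlay"
apparent=$(wc -c < "$overlay")
echo "overlay apparent size: $apparent bytes"

# The root-only steps, per the raid wiki's overlay recipe:
#   loop=$(losetup -f --show "$overlay")
#   size=$(blockdev --getsz /dev/mmcblk0)
#   dmsetup create safecard --table "0 $size snapshot /dev/mmcblk0 $loop P 8"
# All writes to /dev/mapper/safecard land in the overlay file and are
# thrown away at shutdown - the SD card itself is never written.
```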


Cheers,
Wol



Re: [gentoo-user] Seagate ST8000NM0065 PMR or SMR plus NAS SAS SATA question

2020-05-22 Thread antlists

On 22/05/2020 16:43, Rich Freeman wrote:

On Fri, May 22, 2020 at 11:32 AM Michael  wrote:


An interesting article mentioning WD Red NAS drives which may actually be SMRs
and how latency increases when cached writes need to be transferred into SMR
blocks.


Yeah, there is a lot of background on this stuff.

You should view a drive-managed SMR drive as basically a journaled
filesystem/database masquerading as a virtual drive.  One where the
keys/filenames are LBAs, and all the files are 512 bytes long.  :)

Really even most spinning drives are this way due to the 4k physical
sectors, but this is something much easier to deal with and handled by
the OS with aligned writes as much as possible.  SSDs have similar
issues but again the impact isn't nearly as bad and is more easily
managed by the OS with TRIM/etc.

A host-managed SMR drive operates much more like a physical drive, but
in this case the OS/application needs to be SMR-aware for performance
not to be absolutely terrible.

What puzzles me (or rather, it doesn't, it's just cost cutting), is why 
you need a *dedicated* cache zone anyway.


Stick a left-shift register between the LBA track and the hard drive, 
and by switching this on you write to tracks 2,4,6,8,10... and it's a 
CMR zone. Switch the register off and it's an SMR zone writing to all 
tracks.
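The shift-register idea in shell arithmetic, purely to illustrate the proposal above (not how any real firmware works):

```shell
# Map a logical track to a physical one. In "CMR mode" the left shift
# doubles the track number, so only even tracks get written and no
# shingled neighbour is clobbered; in SMR mode tracks are used as-is.
map_track() {  # $1 = logical track, $2 = cmr|smr
    if [ "$2" = cmr ]; then
        echo $(( $1 << 1 ))
    else
        echo "$1"
    fi
}
echo "logical 3 -> physical $(map_track 3 cmr) (CMR), $(map_track 3 smr) (SMR)"
```

The cost, of course, is that a zone in CMR mode holds half its SMR capacity.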


The other thing is, why can't you just stream writes to a SMR zone, 
especially if we try and localise writes so lets say all LBAs in Gig 1 
go to the same zone ... okay - if we run out of zones to re-shingle to, 
then the drive is going to grind to a halt, but it will be much less 
likely to crash into that barrier in the first place.


Even better, if we have two independent heads, we could presumably 
stream updates using one head, and re-shingle with the other. But that's 
more cost ...


Cheers,
Wol



Re: [gentoo-user] Seagate ST8000NM0065 PMR or SMR plus NAS SAS SATA question

2020-05-22 Thread antlists

On 22/05/2020 18:20, Rich Freeman wrote:

On Fri, May 22, 2020 at 12:47 PM antlists  wrote:


What puzzles me (or rather, it doesn't, it's just cost cutting), is why
you need a *dedicated* cache zone anyway.

Stick a left-shift register between the LBA track and the hard drive,
and by switching this on you write to tracks 2,4,6,8,10... and it's a
CMR zone. Switch the register off and it's an SMR zone writing to all
tracks.


Disclaimer: I'm not a filesystem/DB design expert.

Well, I'm sure the zones aren't just 2 tracks wide, but that is worked
around easily enough.  I don't see what this gets you though.  If
you're doing sequential writes you can do them anywhere as long as
you're doing them sequentially within any particular SMR zone.  If
you're overwriting data then it doesn't matter how you've mapped them
with a static mapping like this, you're still going to end up with
writes landing in the middle of an SMR zone.


Let's assume each shingled track overwrites half the previous write. 
Let's also assume a shingled zone is 2GB in size. My method converts 
that into a 1GB CMR zone, because we're only writing to every second track.


I don't know how these drives cache their writes before re-organising, 
but this means that ANY disk zone can be used as cache, rather than 
having a (too small?) dedicated zone...


So what you could do is allocate one zone of CMR to every four or five 
zones of SMR and just reshingle each SMR as the CMR filled up. The 
important point is that zones can switch from CMR cache to SMR filling 
up, to full SMR zones decaying as they are re-written.



The other thing is, why can't you just stream writes to a SMR zone,
especially if we try and localise writes so lets say all LBAs in Gig 1
go to the same zone ... okay - if we run out of zones to re-shingle to,
then the drive is going to grind to a halt, but it will be much less
likely to crash into that barrier in the first place.


I'm not 100% following you, but if you're suggesting remapping all
blocks so that all writes are always sequential, like some kind of
log-based filesystem, your biggest problem here is going to be
metadata.  Blocks logically are only 512 bytes, so there are a LOT of
them.  You can't just freely remap them all because then you're going
to end up with more metadata than data.

I'm sure they are doing something like that within the cache area,
which is fine for short bursts of writes, but at some point you need
to restructure that data so that blocks are contiguous or otherwise
following some kind of pattern so that you don't have to literally
remap every single block. 


Which is why I'd break it down to maybe 2GB zones. Each zone streams as 
it fills, but is then re-organised and re-written properly when time 
permits, so you've not got too large chunks of metadata. You need a btree 
to work out where each zone is stored, then each one has a btree to say 
where the blocks are stored. Oh - and these drives are probably 4K blocks 
only - most new drives are.



Now, they could still reside in different
locations, so maybe some sequential group of blocks are remapped, but
if you have a write to one block in the middle of a group you need to
still read/rewrite all those blocks somewhere.  Maybe you could use a
COW-like mechanism like zfs to reduce this somewhat, but you still
need to manage blocks in larger groups so that you don't have a ton of
metadata.


The problem with drives at the moment is they run out of CMR cache, so 
they have to rewrite all those blocks WHILE THE USER IS STILL WRITING. 
The point of my idea is that they can repurpose disk as SMR or CMR as 
required, so they don't run out of cache at the wrong time ...


Yes metadata may bloom under pressure, but give the drives a break and 
they can grab a new zone, do an SMR ordered stream, and shrink the metadata.


With host-managed SMR this is much less of a problem because the host
can use extents/etc to reduce the metadata, because the host already
needs to map all this stuff into larger structures like
files/records/etc.  The host is already trying to avoid having to
track individual blocks, so it is counterproductive to re-introduce
that problem at the block layer.

Really the simplest host-managed SMR solution is something like f2fs
or some other log-based filesystem that ensures all writes to the disk
are sequential.  Downside to flash-based filesystems is that they can
disregard fragmentation on flash, but you can't disregard that for an
SMR drive because random disk performance is terrible.


Which is why you have small(ish) zones so logically close writes are 
hopefully physically close as well ...



Even better, if we have two independent heads, we could presumably
stream updates using one head, and re-shingle with the other. But that's
more cost ...


Well, sure, or if you're doing things host-managed then you stick the
journal on an SSD and then do the writes to the SMR drive
opportunistically.  You're basically describing

Re: [gentoo-user] Seagate ST8000NM0065 PMR or SMR plus NAS SAS SATA question

2020-05-22 Thread antlists

On 22/05/2020 19:23, Rich Freeman wrote:

A big problem with drive-managed SMR is that it basically has to
assume the OS is dumb, which means most writes are in-place with no
trims, assuming the drive even supports trim.


I think the problem with the current WD Reds is, in part, that the ATA-4 
spec is required to support trim, but the ATA-3 spec is the current 
version. Whoops ...


So yes, even if the drive does support trim, it has no way of telling 
the OS that it does ...
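For anyone who wants to see what a given drive actually reports, something like this shows it (the device name is just an example; all-zero DISC-GRAN/DISC-MAX columns mean the kernel detected no discard support):

```shell
# Ask the kernel what discard (TRIM) support it sees for each block device.
lsblk --discard

# Or query the drive's ATA identify data directly (device name is an example).
hdparm -I /dev/sda | grep -i trim
```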


Cheers,
Wol



Re: [gentoo-user] Local mail server

2020-07-19 Thread antlists

On 19/07/2020 15:18, Peter Humphrey wrote:

So I'm asking what systems other people use. I can't be unusual in what I
want, so there must be lots of solutions out there somewhere. Would anyone
like to offer me some advice?


Doing my best to remember my setup ...

Running postfix as my mail server. I never managed to get it working to 
SEND email, so clients had to be configured to send straight to my ISP. 
Don't send to google - it rewrites the headers ...


Used fetchmail to download, until an upgrade/fix/something broke MySQL 
and took all my virtual email addresses down with it.


Use Courier-IMAP to provide access from clients to the mail store.

I *think* that's all, but I dunno how long my system has been running 
(it hasn't even been updated for a couple of years :-( and apart from 
that MySQL problem it's been running untouched pretty much from day 1.


Cheers,
Wol



Re: [gentoo-user] Re: Local mail server

2020-08-01 Thread antlists

On 01/08/2020 19:48, Grant Taylor wrote:

On 7/31/20 2:05 PM, Grant Edwards wrote:
Nit: DHCPv6 can be (and usually is) dynamic, but it doesn't have to 
be. It's entirely possible to have a static IP address that your OS 
(or firewall/router) acquires via DHCPv6 (or v4).  [I set up stuff 
like that all the time.]


Counter Nit:  That's still acquiring an address via /Dynamic/ Host 
Configuration Protocol (v6).  It /is/ a /dynamic/ process.


Static IP address has some very specific meaning when it comes to 
configuring TCP/IP stacks.  Specifically that you enter the address to 
be used, and it doesn't change until someone changes it in the 
configuration.


Either an IP address is statically entered -or- it's dynamic.

The fact that it's returning the same, possibly predictable, address is 
independent of the fact that it's a /dynamic/ process.


Counter counter nit: You may be *acquiring* it dynamically, but you can 
enter the address to be used into DHCP, and then it doesn't change until 
someone changes it in the configuration.


That was my IPv4 in the Demon days - DHCP was *guaranteed* to *always* 
return the same address. So either I retrieved it via DHCP from Demon, 
or I hard coded it into my computer, it didn't matter.
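That kind of guaranteed-same-address DHCP is just a server-side reservation. As a hedged illustration (hostname, MAC and IP are made-up placeholders), an ISC dhcpd configuration fragment for it looks roughly like:

```text
# ISC dhcpd fragment: a "static" address handed out by the dynamic protocol.
# The hostname, MAC address and IP below are invented examples.
host wol-desktop {
    hardware ethernet 00:11:22:33:44:55;
    fixed-address 192.168.1.10;
}
```

The client still does the full DHCP dance; it just always gets the same answer.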


Cheers,
Wol



Re: [gentoo-user] Re: Local mail server

2020-08-01 Thread antlists

On 01/08/2020 19:52, Grant Taylor wrote:

On 7/31/20 2:01 PM, Grant Edwards wrote:
There may be half way decent ISPs in the US, but I haven't seen one in 
over 20 years since the last one I was aware of stopped dealing with 
residential customers.  They were a victim of the "race to the bottom" 
when not enough residential customers were willing to pay $10 per 
month over what Comcast or US-West was charging for half-assed, 
crippled internet access.


I think there is probably a good correlation between size and desire to 
be good and provide service.


I've found that smaller ISPs (who actually try as opposed to cheating 
people) tend to be better.  Sadly, many of these Mom & Pop type ISPs 
were consumed during the aptly described race to the bottom.


:-(

I still do consulting work with a small M ISP in my home town and I 
have a small municipal ISP where I am now.  Both are quite good in many 
regards.  Unfortunately, neither of them offer IPv6.


That's one of the good things about the UK scene. In theory, and mostly 
in practice, the infrastructure (ie copper, fibre) is provided by a 
company which is not allowed to provide the service over it, so a 
mom-n-pop ISP can supposedly rent the link just as easily as a big ISP.


When we move I'll almost certainly move to Andrews and Arnold, who are 
exactly that mom-n-pop setup that are run by a bunch of engineers, as 
opposed to accountants.


Cheers,
Wol



Re: [gentoo-user] hplip network scanning port

2020-08-04 Thread antlists

On 01/08/2020 03:03, Adam Carter wrote:
I used to be able to scan on my gentoo box from an HP officejet pro on 
the network. This is now failing and i can see that the gentoo box is 
attempting to connect to TCP/6566 on the HP, but the HP is not listening 
on that port.


Test command is;
hp-scan -dhpaio:/net/HP_Officejet_Pro_8620?ip=

Is 6566 scan attempt using the correct port?

Have you accidentally closed the port on the scanner (not sure whether 
that's possible, but ...)


I use "scan to network" from the printer, but that may not be possible 
on yours. Just open a samba port and the scanner saves the file there. 
But that isn't foolproof either - when I changed scanner (from Dell to 
HP) one user stopped working ... ??


Cheers,
Wol



Re: [gentoo-user] Local mail server

2020-07-30 Thread antlists

On 30/07/2020 00:23, james wrote:

Very, Very interested in this thread.

Another question. If you have (2) blocks of IPv6 addresses,
can you use BGP4 (RFC 1771, 4271, 4632, 5678, 5936, 6198 etc.) and other 
RFC-based standards to manage routing and such multipath needs? Who 
enforces what carriers do with networking? Here in the US, I'm pretty 
sure it's just up to the 
Carrier/ISP/bypass_Carrier/backhaul-transport company.


Conglomerates with IP resources, pretty much do what they want, and they 
are killing the standards based networking. If I'm incorrect, please 
educate me, as I have not kept up in this space, since selling my ISP 
more than (2) decades ago. The trump-china disputes are only 
accelerating open standards for communications systems, including all 
things TCP/IP.


From what little I understand, IPv6 *enforces* CIDR. So, of the 64 
network bits, maybe the first 16 bits are allocated to each high-level 
allocator, e.g. RIPE, ARIN etc. An ISP will then be allocated the next 16 
bits, giving them a 32-bit address space to allocate to their customers 
- each ISP will have an address space the size of IPv4?!


Each customer is then given one of these 64-bit address spaces for their 
local network. So routing tables suddenly become extremely simple - 
exactly the way IPv4 was intended to be.
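That hierarchical carve-up is easy to play with in Python's ipaddress module (the prefixes below are made up purely for illustration; real allocations vary by registry and ISP):

```python
# Sketch of the hierarchical allocation described above, with invented
# prefixes: a registry block is carved into /32s for ISPs, and each ISP
# carves its /32 into /64 customer networks.
import ipaddress

rir_block = ipaddress.ip_network("2a00::/16")   # hypothetical registry block
isp = next(rir_block.subnets(new_prefix=32))    # one ISP allocation
customer = next(isp.subnets(new_prefix=64))     # one customer network

print(isp)               # 2a00::/32
print(customer)          # 2a00::/64
print(2 ** (64 - 32))    # 4294967296 customer /64s per ISP - an IPv4-sized space
```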


This may then mean that dynDNS is part of (needs to be) the IPv6 spec, 
because every time a client roams between networks, its IPv6 address HAS 
to change.


I need to research more :-)

Cheers,
Wol



Re: [gentoo-user] can't mount raid0

2020-08-12 Thread antlists

On 12/08/2020 20:28, Никита Степанов wrote:

livecd gentoo # mount /dev/md1 /mnt/gentoo
mount: unknown filesystem type 'linux_raid_member'
what to do?


cat /proc/mdstat ?

Cheers,
Wol



Re: [gentoo-user] which filesystem is best for raid 0?

2020-08-12 Thread antlists

On 12/08/2020 18:53, Никита Степанов wrote:

which filesystem is best for raid 0?


DON'T.

https://raid.wiki.kernel.org/index.php/Linux_Raid

If you're thinking about raid 0, I'll suggest using btrfs instead. Just 
don't forget that, by default, btrfs mirrors the metadata (I think that 
means the directories), but does not mirror the data. Losing a disk 
means losing all the files that are on it.


What further thoughts do you have? WHY do you want a raid 0? It's not 
recommended, precisely because losing a drive means a massively 
increased risk of losing everything.


Cheers,
Wol



Re: [gentoo-user] Memory cards and deleting files.

2020-06-22 Thread antlists

On 22/06/2020 19:50, Dale wrote:
Anyway, the 8GB cards have been plenty large enough so it could be any 
number of reasons they say the limit is 32GB.  It could be they know it 
will run out of file names.  Most pics are named with four digit 
numbers. So, 9,999 files and it either stops taking pics or starts 
overwriting the files.  I haven't done the math.


The OBVIOUS reason is that the SDHC spec supports a maximum of 32GB. 
Stick a larger card in and you're likely to have problems when the card 
starts filling up.


I just thought of something.  The video camera doesn't have a format 
function.  I use a old camera that is broken to format it.  It doesn't 
take pics anymore but the display works and it formats fine.  Now I can 
format them all without having to squint.  ROFL


Or use mkfs.vfat to reformat it under linux :-)

Cheers,
Wol



Re: [gentoo-user] Memory cards and deleting files.

2020-06-22 Thread antlists

On 22/06/2020 14:19, Dale wrote:
So if I bought a 64GB card, I forced it to be formatted with FAT on 
say my Linux box here, it would work in my trail cameras anyway?  It 
makes sense.  It would seem it is more of a file system issue since 
accessing a device shouldn't be affected by its capacity, well, maybe 
some exceptions.  My trail camera may only support FAT which is only 
found on 32GB and smaller.  I can get that. 


Does it say it only supports 32GB cards or smaller? If it does, it 
probably doesn't have an SDXC slot, so you'll have interface issues. My 
TV supports up to 2TB, but only FAT, so I just have to reformat anything 
over 32GB ...


Cheers,
Wol




Re: [gentoo-user] Memory cards and deleting files.

2020-06-22 Thread antlists

On 22/06/2020 11:56, Walter Dnes wrote:

On Mon, Jun 22, 2020 at 11:28:17AM +0100, Neil Bothwick wrote


The SD standard says >32G should use exFAT, this is why many devices
state they only support cards up to 32G. They really mean they only
support FAT. My Dashcam is like this but it happily works with a 128G
card, once I reformatted it with FAT.

Warning; that still does not change the fact that each individual file
cannot exceed 4G in size on regular FAT.

Warning 2: I did exactly that, and it LOOKED like it was working 
happily, until it overflowed some internal limit and my 1G card turned 
into a 128M card or whatever it was. Have you actually TESTED that card 
IN THE DASHCAM and made sure it can actually use that 128G? Or will it 
only be able to use 4G of that card?


Oh - and 32GB cards are physically different from 128GB cards because 
they work to different standards. That's why so many of the old "real 
SD" card devices only ever use up to 2GB. You CAN (or could) get 4GB SD 
cards, but they were rare, so most people couldn't find them. The SD 
standard was replaced by SDHC, which is why your file format changes at 
32GB, which is the maximum capacity of an SDHC card. Above 32GB it's 
SDXC, which is another reason why sticking a larger card into a device 
which says "up to 32GB" is a bad idea - it may not be able to handle SDXC.


Cheers,
Wol




Re: [gentoo-user] Memory cards and deleting files.

2020-06-22 Thread antlists

On 22/06/2020 20:22, Neil Bothwick wrote:

Warning 2: I did exactly that, and it LOOKED like it was working
happily, until it overflowed some internal limit and my 1G card turned
into a 128M card or whatever it was. Have you actually TESTED that card
IN THE DASHCAM and made sure it can actually use that 128G? Or will it
only be able to use 4G of that card?



There are a lot of SD cards with fake capacities, they appear to be large
but are actually a smaller card reprogrammed to do so. You only find out
when you go beyond the true capacity. There are tools to check this, such
as sys-block/f3.


Did you look at the card sizes I quoted? :-) I think the reason I tried 
a 1GB card was because the camera said it took a max of 512MB. It didn't 
work ... (Oh and the card was good. It was small by the standards of the 
day.)


Cheers,
Wol



Re: [gentoo-user] Memory cards and deleting files.

2020-06-23 Thread antlists

On 22/06/2020 21:42, Neil Bothwick wrote:

On Mon, 22 Jun 2020 20:40:49 +0100, antlists wrote:


Warning 2: I did exactly that, and it LOOKED like it was working
happily, until it overflowed some internal limit and my 1G card
turned into a 128M card or whatever it was. Have you actually TESTED
that card IN THE DASHCAM and made sure it can actually use that
128G? Or will it only be able to use 4G of that card?



There are a lot of SD cards with fake capacities, they appear to be
large but are actually a smaller card reprogrammed to do so. You only
find out when you go beyond the true capacity. There are tools to
check this, such as sys-block/f3.


Did you look at the card sizes I quoted? :-) I think the reason I tried
a 1GB card was because the camera said it took a max of 512MB. It
didn't work ... (Oh and the card was good. It was small by the
standards of the day.)


So that was SDHC vs SD? Or was it even that? 2GB was the original SD
limit IIRC correctly,


You don't iirc correctly :-) but it was the limit for cards that were 
manufactured, for the most part. By the time 4GB cards became common, 
the SDHC spec was out, and most (all?) 4GB cards were SDHC. Right pain 
if your device had an SD reader, because they couldn't read SDHC cards 
:-( You could stick an old SD card in a new SDHC reader, but not the 
other way round.


That's why nearly all cards nowadays are 32GB min - that's the smallest 
SDXC size.



so you weren't trying to use the wrong spec. Was it
just a case of a faulty card, either through accident or design.


It wasn't - I can't remember what it was, but it was some Olympus format 
that - iirc - only fitted Olympus cameras. And it was something along 
the lines of the smallest cards available were larger than the max 
capacity of the camera I wanted it for (that camera might actually still 
be in use ... :-)


However, those cards were more expensive than current huge cards, so the
temptation to sell fakes would have been even greater. Many years ago I
got burned like that with a large (for the time) capacity USB stick
bought on Ebay.


Cheers,
Wol



Re: [gentoo-user] "masked by: EAPI 7" trying up update "portage" - how to proceed

2020-06-14 Thread antlists

On 14/06/2020 08:01, n952162 wrote:

I think the problem is, vbox's NAT interface acts as a router, but only
uni-directionally.  That means, it will establish a "connection" for
VM-initiated sessions, but there's no mechanism for establishing a
session for external-initiated sessions.

Somebody, please correct me if that is wrong or incomplete.


I think that's true of any NAT interface.

Can't remember what it's called (bridge, maybe?), but that asks your 
local LAN DHCP for an IP address, so you could ssh in from the local 
network. It *should* be as simple as going into VirtualBox and changing 
the setting.


Cheers,
Wol



Re: [gentoo-user] MS-Teams on XFCE4/Gentoo

2020-06-20 Thread antlists

On 18/06/2020 12:03, Dr Rainer Woitok wrote:

Neil,

On Thursday, 2020-06-18 09:32:35 +0100, you wrote:


...
Am I getting old or do others also wish they could just get a text howto
instead of watching a video every time they want to do something new?


Don't know anything about your age  ...  but you should know  you're not
alone ... :-)


I've NEVER been a visual person.

Brought up in a house with books and no TV, and given the choice I would 
still live in one.


What was that sig? - "The sole purpose of X is to enable you to have 
multiple xterms open at once."


Cheers,
Wol



Re: [gentoo-user] Testing a used hard drive to make SURE it is good.

2020-06-16 Thread antlists

On 16/06/2020 12:26, Dale wrote:
I've also read about the resilvering problems too.  I think LVM 
snapshots and something about BTFS(sp?) has problems.  I've also read 
that on windoze, it can cause a system to freeze while it is trying to 
rewrite the moved data too.  It gets so slow, it actually makes the OS 
not respond.  I suspect it could happen on Linux too if the conditions 
are right.



Being all technical, what seems to be happening is ...

Random writes fill up the PMR cache. The drive starts flushing the cache, 
but unfortunately you need a doubly linked list or something - you need 
to be able to find the physical block from the logical address (for 
reading) and to find the logical block from the physical address (for 
cache-flushing). So once the cache fills, the drive needs "down time" to 
move stuff around, and it stops responding to the bus. There are reports 
of disk stalls of 10 minutes or more - bear in mind desktop drives are 
classed as unsuitable for raid because they stall for *up* *to* *two* 
minutes ...
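That two-way bookkeeping can be sketched in a few lines (pure illustration - the real firmware structures are unknown; this just shows why both directions have to be kept in sync on every write):

```python
# The two mappings described above: reads need logical->physical, while
# flushing the cache in physical (shingle-friendly) order needs
# physical->logical. Every cache write must update both.
log_to_phys = {}   # logical block -> cache slot (for reads)
phys_to_log = {}   # cache slot -> logical block (for cache flushing)

def cache_write(lba: int, slot: int) -> None:
    # Evict whatever logical block previously occupied this slot,
    # keeping the two tables consistent.
    old = phys_to_log.pop(slot, None)
    if old is not None:
        log_to_phys.pop(old, None)
    log_to_phys[lba] = slot
    phys_to_log[slot] = lba

cache_write(100, 0)
cache_write(200, 0)           # slot reused: lba 100 must be dropped first
print(100 in log_to_phys)     # False
print(phys_to_log[0])         # 200
```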


I guess this is about saving money for the drive makers.  The part that 
seems to really get under people's skin though, them putting those drives 
out there without telling people that they made changes that affect 
performance.  It's bad enough for people who use them where they work 
well but the people that use RAID and such, it seems to bring them to 
their knees at times.  I can't count the number of times I've read that 
people support a class action lawsuit over shipping SMR without telling 
anyone.  It could happen and I'm not sure it shouldn't.  People using 
RAID and such, especially in some systems, they need performance not 
drives that beat themselves to death.


Most manufacturers haven't been open, but at least - apart from WD - 
they haven't been stupid either. Bear in mind WD actively market their 
Red drives as suitable for NAS or Raid, putting SMR in there was 
absolutely dumb. Certainly in the UK, as soon as news starts getting 
round, they'll probably find themselves (or rather their retailers will 
get shafted with) loads of returns as "unfit for purpose". And, 
basically, they have a legal liability with no leg to stand on because 
if a product doesn't do what it's advertised for, then the customer is 
*entitled* to a refund.


Dunno why, I've never been a WD fan, so I dodged that bullet. I just 
caught another one, because I regularly advise people they shouldn't be 
running Barracudas, while running two myself ... :-)


Cheers,
Wol



Re: [gentoo-user] Testing a used hard drive to make SURE it is good.

2020-06-16 Thread antlists

On 16/06/2020 13:25, Rich Freeman wrote:

And of course the problem with these latest hidden SMR drives is that
they generally don't support TRIM,


This, I believe, is a problem with the ATA spec. I don't understand 
what's going on, but something like for these drives you need v4 of the 
spec, and only v3 is finalised. Various people have pointed out holes in 
this theory, so you don't need to add to them :-) But yes, I do 
understand that apparently there is no official standard way to send a 
trim to these drives ...



so even repeated sequential writes
can be a problem because the drive doesn't realize that after you send
block 1 you're going to send blocks 2-100k all sequentially.  If it
knew that then it would just start overwriting in place obliterating
later tracks, since they're just going to be written next anyway.


No it can't do that. Because when it overwrites the end of the file it 
will be obliterating other random files that aren't going to be 
overwritten ...



Instead this drive is going to cache every write until it can
consolidate them, which isn't terrible but it still turns every seek
into three (write buffer, read buffer, write permanent - plus updating
metadata). 


Which IS terrible if you don't give the drive down-time to flush the 
buffer ...



If they weren't being sneaky they could have made it
drive-managed WITH TRIM so that it worked more like an SSD where you
get the best performance if the OS uses TRIM, but it can fall back if
you don't.  Sequential writes on trimmed areas for SMR should perform
identically to writes on CMR drives.


You're forgetting one thing - rewriting a block on SSD or CMR doesn't 
obliterate neighbouring blocks ... with SMR for every track you rewrite 
you have to salvage the neighbouring track too ...


Cheers,
Wol



Re: [gentoo-user] Encrypting a hard drive's data. Best method.

2020-06-07 Thread antlists

On 07/06/2020 10:07, antlists wrote:

I think it was LWN, there was an interesting article on crypto recently.


https://lwn.net/Articles/821544/

Cheers,
Wol



Re: [gentoo-user] Encrypting a hard drive's data. Best method.

2020-06-06 Thread antlists

On 06/06/2020 08:49, Dale wrote:
First drive seems to have died.  Got part way copying files and things 
got interesting.  When checking smartctrl, it even puked on my 
keyboard.  Drive only had a few hundred hours on it so maybe the drive 
was iffy from the start or that enclosure did damage somehow. Either 
way, drive two being tested.  Running smartctrl test first and then 
restart from scratch and fill it up with files or something.


Take it out the enclosure and it might be fine. I regularly have drives 
"die" in an enclosure and then work fine when I take them out.


That's why I bought an open bay - it's eSATA and the only bit of the 
drive that is enclosed is the connectors. Keeps the drive from cooking ...


Oh - the other thing - if it's PMR and you're copying files onto it, 
expect a puke! That thing on WD Reds going PMR, I copied most of that on 
to the linux raid mailing list and the general feeling I get is "PMR is 
bad".


Cheers,
Wol



Re: [gentoo-user] Encrypting a hard drive's data. Best method.

2020-06-06 Thread antlists

On 06/06/2020 11:32, Michael wrote:

Of particular interest to me is recovery of encrypted files/partitions, using
a different installation than the original.  Having to keep a copy of the
original installation kernel keys for ext4 with any data backups and
additionally remembering to refresh them every time a new kernel is installed,
adds to the user-un-friendliness of an encryption method.


Just to throw a BIG monkey-wrench into the picture, be careful if you 
install or upgrade any operating system ...


One of the problems that crops up every now and then on the raid mailing 
list is "intelligent" utilities writing an MBR or GPT without asking...


And the latest one was an upgrade to debian. Something seemed to have 
written a GPT to /dev/md0 which obviously didn't do the array much good 
... it always used to be just writing to a hard drive like /dev/sdX and 
now it seems to be writing to other block devices as well :-(


Cheers,
Wol



Re: [gentoo-user] Hard drive screws

2020-06-06 Thread antlists

On 06/06/2020 09:23, Michael wrote:

Yes, getting the thread wrong and damaging the female thread in the enclosure,
while thinking this/almost/  fits, is not good for your nerves.  There are
thread gauges which you can match the pitch of a screw/bolt and help determine
the thread specification, but they are typically used for larger screws/holes:


I've never had any trouble BUT ... there are two different incompatible 
threads. Drive screws don't fit the case, and case screws don't fit the 
drive, despite them looking pretty similar ...


Cheers,

Wol



Re: [gentoo-user] Encrypting a hard drive's data. Best method.

2020-06-06 Thread antlists

On 06/06/2020 14:57, antlists wrote:
Oh - the other thing - if it's PMR and you're copying files onto it, 
expect a puke! That thing on WD Reds going PMR, I copied most of that on 
to the linux raid mailing list and the general feeling I get is "PMR is 
bad".

Whoops have I got my PMR and SMR mixed up ... ?

Cheers,
Wol



Re: [gentoo-user] Hard drive screws

2020-06-07 Thread antlists

On 07/06/2020 10:50, J. Roeleveld wrote:

On 7 June 2020 09:41:16 CEST, antlists  wrote:

On 06/06/2020 20:14, J. Roeleveld wrote:

One of my old cases had plastic strips with little sticks on them

that would fit into the screwholes. Those strips would then slot into
the mounting points for the disks.


No messing around with screws and really easy to swap drives. They

would be perfectly mounted as well.


Too bad I don't see the same with most other cases.


I remember that. Compaqs with 75 MEGA Hz cpu's iirc.

Cheers,
Wol


Not just Compaq. I think mine was a coolermaster case at the time.

Toolless hotswap is a useful feature when regularly swapping drives.

These weren't hotswap (just ordinary IDE), but it's a damn sight easier 
putting the rails on a drive on a desk, rather than putting the screws 
in a drive in a case :-)


Cheers,
Wol



Re: [gentoo-user] new genkernel problem

2020-06-11 Thread antlists

On 06/06/2020 23:11, Neil Bothwick wrote:

You don't boot from an encrypted drive (yet) or use unusual hardware,
that's what I meant by a plain system. Dracut handles booting from a
btrfs root on a LUKS encrypted block device here with no fancy
configuration. It really is impressive the way it figures so much out for
itself.


All I need is for it to figure out dm-integrity, and it'll boot my setup 
fine ... hard-drive -> dm-integrity -> md-raid -> lvm -> filesystem


I'm trying to get round the fact that a damaged disk will mess up raid, 
so I want to add just that little bit more robustness :-)


Cheers,
Wol




Re: [gentoo-user] Encrypting a hard drive's data. Best method.

2020-06-07 Thread antlists

On 06/06/2020 21:12, Rich Freeman wrote:

  To do this I'm just going to store my
keys on the root filesystem so that the systems can be booted without
interaction.  Obviously if somebody compromises the files with the
keys they can decrypt my drives, but this means that I just have to
protect a couple of SD cards which contain my root filesystems,
instead of worrying about each individual hard drive.  The drives
themselves end up being much more secure, because the password used to
protect each drive is random and long - brute-forcing the password
will be no easier than brute-forcing AES itself.  This doesn't protect
me at all if somebody breaks into my house and steals everything.


On the other hand, if you're always present at boot, stick the keys on a 
USB that has to be in the laptop when it starts. If that's on your 
(physical) keyring, chances are it won't be compromised at the same time 
as the laptop - and hopefully the attacker won't realise it's needed for 
boot :-)


(yes I know - security through obscurity is bad as your MAIN defence, 
but a few layers on top of something secure just makes life more of a 
pain for an attacker :-)
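As a hedged illustration of the USB-keyfile arrangement (every name, UUID and path below is a made-up placeholder), the crypttab entry would look something like:

```text
# /etc/crypttab fragment - a LUKS volume unlocked from a keyfile kept on a
# USB stick mounted early at boot. All names here are invented examples.
# <target>   <source device>          <key file>              <options>
cryptroot    UUID=0000-example-uuid   /media/usbkey/root.key  luks
```

Pull the stick once the system is up and the key never sits on the machine itself.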


Cheers,
Wol



Re: [gentoo-user] Hard drive screws

2020-06-07 Thread antlists

On 06/06/2020 20:14, J. Roeleveld wrote:

One of my old cases had plastic strips with little sticks on them that would 
fit into the screwholes. Those strips would then slot into the mounting points 
for the disks.

No messing around with screws and really easy to swap drives. They would be 
perfectly mounted as well.

Too bad I don't see the same with most other cases.


I remember that. Compaqs with 75 MEGA Hz cpu's iirc.

Cheers,
Wol



Re: [gentoo-user] Encrypting a hard drive's data. Best method.

2020-06-07 Thread antlists

On 07/06/2020 09:08, Dale wrote:
I notice that one can use different encryption tools.  I have Blowfish, 
Twofish, AES and sha*** as well as many others.  I know some have been 
compromised.  Which ones are known to be secure?  I seem to recall that 
after Snowden some had to be redone and some new ones popped up to make 
sure they were secure.  Thoughts??


Some had to be redone ... Elliptic Curve Cryptography, or whatever it's 
called. The basic maths is secure, but the NSA got a standard released 
(you have to pick a set of constants) where the constants had been 
nobbled. DJB has released a different set of constants (Curve25519) which 
is thought to be secure.


I think it was LWN, there was an interesting article on crypto recently.

Cheers,
Wol



Re: [gentoo-user] Re: Local mail server

2020-07-29 Thread antlists

On 29/07/2020 16:41, Peter Humphrey wrote:

On Wednesday, 29 July 2020 13:59:11 BST Grant Edwards wrote:


Pricing isn't based on cost.  Pricing is based on what people are
willing to pay.  People are willing to pay extra for a static IPv6
address, therefore static IPv6 addresses cost extra.


Aren't all IPv6 addresses static? Mine certainly are.


I think there's static, and there's effectively static.

If your router is running 24/7, then the IP won't change even if it's 
DHCP. But your router only needs to be switched off or otherwise off the 
network for the TTL (time to live), and DHCP will assign you a different 
IP when it comes back.


That's server-side configuration, so if the ISP doesn't explicitly 
allocate you an address in their DHCP setup, what you've got is 
effectively static not really static.


But it really should be so damn simple - take the ISP's network address, 
add the last three octets of the customer's router or something like 
that, and there's the customer's network v6 assigned to the customer's 
router. One fixed address that won't change unless the customer changes 
router or ISP.


I need to learn how v6 works ... :-)

Cheers,
Wol



Re: [gentoo-user] Local mail server

2020-07-30 Thread antlists

On 30/07/2020 12:13, Remco Rijnders wrote:

An IPv6 address is 128 bits in length. Usually an ISP allocates 64
bits to a single customer, allowing the systems on/behind that
connection to automatically assign themselves an address based on
their MAC address for example. Note that also allocations bigger than
64 bits are common so customers get 70 or 76 bits to use and can use
multiple subnets on their home/business networks.


I don't think an ISP is supposed to allocate less ...

As I understood it, the first 64 bits are the "network address", ie 
sort-of assigned to the edge router, and the remaining 64 bits are 
assigned by the network operator.


So in your scenario of customers getting more bits, they are effectively 
being assigned 2^6 or 2^12 network addresses. Exactly the scenario 
planned for high-level ISPs parcelling out address space to low-level ISPs.


And looking at the wikipedia page, it looks like the ISP *must* allocate 
at least a /64, because the spec says each device picks its own 
least-significant 64 bits, using duplicate address detection to catch 
collisions. Which is why many simplistic algorithms include the MAC 
address to (try to) guarantee a unique address on the first attempt.
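The MAC-based scheme is the modified EUI-64 one: split the 48-bit MAC, insert ff:fe in the middle, and flip the universal/local bit of the first octet to get the low 64 bits of the address. A sketch with the stdlib `ipaddress` module (example prefix and MAC are made up; modern systems tend to prefer random or stable-privacy identifiers instead):

```python
import ipaddress

def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Build a SLAAC-style address from a /64 prefix and a MAC,
    using the modified EUI-64 interface identifier."""
    b = bytes(int(x, 16) for x in mac.split(":"))
    # Flip the universal/local bit of the first octet and insert ff:fe
    # between the OUI and the device part of the MAC.
    iid = bytes([b[0] ^ 0x02]) + b[1:3] + b"\xff\xfe" + b[3:6]
    net = ipaddress.IPv6Network(prefix)
    return net[int.from_bytes(iid, "big")]

print(slaac_address("2001:db8:1:2::/64", "00:11:22:33:44:55"))
# -> 2001:db8:1:2:211:22ff:fe33:4455
```

Since the interface identifier always occupies the bottom 64 bits, anything shorter than a /64 prefix leaves it no room, which is the point being made above.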


Cheers,
Wol



Re: [gentoo-user] Local mail server

2020-07-30 Thread antlists

On 30/07/2020 14:28, Remco Rijnders wrote:
On Thu, Jul 30, 2020 at 01:48:05PM +0100, antlists wrote in 
:

I don't think an ISP is supposed to allocate less ...


I think your original message was open for multiple interpretations,
or at least I read it as you saying there are 32 bit addresses the ISP
allocates from. I now see the alternate one and the one you probably
intended that there is 32 bits worth of /64's to hand out to
customers. I'm sorry for misunderstanding at first.

Yes, a minimum of /64 is what is recommended (and needed to make
stateless auto configuration work on the customers end). Whether the
/64 you get allocated is dynamic or static, can still depend on the
ISP's practises and business model.

No problem. Many people aren't native English speakers (and I can get a 
little bit hot under the collar when Americans claim to speak English 
:-) so I have no problem with mis-understandings.


Besides English I speak three other languages ranging from "get by" to 
"struggling", so I well understand all the problems caused by implicit 
nuances, differences in grammar, different mind-sets etc :-)


Cheers,
Wol



Re: [gentoo-user] Local mail server

2020-07-20 Thread antlists

On 20/07/2020 15:55, Peter Humphrey wrote:

fatal: in parameter smtpd_relay_restrictions or smtpd_recipient_restrictions,
specify at least one working instance of: reject_unauth_destination,
defer_unauth_destination, reject, defer, defer_if_permit or
check_relay_domains

Which of those restrictions do I specify, and where, and why aren't they set
by default?


I'm guessing that's because it needs to know what to do with an email ...

The language is odd, but I suspect it's saying "do I relay this message 
and if so how, or do I deliver it and if so how do I know where and to 
whom?"


None of these can be known by default...
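As a sketch only (the restriction names come straight from the error message, but check your own setup before copying), a common minimal answer in main.cf looks like:

```
# main.cf sketch: relay for our own networks and authenticated users,
# and refuse to relay to destinations this host isn't responsible for.
smtpd_relay_restrictions =
    permit_mynetworks,
    permit_sasl_authenticated,
    reject_unauth_destination
```

reject_unauth_destination is the one that stops you being an open relay, which is presumably why postfix refuses to start without at least one of the listed restrictions.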

Cheers,
Wol



Re: [gentoo-user] Re: Your opinion on jpeg encoders, please

2021-01-11 Thread antlists

On 11/01/2021 19:26, Frank Steinmetzger wrote:

I don’t really live the RAW way. They take up sooo much space and my
camera’s OOC jpegs always look far nicer than anything I can produce with
darktable/rawtherapee.


Okay, dunno about your Olympus stuff, but with my Nikon cameras the 
Nikon software is supposed to "load raw, save as jpeg", and it should 
come out identical to the jpeg that came off the camera.


I do know, though, that some software (quite possibly that Nikon stuff) 
won't run under wine, blowing up with "unknown version of Windows" or 
something like that. So you might have the same problem ...


Cheers,
Wol



Re: [gentoo-user] Wayland side-effect?

2020-12-27 Thread antlists

On 27/12/2020 18:51, Michael wrote:

Restarting the desktop using Xorg does NOT fix this problem.  Otherwise, both
Plasma on Wayland and Xorg work fine - except the clipboard does not work on
Wayland (middle click won't paste selected text on another window).


This sounds to me like a side-effect of Wayland security. I don't really 
know anything about it, but I get the impression that talking between 
windows is an unsolved security problem.


Of course, it could be it's a solved problem, but you've hit a glitch...

Cheers,
Wol



Re: [gentoo-user] Re: Your opinion on jpeg encoders, please

2021-01-07 Thread antlists

On 07/01/2021 02:22, Nikos Chantziaras wrote:

On 04/01/2021 23:37, Frank Steinmetzger wrote:
However I noticed that the latter produces larger files for the same
quality setting. So currently, I first save with a very high setting
from Showfoto and then recompress the whole directory in a
one-line-loop using imagemagick’s convert.


You lose some extra quality when doing this due to recompression. What 
you should do is save in a lossless format (like png or bmp) and then 
convert that to jpg.


If you're doing that (which I recommend), I set my camera to "raw + 
jpeg", and then dump the raw files to DVD. That way, it doesn't matter 
what happens to the jpegs as you can always re-create them.


If you've only got jpegs (why are you using a rubbish camera :-) then 
just dump the original jpegs to DVD - that's why what Google do is so 
bad - they compress it to upload it from your Android phone, and then 
delete the original! AND THAT'S THE DEFAULT !!!


Cheers,
Wol



Re: [gentoo-user] duplicate gentoo system - errors

2020-11-25 Thread antlists

On 25/11/2020 15:17, Rich Freeman wrote:

On Wed, Nov 25, 2020 at 8:55 AM Wols Lists  wrote:


On 25/11/20 13:31, Rich Freeman wrote:

Now, one area I would use UUIDs is with mdadm if you're not putting
lvm on top.  I've seen mdadm arrays get renumbered and that is a mess
if you're directly mounting them without labels or UUIDs.


It is recommended to use names, so I call it by what it is, so I have
things like /dev/md/gentoo, /dev/md/home etc.


Is that supported with the original metadata format?  I suspect that
was a big constraint since at the time my bootloader didn't support
anything newer.

Which format is this? Version zero (i.e. just mdadm.conf), or 0.9, which 
is the kernel auto-assembly version? Either way, they're obsolete and 
bit-rotting.


I guess you do need version 1 (the difference between 1.0, 1.1 and 1.2 
is the location of the superblock, not the layout).
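For what it's worth, a named array with 1.x metadata might look like this in mdadm.conf (sketch with made-up host and array names; the name is stored in the superblock itself, which is what lets udev create the /dev/md/<name> symlink):

```
# mdadm.conf sketch: 1.2-format superblocks carry a name, so the array
# appears as /dev/md/gentoo regardless of how the kernel numbers it
ARRAY /dev/md/gentoo metadata=1.2 name=myhost:gentoo
```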


Cheers,
Wol



Re: [gentoo-user] duplicate gentoo system - errors

2020-11-25 Thread antlists

On 25/11/2020 15:13, Dale wrote:

I can't think of a reason not to use labels, at the very least, in most
situations.  The only one I can think of, a laptop that has only one
hard drive.  Sort of hard to install two hard drives on a laptop.  An
external one can be done, but I've never seen one with two spots for internal
hard drives.  Do they make those???


I'm writing this on one of those right now ...

Windows on the first drive, and SUSE and gentoo on the second, except I 
can't get SUSE to realise I want the boot files on the first drive, so 
of course EFI can't find it to boot it.


(SUSE would put the boot files in the right place if I did an "expert 
partition" jobbie, but I don't want to do that seeing as I've never 
played with EFI before.)


Cheers,
Wol



Re: [gentoo-user] duplicate gentoo system - errors

2020-11-25 Thread antlists

On 25/11/2020 22:59, Neil Bothwick wrote:

On Wed, 25 Nov 2020 13:37:48 -0600, Dale wrote:


First I've heard of a laptop having space for two hard drives.  I
need to make a note of that.  Now one has reason to use labels on
laptops too.  o_O

You already have. What if you boot with a flash drive connected and
it is recognised first?


I wasn't counting an external device.  I was just referring to internal
hard drives.  I don't even put flash drives in the same category as a
hard drive either, even tho they can get large.


Large in capacity but physically small. It's easy to reboot and not
realise you have a flash drive connected, which could possibly mess up
your drive naming.

Yup. I've forgotten which system it was - possibly this one - but I've 
got a system which will refuse to boot if I forget I've got a USB stick 
in it ...


Cheers,
Wol



Re: [gentoo-user] Re: Grub and multiple distros on LVM [was duplicate gentoo system ...]

2020-11-25 Thread antlists

On 25/11/2020 23:03, Neil Bothwick wrote:

On Wed, 25 Nov 2020 19:37:32 - (UTC), Grant Edwards wrote:


I'm not sure chainloading would work as that requires a drive
definition from which to load the boot sector.


I thought that was what LVM provided: a drive definition.


It's more like a partition definition, GRUB requires the boot
sector/MBR of a whole drive.


I'm asking about chainloading. Grub has already been loaded via MBR
and grub's partition (which can be a normal physical partition if
needed). Grub is now running and displaying its menu. Each of the menu
entries instructs grub to load the first sector of a specified
partition into RAM and execute it. That sector can contain grub, LILO,
windows boot manager, whatever.  If grub understands LVM volumes, then
can it read that first sector from an LVM volume instead of a physical
partition?


I think the only way to find out is to try it, but my gut feeling about
this is not good. However, I'd be happy for my gut to be proved wrong.


Well, it should be able to do it with raid (you can partition an md-raid 
volume), so maybe the same with LVM?
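If anyone does want to try it, a grub.cfg stanza along these lines would be the experiment (hypothetical VG/LV names; it stands or falls on grub's lvm module being able to read the LV):

```
# grub.cfg sketch: try chainloading the first sector of a logical volume
insmod lvm
menuentry "Chainload boot sector from an LV" {
    set root=(lvm/vg0-otherdistro)
    chainloader +1
}
```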


Cheers,
Wol



Re: [gentoo-user] Re: Switching default tmpfiles and faster internet coming my way.

2020-12-06 Thread antlists

On 06/12/2020 07:55, Martin Vaeth wrote:

Dale  wrote:

It sounds like a rather rare problem. Maybe even only during boot up.



It is a non-existent problem on openrc if you clean /tmp and /var/tmp
on boot (which you should do if you use opentmpfiles):


Which breaks a lot of STANDARDS-COMPLIANT software.

/var/tmp is *specified* as "surviving a reboot", so cleaning it on 
startup is not merely non-standard, but *forbidden* by the standard - 
said standard being the Filesystem Hierarchy Standard ...


For example, editors assume /var/tmp is a safe place to stash their 
files so they can recover from a system crash.


(I used to mount /var/tmp as a tmpfs until I found that out ...)
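The distinction shows up in systemd's stock tmpfiles configuration, which ages out /tmp far more aggressively than /var/tmp rather than wiping either wholesale at boot (sketch of the usual tmp.conf entries):

```
# tmpfiles.d sketch: /tmp entries cleaned after 10 days of disuse,
# /var/tmp kept for 30 - and neither is emptied outright at boot
q /tmp 1777 root root 10d
q /var/tmp 1777 root root 30d
```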

Cheers,
Wol



Re: [gentoo-user] Re: Switching default tmpfiles and faster internet coming my way.

2020-12-06 Thread antlists

On 06/12/2020 12:54, Rich Freeman wrote:

I think the idea of having something more cross-platform is a good
one, though there is nothing really about systemd that isn't "open" -
it is FOSS.  It just prioritizes using linux syscalls where they are
useful over implementing things in a way that work on other kernels,
which is more of a design choice than anything else.  I mean, it is no
more wrong to use linux-specific syscalls than for the linux
developers to create them in the first place.



After all, it's not as if SysVinit is portable ... hint - it ISN'T. 
Nobody uses it but linux distros stuck in the past.


Cheers,
Wol



Re: [gentoo-user] Re: Determine what's keeping Python 3.7 around?

2020-12-07 Thread antlists

On 07/12/2020 18:21, Jack wrote:
I do an emerge -C --oneshot to uninstall those packages. That way, 
when emerge finally starts to update world, it pulls them all back (at 
least, the ones that are needed) itself without me needing to worry 
about it.


I don't think the --oneshot is doing anything here.  It just prevents 
adding an atom to the world file when emerging.  Besides, in this case, 
you do want it removed (if it was there) because, as you say, it will 
just get pulled in again if it really is needed by something else.


I assume it also stops *removing* an atom from the world file, if it's 
something I added. And I do it as a matter of course, because it can't 
do any harm ... :-)


Cheers,
Wol



Re: [gentoo-user] update fails, but I don't see why

2020-12-03 Thread antlists

On 03/12/2020 20:33, n952162 wrote:

I'm trying to update the gentoo system that I last updated 6 weeks ago,
but it seems not to work.  Can somebody explain to me why?


I've got a similar problem - an "emerge --sync" said "portage has been 
updated, you really should emerge it first before doing anything else". 
So I tried.


And it blew up very similarly to you, with loads of python problems. So 
I gambled on updating python (which it did) but no dice - portage still 
won't update.


(This is a new system that I'm still trying to setup.)

Cheers,
Wol



Re: [gentoo-user] Re: Determine what's keeping Python 3.7 around?

2020-12-07 Thread antlists

On 07/12/2020 14:30, Grant Edwards wrote:

I ended up uninstalling packages mentioned in those 150 lines 2-3 at a
time and until emerge was willing to update world.



After that I guess I start trying to re-install what was removed.


I do an emerge -C --oneshot to uninstall those packages. That way, when 
emerge finally starts to update world, it pulls them all back (at least, 
the ones that are needed) itself without me needing to worry about it.


This is where I wish there was an option similar to --keep-going, so that 
instead of the dependency calculation aborting when it gets in a mess, 
it just stops the calculation and emerges what it can. I find that if I 
just emerge a random selection of the things it says it can handle, it 
usually does eventually get there ...


Cheers,
Wol


