Re: [gentoo-user] Re: swaps mounted randomly [not out of the woods yet]

2020-03-18 Thread Dale
Neil Bothwick wrote:
> On Wed, 18 Mar 2020 16:46:35 -0700, Ian Zimmerman wrote:
>
>> On 2020-03-18 18:25, Dale wrote:
>>
>>> BTW, can a label be changed without redoing the file system? I seem to
>>> recall that being done during the file system creation.  
>> Yes, e2label for ext[2-4] , fatlabel for vfat.  Don't know about others
>> but probably most of them allow something similar.
> For btrfs: btrfs filesystem label
> For swap:  swaplabel
>
>


Thanks to both.  I use ext2 and ext4, but the others may help someone
else who runs into this.  I actually made a typo in a label once, but I
noticed it before I used the file system for anything, so I just pressed
the up arrow and corrected it.

Thanks again.

Dale

:-)  :-) 



Re: [gentoo-user] Re: swaps mounted randomly [not out of the woods yet]

2020-03-18 Thread Neil Bothwick
On Wed, 18 Mar 2020 16:46:35 -0700, Ian Zimmerman wrote:

> On 2020-03-18 18:25, Dale wrote:
> 
> > BTW, can a label be changed without redoing the file system? I seem to
> > recall that being done during the file system creation.  
> 
> Yes, e2label for ext[2-4] , fatlabel for vfat.  Don't know about others
> but probably most of them allow something similar.

For btrfs: btrfs filesystem label
For swap:  swaplabel
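
For illustration, a rough sketch of the invocations (device names and
labels below are placeholders, not taken from this thread):

# btrfs: relabel a mounted filesystem via its mount point
btrfs filesystem label /mnt/data newlabel

# swap: safest with the swap area deactivated first
swapoff /dev/sdXn
swaplabel -L newlabel /dev/sdXn
swapon /dev/sdXn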


-- 
Neil Bothwick

Confucius says "He who posts with broken addresses gets no replies.."




[gentoo-user] Re: swaps mounted randomly [not out of the woods yet]

2020-03-18 Thread Ian Zimmerman
On 2020-03-18 18:25, Dale wrote:

> BTW, can a label be changed without redoing the file system? I seem to
> recall that being done during the file system creation.

Yes, e2label for ext[2-4] , fatlabel for vfat.  Don't know about others
but probably most of them allow something similar.
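
A minimal sketch of those two (again, device names and labels are
placeholders):

# ext2/3/4: show the current label, then set a new one
e2label /dev/sdXn
e2label /dev/sdXn newlabel

# vfat
fatlabel /dev/sdXn NEWLABEL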

-- 
Ian



Re: [gentoo-user] Re: swaps mounted randomly [not out of the woods yet]

2020-03-18 Thread Dale
Ian Zimmerman wrote:
> On 2020-03-18 22:57, n952162 wrote:
>
>> Well, some new recognitions ...
>>
>> It turns out that those /dev/disk subdirectories don't necessarily have
>> all the disk devices represented:
>>
>> 1. by-id/
>> 2. by-partuuid/
>> 3. by-path/
>> 4. by-uuid/
> There is also by-label, which you can reference from fstab like
>
> LABEL=foobar /home ext4 defaults ...
>
> If predictability and readability is the goal, I think using labels is
> the best option, because you have complete control over them, unlike the
> device IDs.  For example:
>
> LABEL=my-machine-home-part /home ext4 defaults ...
>
> This doesn't solve your underlying timing problem, of course.  Just apropos.
>


Since I use LABEL for most of mine, here are some real-world examples.


LABEL=root  /   ext4    defaults    0 1
LABEL=boot  /boot   ext2    defaults    0 2
LABEL=usr   /usr    ext4    defaults    0 2
LABEL=var   /var    ext4    defaults    0 2


Those are a few lines from my fstab.  I actually use them and they work.
Hope that helps.

BTW, can a label be changed without redoing the file system?  I seem to
recall that being done during the file system creation. 

Dale

:-)  :-) 



Re: [gentoo-user] Testing ebuilds

2020-03-18 Thread Jack

On 2020.03.18 18:59, Ian Zimmerman wrote:
> After a hiatus I am trying to create my own ebuild repository again.  I
> need a way to test the separate steps (fetch, prepare, compile, install
> etc.) and I would like to do all of them as a regular user (not root,
> not portage).  I tried what I thought was the most natural attempt - run
> the ebuild program under fakeroot, but it still breaks trying to change
> permissions of things in PORTAGE_TMPDIR (I do, of course, override the
> value of this variable).  I don't understand this - it looks just like
> what fakeroot was intended to help with.  Anyway, I'm not married to
> fakeroot, just looking for a way to do these test runs.
>
> I remember that I could do this the first time, a couple of years ago.
> But I don't remember how :-(

This is from memory - not currently tested - but I often do the various
ebuild steps as myself, without fakeroot, and without overriding the
system PORTAGE_TMPDIR.  Again, from memory, as long as I opened up
permissions on PORTAGE_TMPDIR so I could create the necessary
directories, I could do all steps as myself except for the final qmerge.
Once you create the category and package directories under
PORTAGE_TMPDIR, there shouldn't be any problems.  The final qmerge step
clearly requires root, unless you are using a chroot to install
somewhere other than the real system directories.
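
As a rough, untested sketch of that workflow (the repository path,
category and package name are made up, and the exact permission fix may
differ per system - membership in the portage group is assumed here):

# let your user create directories under PORTAGE_TMPDIR
sudo chgrp -R portage /var/tmp/portage
sudo chmod -R g+ws /var/tmp/portage

# run the individual phases as yourself
ebuild /var/db/repos/myrepo/app-misc/foo/foo-1.0.ebuild clean fetch unpack prepare compile install

# only the final merge into the live system needs root
sudo ebuild /var/db/repos/myrepo/app-misc/foo/foo-1.0.ebuild qmerge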


Jack


[gentoo-user] Re: swaps mounted randomly [not out of the woods yet]

2020-03-18 Thread Ian Zimmerman
On 2020-03-18 22:57, n952162 wrote:

> Well, some new recognitions ...
> 
> It turns out that those /dev/disk subdirectories don't necessarily have
> all the disk devices represented:
> 
> 1. by-id/
> 2. by-partuuid/
> 3. by-path/
> 4. by-uuid/

There is also by-label, which you can reference from fstab like

LABEL=foobar /home ext4 defaults ...

If predictability and readability is the goal, I think using labels is
the best option, because you have complete control over them, unlike the
device IDs.  For example:

LABEL=my-machine-home-part /home ext4 defaults ...

This doesn't solve your underlying timing problem, of course.  Just apropos.

-- 
Ian



[gentoo-user] Testing ebuilds

2020-03-18 Thread Ian Zimmerman
After a hiatus I am trying to create my own ebuild repository again.  I
need a way to test the separate steps (fetch, prepare, compile, install
etc.) and I would like to do all of them as a regular user (not root,
not portage).  I tried what I thought was the most natural attempt - run
the ebuild program under fakeroot, but it still breaks trying to change
permissions of things in PORTAGE_TMPDIR (I do, of course, override the
value of this variable).  I don't understand this - it looks just like
what fakeroot was intended to help with.  Anyway, I'm not married to
fakeroot, just looking for a way to do these test runs.

I remember that I could do this the first time, a couple of years ago.
But I don't remember how :-(

-- 
Ian



Re: [gentoo-user] swaps mounted randomly [not out of the woods yet]

2020-03-18 Thread n952162

Well, some new recognitions ...

It turns out that those /dev/disk subdirectories don't necessarily have
all the disk devices represented:

1. by-id/
2. by-partuuid/
3. by-path/
4. by-uuid/

On my computer, only by-id/ - the subdirectory that's not set up in time
- has links to /dev/sda?.  Furthermore, by-path/ has even fewer of my
drives represented in it.  I haven't a clue what's going on.  Since my
swap file is on /dev/sda1, there's no way I can put it into /etc/fstab,
because I can't ensure the underlying filesystem is mounted in time.

Another enigma - although /etc/init.d/localmount says:

description="Mounts disks and swap according to /etc/fstab."

there's also /etc/init.d/swap, which calls swapon.

TODO: does mount -a mount swaps? ... I suspect not - the /description/
is apparently bogus.
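
As a quick check for anyone following along (plain util-linux/OpenRC
commands, nothing specific to this box): as far as I know, mount -a does
not touch swap at all; it is swapon -a, run by the swap service, that
activates it.

swapon --show            # what swap areas are active right now
rc-service swap status
rc-service localmount status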


On 2020-03-18 19:35, n952162 wrote:


Okay, after many hours of investigation and experimentation, I finally
figured this out.

For many years now, I have written my /etc/fstab using the
/dev/disk/by-id directory, because I think it's easier to keep track
of the names there - it's the only way to do it symbolically.

It's always struck me that nobody else does that and I've wondered
why...  now I know...

It turns out, that, now, on my computer, those names aren't set up
yet.  The "missing" devices caused the localmounts service to fail -
partially or fully, randomly - including for the swaps.

When I go through the nasty process of using blkid(8) and specifying
the *UUID=* device name in fstab, then the problem goes away.



On 2020-03-17 13:37, n952162 wrote:


There's new information on this, and new questions...

I discovered that not only are my swaps not mounted, but the other
filesystems listed in /etc/fstab aren't, either.

These are, apparently, mounted by the localmount RC service.

Its status is *started*.  But if I /restart/ it, then my fstab
filesystems get mounted.

Question: /etc/init.d/localmount says:

description="Mounts disks and swap according to /etc/fstab."

but I can't see where that script mounts swaps.  Is the description
obsolete?

Question 2: how can one see the output from the RC "e-trace"
statements (e.g. ebegin/eend)?

I don't find it in /var/log/*


On 2020-03-04 09:09, n952162 wrote:

Hi,

I have 3 swap devices and files.  At boot, it seems indeterminate
which ones get "mounted" (as swap areas).

Does anyone have an idea why they're not all mounted?

Here are the swap lines from my fstab:

#LABEL=swap    none    swap    sw    0 0

/dev/disk/by-id/ata-TOSHIBA_DT01ACA300_-part1    none    swap    sw,pri=10    0 0

/swap    none    swap    sw,pri=5    0 0

/lcl/WDC_WD20EFRX-68EUZN0_WD-yy/1/swap    none    swap    sw,pri=1    0 0



Re: [gentoo-user] Re: SDD strategies...

2020-03-18 Thread Neil Bothwick
On Wed, 18 Mar 2020 10:47:12 -0400, Rich Freeman wrote:

> > If you rely on raid, and use spinning rust, DON'T buy cheap drives. I
> > like Seagate, and bought myself Barracudas. Big mistake. Next time
> > round, I bought Ironwolves. Hopefully that system will soon be up and
> > running, and I'll see whether that was a good choice :-)  
> 
> Can you elaborate on what the mistake was?  Backblaze hasn't found
> Seagate to really be any better/worse than anything else.  It seems
> like every vendor has a really bad model every couple of years.  Maybe
> the more expensive drive will last longer, but you're paying a hefty
> premium.  It might be cheaper to just get three drives with 3x
> redundancy than two super-expensive ones with 2x redundancy.

I know it's anecdotal, and I have somewhat fewer drives than Backblaze,
but I've found Seagate drives to be unreliable over recent years. They
were good at replacing them under warranty, but then the replacements
failed.

> The main issues I've seen with RAID are:
> 
> 1. Double failures.  If your RAID doesn't accommodate double failures
> (RAID6/etc) then you have to consider the time required to replace a
> drive and rebuild the array.  As arrays get large or if you aren't
> super-quick with replacements then you have more risk of double
> failures.

There's also the extra load on the remaining drives when rebuilding the
array, at exactly the time you cannot afford another drive to fail. RAID6
helps here and, like Mark, I try to run a mixture of drives in an array
to avoid problems caused by bad models or batches.


-- 
Neil Bothwick

You are a completely unique individual, just like everybody else.




Re: [gentoo-user] swaps mounted randomly [RESOLVED]

2020-03-18 Thread Michael
On Wednesday, 18 March 2020 19:02:56 GMT n952162 wrote:
> Incidentally, in order to debug that, I set the /etc/rc.conf *rc_logger*
> variable to YES:
> 
> rc_logger="YES"
> 
> I'm not sure why that's not the default, as it appears to be on ubuntu,
> assuming that similar-seeming functionality really is similar.

Because it fills up log files, potentially unnecessarily.  When you have a
problem, you can always turn it on during troubleshooting.
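
For reference, the relevant knobs in /etc/rc.conf look roughly like this
(the log path shown is the usual default - check the comments in your own
rc.conf):

rc_logger="YES"
rc_log_path="/var/log/rc.log"

Turning rc_logger back to NO afterwards keeps the file from growing.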

Glad you got it sorted.  I saw you were using /dev/disk/by-id directory 
entries and thought that was an acceptable way to define devices - but didn't 
look into it further.




Re: [gentoo-user] swaps mounted randomly [RESOLVED]

2020-03-18 Thread n952162

Incidentally, in order to debug that, I set the /etc/rc.conf *rc_logger*
variable to YES:

rc_logger="YES"

I'm not sure why that's not the default, as it appears to be on ubuntu,
assuming that similar-seeming functionality really is similar.



On 2020-03-18 19:35, n952162 wrote:


Okay, after many hours of investigation and experimentation, I finally
figured this out.

For many years now, I have written my /etc/fstab using the
/dev/disk/by-id directory, because I think it's easier to keep track
of the names there - it's the only way to do it symbolically.

It's always struck me that nobody else does that and I've wondered
why...  now I know...

It turns out, that, now, on my computer, those names aren't set up
yet.  The "missing" devices caused the localmounts service to fail -
partially or fully, randomly - including for the swaps.

When I go through the nasty process of using blkid(8) and specifying
the *UUID=* device name in fstab, then the problem goes away.



On 2020-03-17 13:37, n952162 wrote:


There's new information on this, and new questions...

I discovered that not only are my swaps not mounted, but the other
filesystems listed in /etc/fstab aren't, either.

These are, apparently, mounted by the localmount RC service.

Its status is *started*.  But if I /restart/ it, then my fstab
filesystems get mounted.

Question: /etc/init.d/localmount says:

description="Mounts disks and swap according to /etc/fstab."

but I can't see where that script mounts swaps.  Is the description
obsolete?

Question 2: how can one see the output from the RC "e-trace"
statements (e.g. ebegin/eend)?

I don't find it in /var/log/*


On 2020-03-04 09:09, n952162 wrote:

Hi,

I have 3 swap devices and files.  At boot, it seems indeterminate
which ones get "mounted" (as swap areas).

Does anyone have an idea why they're not all mounted?

Here are the swap lines from my fstab:

#LABEL=swap    none    swap    sw    0 0

/dev/disk/by-id/ata-TOSHIBA_DT01ACA300_-part1    none    swap    sw,pri=10    0 0

/swap    none    swap    sw,pri=5    0 0

/lcl/WDC_WD20EFRX-68EUZN0_WD-yy/1/swap    none    swap    sw,pri=1    0 0



Re: [gentoo-user] swaps mounted randomly [RESOLVED]

2020-03-18 Thread n952162

Okay, after many hours of investigation and experimentation, I finally
figured this out.

For many years now, I have written my /etc/fstab using the
/dev/disk/by-id directory, because I think it's easier to keep track of
the names there - it's the only way to do it symbolically.

It's always struck me that nobody else does that and I've wondered
why...  now I know...

It turns out, that, now, on my computer, those names aren't set up yet. 
The "missing" devices caused the localmounts service to fail - partially
or fully, randomly - including for the swaps.

When I go through the nasty process of using blkid(8) and specifying the
*UUID=* device name in fstab, then the problem goes away.
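
For anyone doing the same conversion, the pattern is roughly this (the
UUID below is a made-up placeholder - take the real one from the blkid
output):

blkid /dev/sda1
# /dev/sda1: UUID="0f6b3a1c-0000-0000-0000-000000000000" TYPE="swap"

# and then in /etc/fstab:
UUID=0f6b3a1c-0000-0000-0000-000000000000    none    swap    sw,pri=10    0 0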



On 2020-03-17 13:37, n952162 wrote:


There's new information on this, and new questions...

I discovered that not only are my swaps not mounted, but the other
filesystems listed in /etc/fstab aren't, either.

These are, apparently, mounted by the localmount RC service.

Its status is *started*.  But if I /restart/ it, then my fstab
filesystems get mounted.

Question: /etc/init.d/localmount says:

description="Mounts disks and swap according to /etc/fstab."

but I can't see where that script mounts swaps.  Is the description
obsolete?

Question 2: how can one see the output from the RC "e-trace"
statements (e.g. ebegin/eend)?

I don't find it in /var/log/*


On 2020-03-04 09:09, n952162 wrote:

Hi,

I have 3 swap devices and files.  At boot, it seems indeterminate
which ones get "mounted" (as swap areas).

Does anyone have an idea why they're not all mounted?

Here are the swap lines from my fstab:

#LABEL=swap    none    swap    sw    0 0

/dev/disk/by-id/ata-TOSHIBA_DT01ACA300_-part1    none    swap    sw,pri=10    0 0

/swap    none    swap    sw,pri=5    0 0

/lcl/WDC_WD20EFRX-68EUZN0_WD-yy/1/swap    none    swap    sw,pri=1    0 0



Re: [gentoo-user] swaps mounted randomly

2020-03-18 Thread n952162

I tried that and it failed with: openrc-run may not run directly
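
(For what it's worth - untested here - openrc-run refuses to be invoked
by hand; it only wants to act as the interpreter of a service script.
Running the script itself usually gets around that message:

/etc/init.d/localmount --verbose restart

or simply: rc-service localmount restart)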

On 2020-03-17 14:14, Neil Bothwick wrote:

On Tue, 17 Mar 2020 13:37:52 +0100, n952162 wrote:


Question 2: how can one see the output from the RC "e-trace" statements
(e.g. ebegin/eend)?

I don't find it in /var/log/*


How about: openrc-run --verbose /etc/init.d/localmount start

Or use --debug for more info.






Re: [gentoo-user] SDD strategies...

2020-03-18 Thread Dale
antlists wrote:
> On 17/03/2020 11:54, madscientistatlarge wrote:
>> The issue is not usually end of trusted life, but rather random
>> failure.  I've barely managed to recover failed hard drives; recovery
>> is even less likely on an SSD, though failure itself is possibly less
>> likely to happen.
>
> The drive may be less likely to fail, but I'd say raid or backups are
> a necessity.
>
> From what I've heard, SSDs tend to go read-only when they fail (that's
> fine), BUT SELF-DESTRUCT ON A POWER CYCLE!!!
>
> So don't bank on being able to access a failed SSD after a reboot.
>
> Cheers,
> Wol
>
>


That's something interesting to know.  If one fails, recover but don't
reboot.  Knowing that is awesome.  It may not be a guarantee but why
risk it?  At the least, do a backup first, then reboot if needed.

This old guy needs to remember that.

Dale

:-)  :-) 



Re: [gentoo-user] SDD strategies...

2020-03-18 Thread antlists

On 17/03/2020 05:59, tu...@posteo.de wrote:

Hi,

currently I am setting up a new PC to replace my 12-year-old one,
which has reached the limits of its "computational power" :)

SSDs are a common replacement for HDs nowadays -- but I still trust my
HDs more than these "flashy" things... call me retro or old-school, but
that's my current "Bauchgefühl" (gut feeling).


Can't remember where it was - some mag ran a stress-test on a bunch of 
SSDs and they massively outlived their rated lives ... I think even the 
first to fail survived about 18 months of continuous hammering - and I 
mean hammering!


To reduce write cycles to the SSD, which are quite a lot when using
UNIX/Linux (logging etc.) and especially GENTOO (compiling sources
instead of using binary packages -- which is GOOD!), I am planning
the following setup:

The system will boot from the SSD.

The HD will contain the whole system including the complete root
filesystem. Updating and installing via the Gentoo tools will run on
the HD. When that process has finished, I will rsync the HD-based root
filesystem to the SSD.


Whatever for?


Folders which will be written to by the system while running will
be symlinked to the HD.

This should work...?

Or is there another idea to setup a system which will benefit from
the advantages of a SSD by avoiding its disadvantages?


If you've got both an SSD and an HD, just use the HD for swap, /tmp, 
/var/tmp/portage (possibly the whole of /var/tmp), and any other area 
where you consider files to be temporary.
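
A minimal fstab sketch of that split (labels, mount points and the
bind-mount approach are just one way to do it - all names below are
placeholders):

LABEL=ssdroot     /                  ext4    defaults,noatime    0 1
LABEL=hdd         /mnt/hdd           ext4    defaults,noatime    0 2
LABEL=hddswap     none               swap    sw                  0 0
/mnt/hdd/tmp      /tmp               none    bind                0 0
/mnt/hdd/portage  /var/tmp/portage   none    bind                0 0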


Background: I normally use a PC for a long time and try to avoid
buying things for reasons like being more modern or newer.

Any idea for setting up such a system is heartily welcome -- thank you
very much in advance!

Why waste time and effort on a complex setup when it's going to gain
you bugger all?


The only thing I would really advise is that you think about some form
of snapshotting - LVM or btrfs - for your root file-system to protect
against a messed-up upgrade: take a snapshot, upgrade, and if anything
goes wrong it's an easy roll-back.


Likewise, do the same for the rotating rust, and use that to back up
/home - you can use rsync options that only copy what's changed, so you
do a "snapshot, then back up" and have loads of backups going back
however far ...
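
A rough sketch of the rsync side of that (paths are placeholders;
--link-dest is one common way to get the "loads of backups going back"
effect, by hard-linking files that haven't changed since the previous
run):

today=$(date +%F)
rsync -a --delete --link-dest=/mnt/hdd/backups/latest /home/ "/mnt/hdd/backups/$today/"
ln -sfn "$today" /mnt/hdd/backups/latest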


Cheers,
Wol



Re: [gentoo-user] Re: SDD strategies...

2020-03-18 Thread Mark Knecht
On Wed, Mar 18, 2020 at 7:47 AM Rich Freeman  wrote:
>
> It might be cheaper to just get three drives with 3x
> redundancy than two super-expensive ones with 2x redundancy.
>

If someone goes this direction then try to get the drives from multiple
sales channels to reduce the probability of all drives coming from the
same bad lot.

- Mark


Re: [gentoo-user] Re: SDD strategies...

2020-03-18 Thread Rich Freeman
On Wed, Mar 18, 2020 at 9:49 AM antlists  wrote:
>
> On 17/03/2020 14:29, Grant Edwards wrote:
> > On 2020-03-17, Neil Bothwick  wrote:
> >
> >> Same here. The main advantage of spinning HDs are that they are cheaper
> >> to replace when they fail. I only use them when I need lots of space.
> >
> > Me too. If I didn't have my desktop set up as a DVR with 5TB of
> > recording space, I wouldn't have any spinning drives at all.  My
> > personal experience so far indicates that SSDs are far more reliable
> > and long-lived than spinning HDs.  I would guess that about half of my
> > spinning HDs fail in under 5 years.  But then again, I tend to buy
> > pretty cheap models.
> >
> If you rely on raid, and use spinning rust, DON'T buy cheap drives. I
> like Seagate, and bought myself Barracudas. Big mistake. Next time
> round, I bought Ironwolves. Hopefully that system will soon be up and
> running, and I'll see whether that was a good choice :-)

Can you elaborate on what the mistake was?  Backblaze hasn't found
Seagate to really be any better/worse than anything else.  It seems
like every vendor has a really bad model every couple of years.  Maybe
the more expensive drive will last longer, but you're paying a hefty
premium.  It might be cheaper to just get three drives with 3x
redundancy than two super-expensive ones with 2x redundancy.

The main issues I've seen with RAID are:

1. Double failures.  If your RAID doesn't accommodate double failures
(RAID6/etc) then you have to consider the time required to replace a
drive and rebuild the array.  As arrays get large or if you aren't
super-quick with replacements then you have more risk of double
failures.  Maybe you could mitigate that with drives that are less
likely to fail at the same time, but I suspect you're better off
having enough redundancy to deal with the problem.

2.  Drive fails and the system becomes unstable/etc.  This is usually
a controller problem, and is probably less likely for better
controllers.  It could also be a kernel issue if the
driver/filesystem/etc doesn't handle the erroneous data.  I think the
only place you could impact this risk is with the controller, not the
drive.  If the drive sends garbage over the interface then the
controller should not pass along invalid data or allow it to
interfere with functioning drives.

This is one of the reasons that I've been trying to move towards
lizardfs or other distributed filesystems.  This puts the redundancy
at the host level.  I can lose all the drives on a host, the host, its
controller, its power supply, or whatever, and nothing bad happens.
Typically in these systems drives aren't explicitly paired but data is
just generally pooled, so if data is lost the entire cluster starts
replicating it to restore redundancy, and that rebuild gets split
across all hosts and starts instantly, not after you add a drive
(unless you were running near-full).  One host replicating one 12TB
drive takes a lot longer than 10 hosts replicating 1.2TB each to
another host in parallel, as long as your network switches can run at
full network capacity per host at the same time and you have no
bottlenecks.
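
To put rough numbers on that last point: assuming ~150 MB/s sustained
per spinning drive (a typical ballpark), rewriting a full 12TB drive on
one host is on the order of 12e12 / 150e6 = 80,000 s, i.e. about 22
hours, while ten hosts each moving 1.2TB in parallel are done in
roughly a tenth of that - a couple of hours - provided the network
isn't the bottleneck.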

-- 
Rich



Re: [gentoo-user] Re: SDD strategies...

2020-03-18 Thread antlists

On 17/03/2020 14:29, Grant Edwards wrote:

On 2020-03-17, Neil Bothwick  wrote:


Same here. The main advantage of spinning HDs are that they are cheaper
to replace when they fail. I only use them when I need lots of space.


Me too. If I didn't have my desktop set up as a DVR with 5TB of
recording space, I wouldn't have any spinning drives at all.  My
personal experience so far indicates that SSDs are far more reliable
and long-lived than spinning HDs.  I would guess that about half of my
spinning HDs fail in under 5 years.  But then again, I tend to buy
pretty cheap models.

If you rely on raid, and use spinning rust, DON'T buy cheap drives. I 
like Seagate, and bought myself Barracudas. Big mistake. Next time 
round, I bought Ironwolves. Hopefully that system will soon be up and 
running, and I'll see whether that was a good choice :-)


Cheers,
Wol



Re: [gentoo-user] SDD strategies...

2020-03-18 Thread antlists

On 17/03/2020 11:54, madscientistatlarge wrote:

The issue is not usually end of trusted life, but rather random failure.  I've
barely managed to recover failed hard drives; recovery is even less likely on
an SSD, though failure itself is possibly less likely to happen.


The drive may be less likely to fail, but I'd say raid or backups are a 
necessity.


From what I've heard, SSDs tend to go read-only when they fail (that's 
fine), BUT SELF-DESTRUCT ON A POWER CYCLE!!!


So don't bank on being able to access a failed SSD after a reboot.

Cheers,
Wol



Re: [gentoo-user] apache htaccess - block IP range

2020-03-18 Thread Alarig Le Lay
Hi,

On Tue, 17 Mar 2020 15:47:36, the...@sys-concept.com wrote:
> How do I block a certain IP range in an .htaccess file?
>
> I have had a bot from huawei.com on my server for several days:
> IP: 114.119.128.0 - 114.119.191.255
> Or just block all of China

In a case like this, I don't bother working out how to do it with
apache/nginx or another process; I block the traffic directly with
iptables:
iptables -A INPUT -s 114.119.128.0/18 -j DROP
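
If blocking at the web-server layer is still wanted, an Apache 2.4
.htaccess fragment would look roughly like this (assuming mod_authz_core
is available and that the range really is 114.119.128.0/18, i.e. .128.0
through .191.255):

<RequireAll>
    Require all granted
    Require not ip 114.119.128.0/18
</RequireAll>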

-- 
Alarig



Re: [gentoo-user] Re: SDD, what features to look for and what to avoid.

2020-03-18 Thread David Haller
Hello, an addendum without digging up the details ...

On Tue, 17 Mar 2020, David Haller wrote:
>On Tue, 17 Mar 2020, Grant Edwards wrote:
>>I've put five Samsung SATA drives into various things in the past few
>>years with flawless results.  Samsung is one of the big manufacturers
>>of flash chips, so I figure they should always end up with 1st choice
>>quality chips in their own drives...
>
>And they produce and use their own controllers, so they additionally
>know the ins and outs of those, i.e. they can easily optimize the
>whole SSD from Flash-Chip over controller up to the firmware...
[..]
>AFAI gathered, Samsung is the only one producing the whole product.

I guess Intel did (still does?) that too, but you'll have to check
that; ISTR that Intel now sells SSDs with non-Intel controllers and/or
non-Intel/"IM-Flash" flash chips... Oh, wait, yes, Intel still does,
but those "pure Intel" SSDs come with a *very* hefty price (like 4
times as much), and all the "normally" priced ones are those with
non-Intel flash chips and/or controllers... But please go check that
yourselves though!

The second thing I remembered: the German "c't"[2] magazine did a
torture test in late 2018 (IIRC), basically grabbing a few then-current
SSDs and running their own test tool[1] on them until they died. Or so
was the plan. It was a "write till it dies" test.

First of all: all SSDs exceeded their specs, some IIRC just barely, the
bulk by a factor of 2 or more. ISTR which ones were the "just barely"
cases, but I won't name them without digging out the actual results,
which I'll do upon request.

The test had one problem though: a (IIRC) Samsung 850 Pro just refused
to die ;) They aborted the test after something like over 4 months
(all other drives had died inside of about a month) of _continuous_
writes (or write-verify cycles) to that one remaining SSD, which was
still happily chugging along...

I do remember, though, that even the Samsung EVO came out at the top of
the bunch.

(Note: c't does not award a "test winner" or anything, just data and
a conclusion aka "Fazit"; the reader has to digest the data and make
up her/his own mind for her/his own use case.)

All IIRC; I can dig out and translate the details though! (and its
months-later follow-up on what became of that Samsung ;)

HTH, and please do PM (no need to clog the ML) if you want me to go
digging for the details,
-dnh

[1] whose name escapes me ATM, but tried and tested since 199[0-5] or
so ;)

[2] https://en.wikipedia.org/wiki/C%27t (that page is sadly woefully
outdated)

-- 
"If you are using an Macintosh e-mail program that is not from Microsoft,
we recommend checking with that particular company. But most likely other
e-mail programs like Eudora are not designed to enable virus replication"
 -- http://www.microsoft.com/mac/products/office/2001/virus_alert.asp