Re: [GTALUG] New Desktop PC -- debian Linux - Proposed 2 TB HDD Partitioning;

2018-04-18 Thread D. Hugh Redelmeier via talk


| From: Lennart Sorensen via talk 
| 
| > I guess that would mean that scattering unused space on an SSD between 
| > the partitions, means the controller probably sees it as being used. I 
| > left chunks allocated at the ends of the drives as recommended. I was 
| > just wondering if my stripes would increase that wear level 
| > capability, as well as providing for emergency recovery space(s).
| 
| Trying to guess how a drive does its wear leveling is impossible.
| Even if you are buying SSDs directly from the manufacturer and have a
| relationship with them, they usually won't tell you how it works.

There are some things that are pretty well-known at the moment.  Things 
may change in the future.

| Usually the drive has some extra space by design that it can use as a
| pool for writes, and then the old blocks are erased and put into the pool.
| If you use trim, you can add currently unused space in the filesystem to
| that free pool too.  Some drives will occasionally move data that never
| changes from blocks that have very few writes to blocks that are more worn
| in the hopes that it will then be able to use those better blocks for more
| frequently changing data, but simpler drives may not do such housekeeping.

I don't understand what you are saying here.

What you want to avoid is "write amplification".  Every write to the 
device causes at least one flash write, but some cause many more, and 
you want to reduce that effect.  Since the effect is lumpy, what
matters is the average write amplification.

Write amplification gets really bad when a drive is too full.  And it
isn't too bad when there's a fair bit of free space (for normal
workloads).  Counter-intuitively, the graph of this is sort of a
hockey stick.  The transition from good to bad is fairly sharp.
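Here is a back-of-envelope model of that hockey stick (my own toy
arithmetic, not any vendor's actual firmware behaviour): if live data
fills fraction u of the flash and valid pages are spread evenly,
garbage collection rewrites roughly u/(1-u) live pages for every page
of space it frees.

```python
def write_amplification(u):
    """Crude steady-state model: live data fills fraction u of the flash,
    valid pages are spread evenly, so garbage collection rewrites about
    u/(1-u) live pages per page freed -> total amplification ~ 1/(1-u)."""
    return 1.0 / (1.0 - u)

for u in (0.50, 0.70, 0.90, 0.95, 0.99):
    print(f"{u:.0%} full -> write amplification ~ {write_amplification(u):.1f}x")
```

At 50% full the model gives 2x; at 95% full it gives 20x.  The curve is
gentle until the drive is nearly full, then shoots up, which is the
sharp good-to-bad transition.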

All drives have more than the OS-visible space (overprovisioning).  Cheap 
drives have less than "enterprise" drives.

Leaving some of your disk unused, in a way that the drive firmware
knows about, adds to overprovisioning.  I think it's a reasonable idea.
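A rough way to quantify the effect (the drive sizes below are made up
for illustration):

```python
def effective_overprovisioning(physical_gb, visible_gb, unpartitioned_gb):
    """Spare flash the controller can rotate writes through, as a fraction
    of the space actually in use by the OS.  Unpartitioned space counts
    only if the firmware knows it is free (never written, or trimmed)."""
    spare = (physical_gb - visible_gb) + unpartitioned_gb
    in_use = visible_gb - unpartitioned_gb
    return spare / in_use

# A hypothetical 256 GB drive sold as 240 GB (16 GB built-in spare),
# with a further 24 GB of the visible space left unpartitioned:
print(f"{effective_overprovisioning(256, 240, 24):.1%} effective overprovisioning")
print(f"{effective_overprovisioning(256, 240, 0):.1%} with the drive fully allocated")
```

The point is just that a modest unpartitioned slice can more than
double the spare pool of a cheap drive.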

None of this has any effect on reads.  Well, if the drive is busy
doing garbage collection or write amplified writes, I guess reading
would be slowed down.

Erase blocks, the unit of erasing in the raw hardware, are quite a
lot bigger than filesystem blocks.  (If they were not, there would be
no reason for write amplification.)

When the drive firmware gets a write request, it needs to have an
empty (erased) block to write it to.  If it doesn't have one, it must
find one, using garbage collection.  That generates more writes behind
the scenes.

Remember: when you rewrite a block, the flash block cannot be
rewritten in situ.  Programming NAND flash can only flip bits from 1
to 0 (in effect the write is ANDed with what is already there), so the
block must first be erased back to all 1s.  And you cannot erase a
block without erasing a lot of adjacent blocks: a whole erase block.
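The bookkeeping can be sketched like this (a deliberately dumb model;
real firmware picks its victim blocks far more cleverly, and the
pages-per-block figure is illustrative):

```python
PAGES_PER_ERASE_BLOCK = 64  # illustrative; real geometry varies by drive

def cost_to_free_one_block(live_pages):
    """To erase a victim block, every still-live page in it must first be
    copied to a fresh block.  Those copies are flash writes the host
    never asked for -- the writes 'behind the scenes'."""
    assert 0 <= live_pages <= PAGES_PER_ERASE_BLOCK
    return live_pages  # one extra flash write per live page, plus an erase

def write_amplification(host_writes, victims_freed, live_per_victim):
    """(host writes + garbage-collection copies) / host writes."""
    copies = victims_freed * cost_to_free_one_block(live_per_victim)
    return (host_writes + copies) / host_writes

# Emptyish drive: victim blocks hold 4 of 64 live pages -> cheap GC.
# Nearly full drive: victims hold 56 of 64 live pages -> expensive GC.
print(write_amplification(1024, 16, 4))
print(write_amplification(1024, 16, 56))
```

Same number of host writes, very different amounts of flash wear,
depending only on how full the victim blocks are.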

| There really isn't any way to know, unless they choose to advertise it.
| Of course it is likely that a drive with a much higher promised number of
| write cycles is doing smarter housekeeping to keep block wear as
| even as possible.

More things that I think we know:

I think that the housekeeping is fairly well understood.  But they
keep adding tricks.

Some SSDs have RAM buffers.  Short-lived blocks might never hit the
flash memory.  Probably only expensive / enterprise drives these days.

Some SSDs use a portion of flash in pseudo-SLC mode for buffering.  I
don't know exactly how it is used but one could imagine it is like the
RAM buffer would be.

SLC flash stores one bit per flash cell.  MLC stores several bits per 
cell, but in common usage it is 2 bits per cell.  TLC stores three bits 
per cell.
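The trade-off in numbers: n bits per cell means each cell must hold
2^n distinguishable charge levels, which is part of why every step up
in density costs speed and endurance:

```python
# Bits per cell vs. the charge levels the cell must distinguish.
# (Endurance figures deliberately omitted -- they vary widely by
# process generation and vendor.)
for name, bits in (("SLC", 1), ("MLC", 2), ("TLC", 3)):
    levels = 2 ** bits
    print(f"{name}: {bits} bit(s)/cell, {levels} charge levels, "
          f"{bits}x density relative to SLC")
```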

SLC is less dense (obviously), so it is more expensive and seems to be
impossible to get these days, but it had great speed and reliability
advantages.

Older drives are MLC, newer ones are TLC.

What I wonder about:

How stable is flash?  There are hints that it needs to be refreshed once 
in a while (months?  years?).  Is this done automatically?

Cheap SSDs can be corrupted in a power failure.  More expensive ones
have a bit of power reserve (supercapacitor?) to put your data to bed
before powering down.  How can you know which kind you are buying?  Why is 
this not a scandal?

| I am not currently convinced that keeping unallocated space is worth it.
| Sure you make the free pool a bit larger, but you still end up writing
| the same amount of blocks and you make the usable size smaller.  Having a
| larger free pool might help for systems that do a lot of writes since
| you are more likely to be able to have a free block to do a write,
| while the drive hasn't had time to erase the old blocks.  On the other
| hand if you are doing enough writing that it could be a problem, maybe
| an SSD is the wrong type of drive to be using.

The hole in that logic is that erase blocks are a lot larger than
filesystem blocks.  So you potentially end up with a bunch of erase
blocks that are only partly live; garbage collection must copy the
live pages out of each before it can erase, and those copies are
extra writes.

Re: [GTALUG] New Desktop PC -- debian Linux - Proposed 2 TB HDD Partitioning;

2018-04-18 Thread Russell via talk


On April 18, 2018 8:35:41 AM CDT, lsore...@csclub.uwaterloo.ca wrote:
>> I guess that would mean that scattering unused space on an SSD
>between the partitions, means the controller probably sees it as being
>used. I left chunks allocated at the ends of the drives as recommended.
>I was just wondering if my stripes would increase that wear level
>capability, as well as providing for emergency recovery space(s).  
>
>Trying to guess how a drive does its wear leveling is impossible.
>Even if you are buying SSDs directly from the manufacturer and have a
>relationship with them, they usually won't tell you how it works.
>
>Usually the drive has some extra space by design that it can use as a
>pool for writes, and then the old blocks are erased and put into the
>pool.
>If you use trim, you can add currently unused space in the filesystem
>to
>that free pool too.  Some drives will occasionally move data that never
>changes from blocks that have very few writes to blocks that are more
>worn
>in the hopes that it will then be able to use those better blocks for
>more
>frequently changing data, but simpler drives may not do such
>housekeeping.
>There really isn't any way to know, unless they choose to advertise it.
>Of course it is likely that a drive with a much higher promised number of
>write cycles is doing smarter housekeeping to keep block wear as
>even as possible.
>
>I am not currently convinced that keeping unallocated space is worth
>it.
>Sure you make the free pool a bit larger, but you still end up writing
>the same amount of blocks and you make the usable size smaller.  Having
>a
>larger free pool might help for systems that do a lot of writes since
>you are more likely to be able to have a free block to do a write,
>while the drive hasn't had time to erase the old blocks.  On the other
>hand if you are doing enough writing that it could be a problem, maybe
>an SSD is the wrong type of drive to be using.
>
>I have all my SSDs fully allocated and see no reason to do otherwise.
>Some people have some crazy theories that often have no facts behind
>them.
>They just assume the drive makers are dumb and haven't thought of this
>amazing problem that they just thought of.  Of course some of the
>really
>cheap drives really are that dumb.

That's what I tell people about my phone. It's a smart phone. It just has a 
dumb operator. :-)

I've always left bits of HDDs unallocated for emergency recovery installs. Live 
distros on USB make that provision unnecessary. Hugh also posted an interesting 
link to a page, I think it was on blkdiscard for thinly provisioned SSDs. 
Haven't gone there yet, but soon will. 

Thanks for your followup.
>
>-- 
>Len Sorensen

-- 
Russell
---
Talk Mailing List
talk@gtalug.org
https://gtalug.org/mailman/listinfo/talk


Re: [GTALUG] New Desktop PC -- debian Linux - Proposed 2 TB HDD Partitioning;

2018-04-18 Thread Lennart Sorensen via talk
On Wed, Apr 18, 2018 at 09:23:12AM -0400, Jamon Camisso via talk wrote:
> Try bonnie++ a few times on each install. It is explicitly designed to
> test drive performance.

Yeah that is a good test.

-- 
Len Sorensen


Re: [GTALUG] New Desktop PC -- debian Linux - Proposed 2 TB HDD Partitioning;

2018-04-18 Thread Lennart Sorensen via talk
On Wed, Apr 18, 2018 at 06:35:57AM -0500, Russell wrote:
> It wrote a bunch of zeros to a virtual file. Perhaps even touching a tmp file 
> along the way. Even if it didn't touch tmp, it wrote the zeros someplace in 
> order to perform the count.

No, it passed a bunch of zeros through a pipe to md5sum.  All RAM.
No virtual files are involved in pipes at all.

> I was just trying to comment on the speeds of the two installs relative to 
> the respective  disks the OS runs from. I'm sorry you didn't understand that. 
> Perhaps I should have said running the OS from the two different drives, 
> irrespective of all the other disk writes which may happen when the OS 
> operates normally when calling dd from a GUI.

Well nothing in the test depends on the disk.  The kernel is in ram,
so /dev/zero is in ram, dd reads from ram, writes to a pipe (ram), which
is read by md5sum (from ram).  It does not create the whole 1GB of zeros
before passing it to md5sum.  Pipes pass data as it is ready and usually
only allow a few KB of buffering between processes.  It does not buffer
to a file.  If md5sum can't keep up, dd will be blocked and pause until
there is more room in the pipe.
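If you want to convince yourself, you can reproduce the whole pipeline
in pure memory (a sketch; the chunk size is arbitrary):

```python
import hashlib
import time

def hash_zeros(total_bytes=1 << 30, chunk=1 << 20):
    """Feed zero-filled buffers straight from RAM into MD5 -- no file,
    no pipe, no disk.  This measures the same thing as
    `dd if=/dev/zero bs=1M count=1024 | md5sum`: CPU and RAM speed."""
    h = hashlib.md5()
    buf = bytes(chunk)  # one reusable 1 MiB block of zeros
    start = time.perf_counter()
    for _ in range(total_bytes // chunk):
        h.update(buf)
    elapsed = time.perf_counter() - start
    return h.hexdigest(), total_bytes / elapsed / 1e6  # digest, MB/s

digest, rate = hash_zeros()
print(digest)  # cd573cfaace07e7949bc0c46028904ff
print(f"{rate:.0f} MB/s")
```

The checksum matches the one in the dd output earlier in the thread,
since both hashed the same 1 GiB of zeros, and no block device is
touched at any point.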

> Often tests provide side channel results which are not part of the expected 
> normal metric but quantifiable data arises none the less.

Well that test is essentially a ram speed test.

I tried it on my machine with a bunch of slow disks, and got just a
touch less.  I guess your ram is newer than mine.

-- 
Len Sorensen


Re: [GTALUG] New Desktop PC -- debian Linux - Proposed 2 TB HDD Partitioning;

2018-04-18 Thread Russell via talk


On April 18, 2018 8:23:12 AM CDT, Jamon Camisso via talk  
wrote:
>On 2018-04-18 07:35 AM, Russell via talk wrote:
>> 
>> 
>> On April 17, 2018 9:02:14 AM CDT, lsore...@csclub.uwaterloo.ca wrote:
>>> On Tue, Apr 17, 2018 at 08:20:47AM -0400, Russell via talk wrote:
 Currently I have two versions of the same os on the same machine.
>One
>>> on M.2 Xpoint nvram and one on a standard SSD. I'm playing around
>with
>>> tweaking before I do a final config. So far the Xpoint direct hw
>access
>>> appears 3x as fast as the SSD while real world throughput shows up
>>> about twice as fast on the Xpoint, recent INTEL cache fencing
>>> notwithstanding.

 dd if=/dev/zero bs=1M count=1024 | md5sum
 1024+0 records in
 1024+0 records out
 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.35008 s, 795 MB/s
 cd573cfaace07e7949bc0c46028904ff  -

 795 is just under twice as fast as writing to the conventional SSD.
>>>
>>> That command didn't write anything to anywhere.
>> 
>> It wrote a bunch of zeros to a virtual file. Perhaps even touching a
>tmp file along the way. Even if it didn't touch tmp, it wrote the zeros
>someplace in order to perform the count.
>> 
>> I was just trying to comment on the speeds of the two installs
>relative to the respective  disks the OS runs from. I'm sorry you
>didn't understand that. Perhaps I should have said running the OS from
>the two different drives, irrespective of all the other disk writes
>which may happen when the OS operates normally when calling dd from a
>GUI.
>
>Try bonnie++ a few times on each install. It is explicitly designed to
>test drive performance.
>
>Cheers, Jamon

Thanks, I downloaded it and man bonnie++ is my transit read today.

-- 
Russell


Re: [GTALUG] New Desktop PC -- debian Linux - Proposed 2 TB HDD Partitioning;

2018-04-18 Thread Lennart Sorensen via talk
> I guess that would mean that scattering unused space on an SSD between the 
> partitions, means the controller probably sees it as being used. I left chunks 
> allocated at the ends of the drives as recommended. I was just wondering if 
> my stripes would increase that wear level capability, as well as providing 
> for emergency recovery space(s).  

Trying to guess how a drive does its wear leveling is impossible.
Even if you are buying SSDs directly from the manufacturer and have a
relationship with them, they usually won't tell you how it works.

Usually the drive has some extra space by design that it can use as a
pool for writes, and then the old blocks are erased and put into the pool.
If you use trim, you can add currently unused space in the filesystem to
that free pool too.  Some drives will occasionally move data that never
changes from blocks that have very few writes to blocks that are more worn
in the hopes that it will then be able to use those better blocks for more
frequently changing data, but simpler drives may not do such housekeeping.
There really isn't any way to know, unless they choose to advertise it.
Of course it is likely that a drive with a much higher promised number of
write cycles is doing smarter housekeeping to keep block wear as
even as possible.

I am not currently convinced that keeping unallocated space is worth it.
Sure you make the free pool a bit larger, but you still end up writing
the same amount of blocks and you make the usable size smaller.  Having a
larger free pool might help for systems that do a lot of writes since
you are more likely to be able to have a free block to do a write,
while the drive hasn't had time to erase the old blocks.  On the other
hand if you are doing enough writing that it could be a problem, maybe
an SSD is the wrong type of drive to be using.

I have all my SSDs fully allocated and see no reason to do otherwise.
Some people have some crazy theories that often have no facts behind them.
They just assume the drive makers are dumb and haven't thought of this
amazing problem that they just thought of.  Of course some of the really
cheap drives really are that dumb.

-- 
Len Sorensen


Re: [GTALUG] New Desktop PC -- debian Linux - Proposed 2 TB HDD Partitioning;

2018-04-18 Thread Jamon Camisso via talk
On 2018-04-18 07:35 AM, Russell via talk wrote:
> 
> 
> On April 17, 2018 9:02:14 AM CDT, lsore...@csclub.uwaterloo.ca wrote:
>> On Tue, Apr 17, 2018 at 08:20:47AM -0400, Russell via talk wrote:
>>> Currently I have two versions of the same os on the same machine. One
>> on M.2 Xpoint nvram and one on a standard SSD. I'm playing around with
>> tweaking before I do a final config. So far the Xpoint direct hw access
>> appears 3x as fast as the SSD while real world throughput shows up
>> about twice as fast on the Xpoint, recent INTEL cache fencing
>> notwithstanding.
>>>
>>> dd if=/dev/zero bs=1M count=1024 | md5sum
>>> 1024+0 records in
>>> 1024+0 records out
>>> 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.35008 s, 795 MB/s
>>> cd573cfaace07e7949bc0c46028904ff  -
>>>
>>> 795 is just under twice as fast as writing to the conventional SSD.
>>
>> That command didn't write anything to anywhere.
> 
> It wrote a bunch of zeros to a virtual file. Perhaps even touching a tmp file 
> along the way. Even if it didn't touch tmp, it wrote the zeros someplace in 
> order to perform the count.
> 
> I was just trying to comment on the speeds of the two installs relative to 
> the respective  disks the OS runs from. I'm sorry you didn't understand that. 
> Perhaps I should have said running the OS from the two different drives, 
> irrespective of all the other disk writes which may happen when the OS 
> operates normally when calling dd from a GUI.

Try bonnie++ a few times on each install. It is explicitly designed to
test drive performance.

Cheers, Jamon


Re: [GTALUG] New Desktop PC -- debian Linux - Proposed 2 TB HDD Partitioning;

2018-04-18 Thread Russell via talk


On April 17, 2018 9:02:14 AM CDT, lsore...@csclub.uwaterloo.ca wrote:
>On Tue, Apr 17, 2018 at 08:20:47AM -0400, Russell via talk wrote:
>> Currently I have two versions of the same os on the same machine. One
>on M.2 Xpoint nvram and one on a standard SSD. I'm playing around with
>tweaking before I do a final config. So far the Xpoint direct hw access
>appears 3x as fast as the SSD while real world throughput shows up
>about twice as fast on the Xpoint, recent INTEL cache fencing
>notwithstanding.
>> 
>> dd if=/dev/zero bs=1M count=1024 | md5sum
>> 1024+0 records in
>> 1024+0 records out
>> 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.35008 s, 795 MB/s
>> cd573cfaace07e7949bc0c46028904ff  -
>> 
>> 795 is just under twice as fast as writing to the conventional SSD.
>
>That command didn't write anything to anywhere.

It wrote a bunch of zeros to a virtual file. Perhaps even touching a tmp file 
along the way. Even if it didn't touch tmp, it wrote the zeros someplace in 
order to perform the count.

I was just trying to comment on the speeds of the two installs relative to the 
respective disks the OS runs from. I'm sorry you didn't understand that. 
Perhaps I should have said running the OS from the two different drives, 
irrespective of all the other disk writes which may happen when the OS operates 
normally when calling dd from a GUI.
>
>It tests how fast md5sum can calculate the checksum of 1GB of zeroes.
>
>Certainly in no way testing any disk speed.  Reasonable test of CPU and
>ram speed perhaps.

Often tests provide side channel results which are not part of the expected 
normal metric, but quantifiable data arises nonetheless.

My apologies for the misunderstanding. 
>
>-- 
>Len Sorensen

-- 
Russell


Re: [GTALUG] New Desktop PC -- debian Linux - Proposed 2 TB HDD Partitioning;

2018-04-17 Thread Russell via talk


On April 17, 2018 10:30:25 AM CDT, "D. Hugh Redelmeier via talk" 
 wrote:
>| From: Russell via talk 
>
>| >| From: Giles Orr via talk 
>
>| >| I'm with Len - simplify if you can.  Although Unlike him, I
>believe you
>| >| should have at least two (Linux) OS partitions - if one is messed
>up, you
>| >| can boot from the other to fix it.  And I've also - more than once
>-
>| 
>| I also follow this practice. In fact in my current build, I'm looking at
>| overprovisioning my SSD using small fencing stripes. This would be so as
>| to gain several spaces on the disk which I could format in an
>| emergency. I can then recover a backup of the superblock and realign
>| things. In theory anyway.
>
>"Overprovisioning" can mean many things, but it has a specific meaning
>in 
>terms of SSD wear leveling.
>
>Some system-visible space that is not being used can only be considered
>
>overprovisioning (in the SSD wear leveling sense) if the drive's 
>controller "knows" it is unused.

I guess that would mean that scattering unused space on an SSD between the 
partitions, means the controller probably sees it as being used. I left chunks 
allocated at the ends of the drives as recommended. I was just wondering if my 
stripes would increase that wear level capability, as well as providing for 
emergency recovery space(s).  
>
>I haven't carefully read this but it might give answers on how to get 
>empty bits between partitions into the free block pool of the SSD 
>controller:
>

I'll check this out for sure. Thanks again.

-- 
Russell


Re: [GTALUG] New Desktop PC -- debian Linux - Proposed 2 TB HDD Partitioning;

2018-04-17 Thread Russell via talk


On April 17, 2018 10:19:18 AM CDT, "D. Hugh Redelmeier via talk" 
 wrote:
>| From: Russell via talk 
>
>| On April 11, 2018 7:02:56 PM EDT, "D. Hugh Redelmeier via talk"
> wrote:
>
>| When I first started my switch from DOS to *nix, I was told you 
>| absolutely don't want to run two versions of init on the same
>machine. I 
>| believe this is why userland programming uses telinit. It seems to me
>
>| that not letting different distros share a home is a pretty sound
>idea, 
>| even if it is based on superstition.
>| 
>| I forget the exact reasons I was given for always using telinit.
>However 
>| given the fine granularity and ballistic nature of the bits and dword
>
>| bytes, I assume that it could be catastrophic to request PID 1 and 
>| receive pid 1001. The audit trail to follow for recovery would be
>hard 
>| to follow without being able to distinguish the id as being from 
>| userland rather than kernelspace.
>
>I don't understand this at all.  Perhaps it doesn't matter now that we
>all use systemd.  I don't even have local man pages for this stuff.

I gave away my old copies of Unix Unleashed to other *nix newbies over time. 
Most of my intuitions around these processes come from playing catchup by 
poring over old Usenet posts and reading anecdotes and incidents. My poor 
technological language is my own.

>
>There could only be one init process (PID 1).  But you could issue
>"init" as a command or "telinit" as a command.  Both would do stuff
>and then tap the init process on the shoulder (using signals) to
>change its state.
>
>The init command was in the original Unix systems, not telinit.  7th
>Edition doesn't have telinit.  Something later (System 3?  BSD?)
>introduced telinit.
>
>| >Technically you can have more than one EFI System Partition on a
>drive
>| >but don't do this.  I did this by accident and had a few problems.
>| 
>| Out of curiosity, could you say what type of problems they were?
>
>I mostly don't remember.  The computer wasn't mine and so I didn't get
>to observe it systematically.
>
>| >Windows cannot handle this case and firmware setup screens may be no
>| >better.  I don't know of any upside.
>
>That lists two of the problems.
>
>I accidentally had a fresh Fedora install create a second EFI System
>Partition.  It was a too-easy mistake to make ("a poor workman blames
>his tools").

Fedora's alignment with X.509 certs and EFI booting seems to be pretty well 
handled under systemd. I haven't tried Debian on my current build yet. It's my 
first kick at UEFI so I'm taking things slowly.

>
>I never understood why grub offered me the choices it did since
>subsequent Fedora updates didn't always use the same ESP as the UEFI
>Firmware booted from.  I never understood how to control either in a
>way that stuck.
>
>Once I knew what the problem was, and had enough time, I fixed it
>rather than played with it.
>
>I booted from a live stick, copied things I wanted from the second ESP
>to the first, and denatured the second so it would not be found by the
>firmware or the OS.  I must have updated the UUID of the /etc/fstab
>entry for /boot/efi.  I chose to keep the ESP that Windows knew about
>because changing Windows' mind about something like that is hard.

Thats a handy method, good to know about if it happens, thanks.

Dominance is as dominance does. FAT32 is Microsoft's playground, so it's still 
their rules for now. 


-- 
Russell


Re: [GTALUG] New Desktop PC -- debian Linux - Proposed 2 TB HDD Partitioning;

2018-04-17 Thread D. Hugh Redelmeier via talk
| From: Russell via talk 

| >| From: Giles Orr via talk 

| >| I'm with Len - simplify if you can.  Although Unlike him, I believe you
| >| should have at least two (Linux) OS partitions - if one is messed up, you
| >| can boot from the other to fix it.  And I've also - more than once -
| 
| I also follow this practice. In fact in my current build, I'm looking at 
| overprovisioning my SSD using small fencing stripes. This would be so as 
| to gain several spaces on the disk which I could format in an 
| emergency. I can then recover a backup of the superblock and realign 
| things. In theory anyway.

"Overprovisioning" can mean many things, but it has a specific meaning in 
terms of SSD wear leveling.

Some system-visible space that is not being used can only be considered 
overprovisioning (in the SSD wear leveling sense) if the drive's 
controller "knows" it is unused.

I haven't carefully read this but it might give answers on how to get 
empty bits between partitions into the free block pool of the SSD 
controller:



Re: [GTALUG] New Desktop PC -- debian Linux - Proposed 2 TB HDD Partitioning;

2018-04-17 Thread D. Hugh Redelmeier via talk
| From: Russell via talk 

| On April 11, 2018 7:02:56 PM EDT, "D. Hugh Redelmeier via talk" 
 wrote:

| When I first started my switch from DOS to *nix, I was told you 
| absolutely don't want to run two versions of init on the same machine. I 
| believe this is why userland programming uses telinit. It seems to me 
| that not letting different distros share a home is a pretty sound idea, 
| even if it is based on superstition.
| 
| I forget the exact reasons I was given for always using telinit. However 
| given the fine granularity and ballistic nature of the bits and dword 
| bytes, I assume that it could be catastrophic to request PID 1 and 
| receive pid 1001. The audit trail to follow for recovery would be hard 
| to follow without being able to distinguish the id as being from 
| userland rather than kernelspace.

I don't understand this at all.  Perhaps it doesn't matter now that we
all use systemd.  I don't even have local man pages for this stuff.

There could only be one init process (PID 1).  But you could issue
"init" as a command or "telinit" as a command.  Both would do stuff
and then tap the init process on the shoulder (using signals) to
change its state.

The init command was in the original Unix systems, not telinit.  7th
Edition doesn't have telinit.  Something later (System 3?  BSD?)
introduced telinit.

| >Technically you can have more than one EFI System Partition on a drive
| >but don't do this.  I did this by accident and had a few problems.
| 
| Out of curiosity, could you say what type of problems they were?

I mostly don't remember.  The computer wasn't mine and so I didn't get
to observe it systematically.

| >Windows cannot handle this case and firmware setup screens may be no
| >better.  I don't know of any upside.

That lists two of the problems.

I accidentally had a fresh Fedora install create a second EFI System
Partition.  It was a too-easy mistake to make ("a poor workman blames
his tools").

I never understood why grub offered me the choices it did since
subsequent Fedora updates didn't always use the same ESP as the UEFI
Firmware booted from.  I never understood how to control either in a
way that stuck.

Once I knew what the problem was, and had enough time, I fixed it
rather than played with it.

I booted from a live stick, copied things I wanted from the second ESP
to the first, and denatured the second so it would not be found by the
firmware or the OS.  I must have updated the UUID of the /etc/fstab
entry for /boot/efi.  I chose to keep the ESP that Windows knew about
because changing Windows' mind about something like that is hard.


Re: [GTALUG] New Desktop PC -- debian Linux - Proposed 2 TB HDD Partitioning;

2018-04-17 Thread Lennart Sorensen via talk
On Tue, Apr 17, 2018 at 08:20:47AM -0400, Russell via talk wrote:
> Currently I have two versions of the same os on the same machine. One on M.2 
> Xpoint nvram and one on a standard SSD. I'm playing around with tweaking 
> before I do a final config. So far the Xpoint direct hw access appears 3x as 
> fast as the SSD while real world throughput shows up about twice as fast on 
> the Xpoint, recent INTEL cache fencing notwithstanding.
> 
> dd if=/dev/zero bs=1M count=1024 | md5sum
> 1024+0 records in
> 1024+0 records out
> 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.35008 s, 795 MB/s
> cd573cfaace07e7949bc0c46028904ff  -
> 
> 795 is just under twice as fast as writing to the conventional SSD.

That command didn't write anything to anywhere.

It tests how fast md5sum can calculate the checksum of 1GB of zeroes.

Certainly in no way testing any disk speed.  Reasonable test of CPU and
ram speed perhaps.

-- 
Len Sorensen


Re: [GTALUG] New Desktop PC -- debian Linux - Proposed 2 TB HDD Partitioning;

2018-04-17 Thread Russell via talk
On April 11, 2018 7:02:56 PM EDT, "D. Hugh Redelmeier via talk" 
 wrote:
>| From: Giles Orr via talk 
>
>Clunk Clunk Clunk (I'm nodding my head).
>
>| I'm with Len - simplify if you can.  Although Unlike him, I believe
>you
>| should have at least two (Linux) OS partitions - if one is messed up,
>you
>| can boot from the other to fix it.  And I've also - more than once -

I also follow this practice. In fact in my current build, I'm looking at 
overprovisioning my SSD using small fencing stripes. This would be so as to 
gain several spaces on the disk which I could format in an emergency. I 
can then recover a backup of the superblock and realign things. In theory 
anyway.

>had to
>| tinker with two OSes (usually Debian vs. Fedora) to figure out which
>worked
>| best on a particular machine.  So I always have at least two OS

Currently I have two versions of the same os on the same machine. One on M.2 
Xpoint nvram and one on a standard SSD. I'm playing around with tweaking before 
I do a final config. So far the Xpoint direct hw access appears 3x as fast as 
the SSD while real world throughput shows up about twice as fast on the Xpoint, 
recent INTEL cache fencing notwithstanding.

dd if=/dev/zero bs=1M count=1024 | md5sum
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.35008 s, 795 MB/s
cd573cfaace07e7949bc0c46028904ff  -

795 is just under twice as fast as writing to the conventional SSD.

>
>I used to always have two / partitions for two separate OSes.  When a
>new OS release came out, I always did a fresh install into the other /
>partition.  This meant that the old system could still be run.  Now
>I've gotten a bit lazy and do upgrades in place.  Still, having space
>for a separate installation is comforting.
>
>Fedora seems to have been trustable with upgrades-in-place for a few
>years.

I'm currently on Fedora 27 with Gnome using the Nouveau driver. I usually never 
automatically update but while I sort this new box and throughout the Spectre 
stuff, updates are automatic. This release had both the gnome update notifier 
and dnfdragora enabled by default which was confusing at first but I got used 
to it.

>According to Lennart, debian has been trustable for a long long time.

I was frozen at 2.6 on Debian till 2010 or so. I didn't automatically upgrade 
during that time but as I recall when I did there were few problems. Notably 
the introduction of PulseAudio and ongoing issues with xsane and colord 
profiles. Although recently I switched back to RH for myself. I did this once 
it looked like SELinux was sorted with respect to systemd. I made the switch, 
mostly to align myself more in keeping with FOSS libraries.

>
>|  And in the name of simplicity, each OS partition includes its
>| own /var, /usr, /usr/local ... the only separate partitions are swap
>and
>| /home, because I want that to be separate and accessible to each of
>the OS
>| partitions - and separate and not affected by OS upgrades.
>
>Superstitiously, I won't let different distros share a /home.  I fear
>a conflicting set of config files.  I don't know that this is a
>problem, I just don't really want to find out.

When I first started my switch from DOS to *nix, I was told you absolutely 
don't want to run two versions of init on the same machine. I believe this is 
why userland programming uses telinit. It seems to me that not letting 
different distros share a home is a pretty sound idea, even if it is based on 
superstition.

I forget the exact reasons I was given for always using telinit. I assume, 
though, that it could be catastrophic to aim a request at PID 1 and have it 
land on PID 1001 instead, and the audit trail for recovery would be hard to 
follow without being able to distinguish whether the request came from 
userland rather than kernel space.

>
>For this reason, I don't tend to let /home fill the drive.  I invent
>another filesystem to occupy any spare space.  Usually /space.

I use /DATA; the caps are how I remind myself, at a glance, that I created 
the space.
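For reference, a spare-space partition like that is just an ordinary fstab entry; a sketch (device path and filesystem type are illustrative, not from this thread):

```text
# /etc/fstab line mounting a spare-space partition at /DATA
/dev/sda11  /DATA  ext4  defaults  0  2
```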
>
>|  These days it
>| seems you want a /boot partition though - but I'm not the one to
>explain
>| the ins and outs of that.
>
>I've not seen a use for a /boot partition.
>
>With UEFI booting, you need a separate EFI System Partition.  This
>will be shared by all systems that boot off that drive.  This gets
>mounted on the mount point /boot/efi.  It will be some variant of FAT
>but the partition type will be distinct.
>
>Technically you can have more than one EFI System Partition on a drive
>but don't do this.  I did this by accident and had a few problems.

Out of curiosity, could you say what type of problems they were?

>Windows cannot handle this case and firmware setup screens may be no
>better.  I don't know of any upside.
>---
>Talk Mailing List
>talk@gtalug.org
>https://gtalug.org/mailman/listinfo/talk

-- 
Russell
---

Re: [GTALUG] New Desktop PC -- debian Linux - Proposed 2 TB HDD Partitioning;

2018-04-11 Thread Giles Orr via talk
On 11 April 2018 at 23:02, D. Hugh Redelmeier via talk 
wrote:

> | From: Giles Orr via talk 
> |  These days it
> | seems you want a /boot partition though - but I'm not the one to explain
> | the ins and outs of that.
>
> I've not seen a use for a /boot partition.
>
> With UEFI booting, you need a separate EFI System Partition.  This
> will be shared by all systems that boot off that drive.  This gets
> mounted on the mount point /boot/efi.  It will be some variant of FAT
> but the partition type will be distinct.
>

To correct my own post based on what Hugh said ... I was both right and
horribly wrong about that.  I was entirely correct "I'm not the one to
explain [this]."  And horribly wrong: what you usually want is what Hugh
said: an EFI System Partition.  I'd conflated that with a /boot/ partition
because it appears there.  My apologies.

-- 
Giles
https://www.gilesorr.com/
giles...@gmail.com
---


Re: [GTALUG] New Desktop PC -- debian Linux - Proposed 2 TB HDD Partitioning;

2018-04-11 Thread D. Hugh Redelmeier via talk
| From: Giles Orr via talk 

Clunk Clunk Clunk (I'm nodding my head).

| I'm with Len - simplify if you can.  Although unlike him, I believe you
| should have at least two (Linux) OS partitions - if one is messed up, you
| can boot from the other to fix it.  And I've also - more than once - had to
| tinker with two OSes (usually Debian vs. Fedora) to figure out which worked
| best on a particular machine.  So I always have at least two OS

I used to always have two / partitions for two separate OSes.  When a
new OS release came out, I always did a fresh install into the other /
partition.  This meant that the old system could still be run.  Now
I've gotten a bit lazy and do upgrades in place.  Still, having space
for a separate installation is comforting.

Fedora seems to have been trustable with upgrades-in-place for a few years.
According to Lennart, debian has been trustable for a long long time.
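For reference, the usual in-place upgrade incantations look roughly like this (release numbers are illustrative; these require a live system and root, so treat this as a sketch of the standard procedure rather than a recipe):

```shell
# Fedora: upgrade in place via the system-upgrade dnf plugin.
sudo dnf install dnf-plugin-system-upgrade
sudo dnf system-upgrade download --releasever=28
sudo dnf system-upgrade reboot

# Debian: point /etc/apt/sources.list at the new release codename, then:
sudo apt update
sudo apt full-upgrade
```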

|  And in the name of simplicity, each OS partition includes its
| own /var, /usr, /usr/local ... the only separate partitions are swap and
| /home, because I want that to be separate and accessible to each of the OS
| partitions - and separate and not affected by OS upgrades.

Superstitiously, I won't let different distros share a /home.  I fear
a conflicting set of config files.  I don't know that this is a
problem, I just don't really want to find out.

For this reason, I don't tend to let /home fill the drive.  I invent
another filesystem to occupy any spare space.  Usually /space.

|  These days it
| seems you want a /boot partition though - but I'm not the one to explain
| the ins and outs of that.

I've not seen a use for a /boot partition.

With UEFI booting, you need a separate EFI System Partition.  This
will be shared by all systems that boot off that drive.  This gets
mounted on the mount point /boot/efi.  It will be some variant of FAT
but the partition type will be distinct.

Technically you can have more than one EFI System Partition on a drive
but don't do this.  I did this by accident and had a few problems.
Windows cannot handle this case and firmware setup screens may be no
better.  I don't know of any upside.
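Concretely, the ESP is a small FAT partition carrying the EFI System Partition type GUID, and distros mount it with an fstab line along these lines (the UUID value is illustrative; FAT volumes get the short XXXX-XXXX form):

```text
# /etc/fstab -- mount the EFI System Partition at /boot/efi
UUID=1234-ABCD  /boot/efi  vfat  umask=0077  0  2
```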
---


Re: [GTALUG] New Desktop PC -- debian Linux - Proposed 2 TB HDD Partitioning;

2018-04-11 Thread Giles Orr via talk
On 11 April 2018 at 11:12, Lennart Sorensen via talk 
wrote:

> On Wed, Apr 11, 2018 at 09:58:05AM -0400, Steve Petrie, P.Eng. via talk
> wrote:
> > Warm Greetings To GTALUG Members,
> >
> > This coming weekend (Friday 13 April 2018) I will be building my new
> desktop PC with the help of my friend who has built quite a few PCs for his
> employer.
> >
> > The new PC will be running debian Linux and will soon take over duties
> from an ancient Dell desktop PC running Windows XP (acquired new in March
> 2005)..
> >
> > I will post the final hardware configuration on PCPartPicker once the
> new PC is operational.
> >
> > * * *
> > * * *
> >
> > Meanwhile, I would like to ask GTALUG members please to take a look at
> the partitioning configuration I am proposing for the 2 TB Western Digital
> HDD (best to stretch your email client window to defeat word wrap):
> >   [detailed partitioning layout snipped -- quoted in full in Lennart's
> > message elsewhere in this digest]
> >
> > Comments, criticisms, questions welcome.
>
> Do you actually work on your computer or do you spend all day shuffling
> bits of old OSs around?
>
> Where is the UEFI boot partition?
>
> I would never waste time or space on a rescue boot.  I have USB keys
> for that.
>
> I keep one OS linux installed and maintained.  I have never had a problem
> upgrading that needed a reinstall.  My Debian 2.0 install lasted until
> 486 support was dropped from Debian.  I forget what version that
> eventually was.  I keep one windows install.  I can't imagine a benefit
> of doing anything more complex and can think of a ton of reasons not to
> have more.

Re: [GTALUG] New Desktop PC -- debian Linux - Proposed 2 TB HDD Partitioning;

2018-04-11 Thread Bob Jonkman via talk

Hi Steve: You're overthinking this.

Boot from install media, accept the defaults, install. This works for
99% of the people. Once you've got a bit of familiarity with the
system you'll probably want to re-do the installation anyway, no
matter what you started with.

Keep your old system around until you find you're no longer going back
to it for those few things you do that are just too different in
GNU/Linux from the way Windows does it.

--Bob, who always considers V1.0 a throwaway.



On 2018-04-11 09:58 AM, Steve Petrie, P.Eng. via talk wrote:
> Warm Greetings To GTALUG Members,
> 
> This coming weekend (Friday 13 April 2018) I will be building my
> new desktop PC with the help of my friend who has built quite a few
> PCs for his employer.
> 
> The new PC will be running debian Linux and will soon take over
> duties from an ancient Dell desktop PC running Windows XP (acquired
> new in March 2005)..
> 
> I will post the final hardware configuration on PCPartPicker once
> the new PC is operational.
> 
> * * * * * *
> 
> Meanwhile, I would like to ask GTALUG members please to take a
> look at the partitioning configuration I am proposing for the 2 TB
> Western Digital HDD (best to stretch your email client window to
> defeat word wrap):
> 
> [detailed partitioning layout snipped -- quoted in full in Lennart's
> message elsewhere in this digest]
> 
> Comments, criticisms, questions welcome.
> 
> Best Regards,
> 
> Steve
> 
> 
> 
> 

-- 
Bob Jonkman   Phone: +1-519-635-9413
SOBAC Microcomputer Services http://sobac.com/sobac/
Software   ---   Office & Business Automation   ---   Consulting
GnuPG Fngrprnt:04F7 742B 8F54 C40A E115 26C2 B912 89B0 D2CC E5EA




Re: [GTALUG] New Desktop PC -- debian Linux - Proposed 2 TB HDD Partitioning;

2018-04-11 Thread Lennart Sorensen via talk
On Wed, Apr 11, 2018 at 09:58:05AM -0400, Steve Petrie, P.Eng. via talk wrote:
> Warm Greetings To GTALUG Members,
> 
> This coming weekend (Friday 13 April 2018) I will be building my new desktop 
> PC with the help of my friend who has built quite a few PCs for his employer.
> 
> The new PC will be running debian Linux and will soon take over duties from 
> an ancient Dell desktop PC running Windows XP (acquired new in March 2005)..
> 
> I will post the final hardware configuration on PCPartPicker once the new PC 
> is operational.
> 
> * * *
> * * *
> 
> Meanwhile, I would like to ask GTALUG members please to take a look at the 
> partitioning configuration I am proposing for the 2 TB Western Digital HDD 
> (best to stretch your email client window to defeat word wrap):
>   ==> *** STANDARD LINUX ***
> 
>  /device partition
> 
>   ==> linux normal boot #1: (current active version of linux os, will be 
> recycled for next version)
>  /dev/sda1   gpt001  ext2???/boot
>  /dev/sda2   gpt002  ext3 50/ (root), /bin, /dev, /etc, 
> /initrd, /lib, sbin
> 
>   ==> linux normal boot #2: (next version of linux os, will become current 
> version)
>  /dev/sda1   gpt003  ext2???/boot
>  /dev/sda2   gpt004  ext3 50/ (root), /bin, /dev, /etc, 
> /initrd, /lib, sbin
> 
>   ==> linux rescue boot:
>  /dev/sda1   gpt005  ext2???/boot 
>  /dev/sda2   gpt006  ext3 50/ (root), /bin, /dev, /etc, 
> /initrd, /lib, sbin
>   
> -
>   150 GB + 3X boot
> 
>   ==> linux temporary:
>  /dev/sda3   gpt103  ext4 64(swap1)
>  /dev/sda4   gpt104  ext4 64(swap2)
>  /dev/sda5   gpt105  ext4 64(swap3)
>  /dev/sda6   gpt106  ext4200/tmp
>   -
>   392 GB
> 
>   ==> linux permanent:
>  /dev/sda7   gpt207  ext4100/var
>  /dev/sda8   gpt208  ext4100/usr
> 
>   ==> linux user permanent:
>  /dev/sda9   gpt309  ext4100/usr/local
>  /dev/sda10  gpt310  ext4100/home
>   
>   400 GB
> 
> 
>   ==> *** USER-DEFINED ***
> 
>  /dev/sda51  gpt551..557 ext4 75X7  /!_d ... /!_j (current, clone 
> winxp partition structure, allow for growth)
>  /dev/sda52  gpt599  ext4 70/!~dell (WinXP archive C..J: 
> ../winxp_c .. ../winxp_j (WinXP archive C..J))
>   
> -
>   595 GB
> 
>   ==> other operating systems:
>  /dev/sda61  gpt661  ext4???/._win7   virtualized windows 7
>  /dev/sda62  gpt662  ext4???/._win7_1
>  /dev/sda63  gpt663  ext4???/._dfly   virtualized dragonflybsd
>  /dev/sda64  gpt664  ext4???/._dfly_1
>   
>   ??? GB
> 
>   ==> ssd partitions:
>  /dev/sda71  gpt771  ext4 --/.~ssd01   (ssd partition) | 
> total ssd
>  /dev/sda72  gpt772  ext4 --/.~ssd02   (ssd partition) | 
> capacity
>  /dev/sda73  gpt773  ext4 --/.~ssd03   (ssd partition) |  256 
> GB
>  -   -      -   
> ---
>   N/A GB
> 
>   ==> allocated:1537 GB (+ 3x boot)
>   ==> unallocated: + 463 GB (- 3x boot)
>  -   -      -   
> ---
>   ==> Total HDD Capacity:   2000 GB
> Note 1: Please be aware that I am a complete Linux newbie but with a software 
> engineering background.
> 
> Note 2: Hoping to be able to swap back and forth between an "active" version 
> of Linux and the "next" version of Linux, by switching the roles of 
> partitions   (gpt001, gpt002) <==> (gpt003,gpt004) .
> 
> Note 3: Please be aware that I intend to maintain most of my user-related 
> content in the seven (7) partitions gpt551..gpt557
> 
> * * *
> * * *
> 
> Comments, criticisms, questions welcome.

Do you actually work on your computer or do you spend all day shuffling
bits of old OSs around?

Where is the UEFI boot partition?

I would never waste time or space on a rescue boot.  I have USB keys
for that.
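Writing a rescue image to a USB key is a one-liner with dd. The sketch below copies a stand-in file so it can run anywhere; on real hardware the target would be the whole USB device (e.g. /dev/sdX, a placeholder name -- verify it with lsblk before writing, since this overwrites the target):

```shell
# Stand-in for a rescue ISO (real use: a downloaded live/rescue image).
dd if=/dev/urandom of=/tmp/rescue.iso bs=1M count=4 status=none
# Write the image to the target. On real hardware this would be
# of=/dev/sdX -- the whole USB device, not a partition.
dd if=/tmp/rescue.iso of=/tmp/usb.img bs=4M conv=fsync status=none
# Verify the copy matches the source byte-for-byte.
cmp -s /tmp/rescue.iso /tmp/usb.img && echo "copy verified"
```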

I keep one Linux OS installed and maintained.  I have never had a problem
upgrading that needed a reinstall.  My Debian 2.0 install lasted until
486 support was dropped from Debian.  I forget what version that
eventually was.  I keep one windows install.  I can't imagine a benefit
of doing anything more complex and can think of a ton of reasons not to
have more.

As for virtualized OSes, disk images are simpler and let you throw them
all on one partition.  Sure raw partitions can have slight