On Sat, 2018-08-04 at 13:03 -0400, Art wrote:
> Encryption we have today, is just the home folder. If encryption of the
> entire drive is used, it doesn't take a rocket scientist to know that
> the performance slows severely, that more ram is needed and that more
> SSD read/write cycles will be needed. This increases the wear and tear
> on SSD's, which have a finite lifetime AND more system files need to be
> in ram to help avoid excessive delays.
Hi,

sorry, I couldn't resist cross-posting. A few days ago I mounted the third SSD. Take a look at the "cable clutter license" picture at https://i.imgur.com/DxcfbSz.png , it shows my PC's old and cheap case. With HDDs mounted in this case you not only hear even the quiet HDDs, sometimes the side walls of the case also start vibrating and make a very annoying noise.

I decided to treat my SSDs the same way I treated HDDs. There are two main differences:

- When I used HDDs my PC was usually up 24/7; since I use SSDs I tend to shut the PC down and start it up each day.
- The SSDs get trimmed once a week.

The SSDs I use are more or less the cheapest SSDs available. The vendor's Linux tool is very good; running

gksudo ocz-ssd-utility &

makes it easy to update the firmware, to get some vendor-specific information and to get much better SMART output than smartctl provides. Checking that everything regarding trimming is OK can be done with

systemctl status fstrim.timer
systemctl status fstrim.service
journalctl -u fstrim.service

A 1.5-year-old TOSHIBA TL100 223.57 GiB cost 77.95 €.
A 1-year-old TOSHIBA TL100 223.57 GiB cost 86.50 €.
A brand spanking new TOSHIBA TR200 223.57 GiB costs 49.99 €.

A guess regarding the expected lifespan, based upon the health status reported by the ocz-ssd-utility:

- brand spanking new: 100%
- 1 year old: 88%
- 1.5 years old: 65%

The expected lifespan might be around >= 3 years. The lifespan of my last HDDs, for the same kind of usage, was around >= 7 years.

An unimportant drawback is usage with "vintage" installs. My everyday Linux is an Arch Linux install. I'm using syslinux, not grub, so to avoid chainloading the Arch Linux install contains the kernels of all other installs, too. Apart from Arch Linux, let's take a look at two other Linux installs: the current, up-to-date Ubuntu (server image) 16.04.5 LTS install and the "vintage" Ubuntu 12.10 (Xubuntu or Ubuntu Studio) install. All three installs support ext4 and are installed on ext4 partitions.

Arch Linux was migrated with "cp -ai from_the_oldest_TL100 to_the_new_TR200"; I removed the boot flag from the original Arch Linux partition, added it to the new Arch Linux copy and ran "extlinux --install ..." to install the bootloader (a rough recap of the steps is sketched after the résumé). The two Ubuntu installs remain on a TL100 drive.

[rocketmouse@archlinux ~]$ grep PRETTY /mnt/moonstudio/etc/os-release
PRETTY_NAME="Ubuntu 16.04.5 LTS"
[rocketmouse@archlinux ~]$ grep archlinux /mnt/moonstudio/etc/fstab
#dev/sda9 /mnt/archlinux ext4 defaults,relatime 0 2
/dev/sdc1 /mnt/archlinux ext4 defaults,relatime 0 2
/mnt/archlinux/.boot/ubuntu_moonstudio/boot /boot none bind 0 0

Remember, the Ubuntu kernels are on the Arch Linux partition. Mounting Arch Linux and bind-mounting to /boot makes sense, to be able to upgrade a kernel. This works without issues for Ubuntu 16.04.5 LTS, and IIRC it also worked without issues for the "vintage" Ubuntu 12.10 (EOL May 16, 2014) when the Arch Linux install was on the oldest TL100. After the migration of the Arch Linux install to the new SSD it still works for Ubuntu 16.04.5 LTS, but it fails for the "vintage" Ubuntu 12.10. I get a message claiming there is serious damage on the Arch Linux file system that can't be repaired, that a newer version of ext4 is required, or fsck.ext4, or whatsoever. I don't care; I simply commented out mounting archlinux and the bind mount to /boot in the "vintage" Ubuntu 12.10's fstab.

Résumé: If the SSDs last for >= 3 years, that's OK for me.
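
For the record, a rough sketch of the migration, from memory. The mount points are made up for this mail and the device names are only guessed from the fstab above (/dev/sda9 old Arch Linux, /dev/sdc1 new Arch Linux on the TR200), so treat it as an outline, not an exact recipe:

# new file system on the TR200 partition, then copy the old Arch Linux
# root over, preserving ownership, permissions and timestamps
mkfs.ext4 /dev/sdc1
mount /dev/sda9 /mnt/old_arch
mount /dev/sdc1 /mnt/new_arch
cp -ai /mnt/old_arch/. /mnt/new_arch/

# move the boot flag from the old partition to the new one
parted /dev/sda set 9 boot off
parted /dev/sdc set 1 boot on

# install extlinux into the syslinux directory of the copy
# (the path is just an example)
extlinux --install /mnt/new_arch/boot/syslinux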
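
Regarding the ext4 error of the "vintage" install: a guess, not verified, is that mkfs.ext4 from a current Arch Linux enables features on the new file system (metadata_csum or 64bit, for example) that the old 12.10 kernel and e2fsprogs don't know yet, so they report the file system as damaged. Whoever cares could list the enabled features with something like

tune2fs -l /dev/sdc1 | grep -i features

run from a recent install (/dev/sdc1 as in the fstab above).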
Since I didn't do any troubleshooting regarding the issue with the "vintage" install, you can assume that I don't care much about it.

Even if it's said that, due to the missing cache, those SSDs are more or less the slowest SSDs on the market, they are unbelievably fast, absolutely fast enough for my needs. Claiming that they are slow is like claiming that a car that does 290 km/h is disgustingly slow because all other cars of this kind usually do 355 km/h. Fuel consumption might be much more important: every SSD draws less power than an HDD does, which could make a big difference, especially for portable computers. I don't know whether a so-called "slow" SSD in the end draws less power than a "fast" SSD, but I don't care whether an SSD is fast or faster, since I would be unlikely to notice the difference.

Since I have long-term experience with HDDs, I notice the first signs when an HDD starts to reach the end of its life. I don't know whether there are any signs at all before an SSD dies. If an HDD is dead and the backup happens to be missing a little bit of data, it's usually possible to revive the HDD to rescue the data. Would this be possible for a dead SSD, too?

What are your opinions, experiences and thoughts?

Regards,
Ralf

-- 
xubuntu-users mailing list
[email protected]
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/xubuntu-users
