I wish you luck in your recovery adventures. I suspect that you've made it harder for yourself than you perhaps intended.
I have put considerable thought, and effort, into my computing architecture. All the work was in the setup; in day-to-day use I don't have to think about the complexity. Here's the TL;DR version.

SSDs fail, catastrophically. They usually don't give you any advance warning either, and the data, once lost, is GONE. (Spinning drives can sometimes be coerced into giving you one last copy of the data as they fail.) You MUST be prepared for eventual, inevitable failure, regardless of medium. If you are not, you are being a fool. See http://formicapeak.com/~jimc/flash.html and remember that currently all SSDs are built out of flash memory. Reality can be harsh, which sucks. Deal with it.

My main system, this one, is a Mac Pro from 2009. It has four internal SATA drive bays. I use a 120GB SSD to hold the OS and applications. It boots very fast as a result of this SSD. If/when it craps out, everything there can be regenerated. In other words, there is no critical data located on the SSD. (It has 12 3GHz Xeon cores, 24 hyperthreads, and 48GB of RAM. It never even breathes hard no matter what I throw at it. It is a magnificent beast. I have a spare, if it should fail.)

Though fairly unusual these days, it IS possible to partition the entire system across multiple physical drives. Macs are Unix-based, and Unix systems have always supported such partitioning. I think Windows can do something similar, but I'm not familiar enough with it to say for sure. Linux can certainly do this partitioning too.

My home directory (under which, as is typical for all Unix systems, ALL personal data is located) is on 2TB spinning media. It's slower, but is not subject to degradation with use, unlike SSDs. The slower speed is essentially unnoticeable, as very few applications spend any significant time reading/writing their data files. (Not when compared to swapping chunks of the applications themselves in and out.) There are exceptions, of course.

ALL of this system is backed up via Time Machine. Besides its utility in recovering past versions of files, it is primarily there to provide catastrophic recovery. If a drive fails, either SSD or spinning, a replacement can be jacked into place and a full restore performed to the replaced device. It takes time, but afterwards it is as if the failure never happened. The most that is at risk is the last hour's work, because that's Time Machine's basic cycle time.

My Time Machine server uses a 4-drive RAID 0, with only two of the drives in the server at any given time. The other two are rotated into service periodically, and serve as off-line (and off-site) backup, for protection against fire, theft, etc. My data is never all in one place at one time. Ever. It would take something like an asteroid strike to get it all, in which case I probably wouldn't be in a position to care anyway.

Because the bulk of my working day is spent in Linux and Windows virtual machines, via VMware, the virtual machine file store is NOT part of Time Machine's purview. (This is because the VM files are not file-by-file, but device-by-device, which means that these few large files would be continually churning into Time Machine's store, ruining its utility.) I have to be cognizant of that, and ensure that my data within the VMs is protected via some means other than Time Machine. Due to the nature of my work, I'm using Subversion source-code-control tools for this.
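(For anyone who wants to do the same sort of thing on their own Mac, here is a minimal sketch of one way to keep a folder out of Time Machine and push a safety copy of it elsewhere. The paths are made up for illustration, and this is just one way to do it, not necessarily how my own exclusion is set up. Apple's tmutil is the Time Machine command-line tool, and rsync is the side-copy workhorse; the little Python wrapper is only there to tie the two together.)

    #!/usr/bin/env python3
    # Sketch only: keep a VM store out of Time Machine and mirror it elsewhere.
    # Both paths below are hypothetical placeholders.
    import subprocess

    VM_STORE = "/Volumes/VMStore/Virtual Machines"    # hypothetical VM file store
    SIDE_COPY = "/Volumes/Spare2TB/vm-safety-copy/"   # hypothetical extra-copy destination

    # tmutil addexclusion tells Time Machine to skip this folder on future backups.
    subprocess.run(["tmutil", "addexclusion", VM_STORE], check=True)

    # rsync -a preserves permissions and timestamps; --delete keeps the copy an
    # exact mirror of the source. Trailing slash copies the folder's contents.
    subprocess.run(["rsync", "-a", "--delete", VM_STORE + "/", SIDE_COPY], check=True)

The exclusion only needs to be set once; the rsync step is the kind of thing you rerun whenever you want a fresh copy.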
That arrangement is weaker than I'd like, and I suppose I really should find some sort of Time Machine-like tool that runs inside Linux. (I only do final testing under Windows; all development and most testing is done under Linux. I'm using the multi-platform Lazarus IDE.) But so far I haven't been sorry that I still have that level of exposure to loss. If I get particularly nervous, I use rsync to save copies of the work involved to other places from which I could recover at need.

The VM store is on 2TB rotating media, not SSD. (Third drive bay.) This IS definitely slower than optimal, but I am willing to tolerate that because I don't save/restore VMs all that often, and I cannot tolerate a sudden, surprise failure of the VM store. On rotating media there is no degradation with use, and failures usually start showing up as retries and other symptoms that can alert me to problems before they become catastrophic. When an SSD fails, it's usually sudden, without warning, and final. No, thank you.

Recently I bought another external drive for my laptop, which is my backup computing environment. I deliberately selected a spinning 2TB drive, because it's primarily for the VM store. Yes, it's more delicate and slower than an SSD. I don't care; I simply cannot tolerate flash memory's inherent failure characteristics there.

-- Jim