Re: [zfs-discuss] Recovering an array on Mac

2008-07-11 Thread Lee Fyock

So, does anybody have an approach to recovering this filesystem?

Is there a way to relabel the drives so that ZFS will recognize them,  
without losing the data?


Thanks,
Lee

On Jul 5, 2008, at 1:24 PM, Lee Fyock wrote:


Hi--

Here's the scoop, in probably too much detail:

I'm a sucker for new filesystems and new tech in general. For you  
old-time Mac people, I installed Sequoia when it was first seeded,  
and had to reformat my drive several times as it grew to the final  
release. I flipped the journaled flag before I even knew what it  
meant. I installed the pre-Leopard ZFS seed and have been using it  
for, what, a year?


So, I started with two 500 GB drives in a single pool, not mirrored.  
I bought a 1 TB drive and added it to the pool. I bought another 1  
TB drive, and finally had enough storage (~1.5 TB) to mirror my  
disks and be all set for the foreseeable future.


In order to migrate my data from a single pool of 500 GB + 500 GB +  
1 TB to a mirrored 500GB/500GB + 1TB/1TB pool, I was planning on  
doing this:


1) Copy everything to the New 1 TB drive (slopping what wouldn't fit  
onto another spare drive)

2) Upgrade to the latest ZFS for Mac release (117)
3) Destroy the existing pool
4) Create a pool with the two 500 GB drives
5) Copy everything from the New drive to the 500 GB x 2 pool
6) Create a mirrored pool with the two 1 TB drives
7) Copy everything from the 500 GB x 2 pool to the mirrored 1 TB pool
8) Destroy the 500 GB x 2 pool, and create it as a 500GB/500GB  
mirrored pair and add it to the 1TB/1TB pool


During step 7, while I was at work, the power failed at home,  
apparently long enough to drain my UPS.


When I rebooted my machine, both pools refused to mount: the 500+500  
pool and the 1TB/1TB mirrored pool. Just about all my data is lost.  
This was my media server containing my DVD rips, so everything is  
recoverable in that I can re-rip 1+TB, but I'd rather not.


diskutil list says this:
/dev/disk1
   #:                     TYPE NAME          SIZE        IDENTIFIER
   0:   FDisk_partition_scheme              *465.8 Gi    disk1
   1:                                        465.8 Gi    disk1s1

/dev/disk2
   #:                     TYPE NAME          SIZE        IDENTIFIER
   0:   FDisk_partition_scheme              *465.8 Gi    disk2
   1:                                        465.8 Gi    disk2s1

/dev/disk3
   #:                     TYPE NAME          SIZE        IDENTIFIER
   0:   FDisk_partition_scheme              *931.5 Gi    disk3
   1:                                        931.5 Gi    disk3s1

/dev/disk4
   #:                     TYPE NAME          SIZE        IDENTIFIER
   0:   FDisk_partition_scheme              *931.5 Gi    disk4
   1:                                        931.5 Gi    disk4s1


During step 2, I created the pools using "zpool create media mirror /dev/disk3 /dev/disk4" then "zpool upgrade", since I got warnings that the filesystem version was out of date. Note that I created zpools referring to the entire disk, not just a slice. I had labelled the disks using

diskutil partitionDisk /dev/disk2 GPTFormat ZFS %noformat% 100%

but now the disks indicate that they're FDisk_partition_scheme.

Googling for FDisk_partition_scheme yields http://lists.macosforge.org/pipermail/zfs-discuss/2008-March/000240.html, among other things, but no hint of where to go from here.


zpool import -D reports no pools available to import.

All of this is on a Mac Mini running Mac OS X 10.5.3, BTW. I own  
Parallels if using an OpenSolaris build would be of use.


So, is the data recoverable?

Thanks!
Lee


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Recovering an array on Mac

2008-07-05 Thread Lee Fyock

Hi--

Here's the scoop, in probably too much detail:

I'm a sucker for new filesystems and new tech in general. For you old-time Mac people, I installed Sequoia when it was first seeded, and had
to reformat my drive several times as it grew to the final release. I  
flipped the journaled flag before I even knew what it meant. I  
installed the pre-Leopard ZFS seed and have been using it for, what, a  
year?


So, I started with two 500 GB drives in a single pool, not mirrored. I  
bought a 1 TB drive and added it to the pool. I bought another 1 TB  
drive, and finally had enough storage (~1.5 TB) to mirror my disks and  
be all set for the foreseeable future.


In order to migrate my data from a single pool of 500 GB + 500 GB + 1  
TB to a mirrored 500GB/500GB + 1TB/1TB pool, I was planning on doing  
this:


1) Copy everything to the New 1 TB drive (slopping what wouldn't fit  
onto another spare drive)

2) Upgrade to the latest ZFS for Mac release (117)
3) Destroy the existing pool
4) Create a pool with the two 500 GB drives
5) Copy everything from the New drive to the 500 GB x 2 pool
6) Create a mirrored pool with the two 1 TB drives
7) Copy everything from the 500 GB x 2 pool to the mirrored 1 TB pool
8) Destroy the 500 GB x 2 pool, and create it as a 500GB/500GB  
mirrored pair and add it to the 1TB/1TB pool
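For concreteness, steps 3 through 8 translate roughly into commands like the ones below. This is only a sketch: the pool names, the /Volumes mount paths, and the use of rsync for the copy steps are illustrative assumptions, not what was actually typed.

# 3) destroy the existing 500 GB + 500 GB + 1 TB pool (name assumed)
sudo zpool destroy oldpool

# 4) unmirrored pool from the two 500 GB drives
sudo zpool create temp500 /dev/disk1 /dev/disk2

# 5) copy the data back off the new 1 TB drive
sudo rsync -a /Volumes/new1tb/ /Volumes/temp500/

# 6) mirrored pool from the two 1 TB drives
sudo zpool create media mirror /dev/disk3 /dev/disk4

# 7) copy everything onto the mirrored pool
sudo rsync -a /Volumes/temp500/ /Volumes/media/

# 8) rebuild the 500 GB pair as a mirror and add it to the same pool
sudo zpool destroy temp500
sudo zpool add media mirror /dev/disk1 /dev/disk2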


During step 7, while I was at work, the power failed at home,  
apparently long enough to drain my UPS.


When I rebooted my machine, both pools refused to mount: the 500+500  
pool and the 1TB/1TB mirrored pool. Just about all my data is lost.  
This was my media server containing my DVD rips, so everything is  
recoverable in that I can re-rip 1+TB, but I'd rather not.


diskutil list says this:
/dev/disk1
   #:                     TYPE NAME          SIZE        IDENTIFIER
   0:   FDisk_partition_scheme              *465.8 Gi    disk1
   1:                                        465.8 Gi    disk1s1

/dev/disk2
   #:                     TYPE NAME          SIZE        IDENTIFIER
   0:   FDisk_partition_scheme              *465.8 Gi    disk2
   1:                                        465.8 Gi    disk2s1

/dev/disk3
   #:                     TYPE NAME          SIZE        IDENTIFIER
   0:   FDisk_partition_scheme              *931.5 Gi    disk3
   1:                                        931.5 Gi    disk3s1

/dev/disk4
   #:                     TYPE NAME          SIZE        IDENTIFIER
   0:   FDisk_partition_scheme              *931.5 Gi    disk4
   1:                                        931.5 Gi    disk4s1


During step 2, I created the pools using "zpool create media mirror /dev/disk3 /dev/disk4" then "zpool upgrade", since I got warnings that the filesystem version was out of date. Note that I created zpools referring to the entire disk, not just a slice. I had labelled the disks using

diskutil partitionDisk /dev/disk2 GPTFormat ZFS %noformat% 100%

but now the disks indicate that they're FDisk_partition_scheme.
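Two read-only checks can show what partition map each disk actually carries now, and whether a GPT still survives behind the MBR that diskutil is reporting. Device names follow the listing above, and the presence of gpt(8) on 10.5 is an assumption.

# what the OS currently thinks the partition map is
diskutil info disk2

# dump the raw MBR/GPT tables without writing anything
sudo gpt -r show /dev/disk2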

Googling for FDisk_partition_scheme yields http://lists.macosforge.org/pipermail/zfs-discuss/2008-March/000240.html, among other things, but no hint of where to go from here.


zpool import -D reports no pools available to import.
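Before writing the pools off, a couple more read-only probes may help narrow down whether the ZFS labels survived. Whether zdb ships with the Mac ZFS package is an assumption here; the device names are the ones from the diskutil listing.

# -D only lists pools that were explicitly destroyed; a plain scan of
# the device nodes is a separate step
sudo zpool import
sudo zpool import -d /dev

# print whatever ZFS labels are still readable on the former pool
# members (a vdev normally stores four copies of its label)
sudo zdb -l /dev/disk3s1
sudo zdb -l /dev/disk4s1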

All of this is on a Mac Mini running Mac OS X 10.5.3, BTW. I own  
Parallels if using an OpenSolaris build would be of use.


So, is the data recoverable?

Thanks!
Lee

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mac OS X Leopard to use ZFS

2007-06-07 Thread Lee Fyock

Thanks, Chad.

There's some debate in the Mac community about what the phrase "the file system in Mac OS X" means. Does that mean that machines that
ship with Leopard will run on ZFS discs by default? Will ZFS be the  
default file system when initializing a new drive?


IMHO, that seems unlikely, given that zfs boot is still an unreleased  
feature. I'd be happy to be proven wrong, though.


If there's anyone in the know, please feel free to speak up. :-)

Thanks,
Lee

On Jun 7, 2007, at 2:58 PM, Chad Leigh -- Shire.Net LLC wrote:



On Jun 7, 2007, at 12:50 PM, Rick Mann wrote:


From Macintouch (http://macintouch.com/#other.2007.06.07):


---
On stage Wednesday in Washington D.C., Sun Microsystems Inc. CEO  
Jonathan Schwartz revealed that his company's open-source ZFS file  
system will replace Apple's long-used HFS+ in Mac OS X 10.5,  
a.k.a. Leopard, when the new operating system ships this fall.  
"This week, you'll see that Apple is announcing at their Worldwide Developers Conference that ZFS has become the file system in Mac OS X," said Schwartz.
  ZFS (Zettabyte File System), designed by Sun for its Solaris OS  
but licensed as open-source, is a 128-bit file storage system that  
features, among other things, pooled storage, which means that  
users simply plug in additional drives to add space, without  
worrying about such traditional storage parameters as volumes or  
partitions.
  "[ZFS] eliminates volume management, it has extremely high performance ... It permits the failure of disk drives," crowed Schwartz during a presentation focused on Sun's new blade servers.

---



We'll see next week what Steve announces at the WWDC keynote (which  
is not under NDA like the rest of the conference).  I'll be there  
and try to remember to post what is said (though it will probably  
be in a billion other places as well).


Chad


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Motley group of discs?

2007-05-07 Thread Lee Fyock

Cindy,

Thanks so much for the response -- this is the first one that I  
consider an actual answer. :-)


I'm still unclear on exactly what I end up with. I apologize in  
advance for my ignorance -- the ZFS admin guide assumes knowledge  
that I don't yet have.


I assume that disk4 is a hot spare, so if one of the other disks dies,
it'll kick into active use. Is data immediately replicated from the  
other surviving disks to disk4?


What usable capacity do I end up with? 160 GB (the smallest disk) *  
3? Or less, because raidz has parity overhead? Or more, because that  
overhead can be stored on the larger disks?


If I didn't need a hot spare, but instead could live with running out  
and buying a new drive to add on as soon as one fails, what  
configuration would I use then?


Thanks!
Lee

On May 7, 2007, at 2:44 PM, [EMAIL PROTECTED] wrote:


Hi Lee,

You can decide whether you want to use ZFS for a root file system now.
You can find this info here:

http://opensolaris.org/os/community/zfs/boot/

Consider this setup for your other disks, which are:

250, 200 and 160 GB drives, and an external USB 2.0 600 GB drive

250GB = disk1
200GB = disk2
160GB = disk3
600GB = disk4 (spare)

I include a spare in this setup because you want to be protected from a disk failure. Since the replacement disk must be equal to or larger than the disk to replace, I think this is the best (safest) solution.

zpool create pool raidz disk1 disk2 disk3 spare disk4

This setup provides less capacity but better safety, which is probably
important for older disks. Because of the spare disk requirement (must
be equal to or larger in size), I don't see a better arrangement. I
hope someone else can provide one.
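Two details behind this suggestion are worth spelling out, with the usual caveats. A single-parity raidz truncates every member to the smallest disk and spends one disk's worth of space on parity, so the 250/200/160 GB set yields roughly (3 - 1) x 160 GB = about 320 GB usable. And the spare is not written to until a member actually faults; at that point it gets resilvered in, either automatically (where the fault-management integration exists) or by hand, for example:

zpool status pool
zpool replace pool disk2 disk4   # example: swap the disk4 spare in for a faulted disk2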

Your questions remind me that I need to provide additional information about the current ZFS spare feature...

Thanks,

Cindy



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Motley group of discs?

2007-05-04 Thread Lee Fyock

Hi--

I'm looking forward to using zfs on my Mac at some point. My desktop  
server (a dual-1.25GHz G4) has a motley collection of discs that has  
accreted over the years: internal EIDE 320GB (boot drive), internal  
250, 200 and 160 GB drives, and an external USB 2.0 600 GB drive.


My guess is that I won't be able to use zfs on the boot 320 GB drive,  
at least this year. I'd like to favor available space over  
performance, and be able to swap out a failed drive without losing  
any data.


So, what's the best zfs configuration in this situation? The FAQs  
I've read are usually related to matched (in size) drives.


Thanks!
Lee

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Motley group of discs?

2007-05-04 Thread Lee Fyock

I didn't mean to kick up a fuss.

I'm reasonably zfs-savvy in that I've been reading about it for a  
year or more. I'm a Mac developer and general geek; I'm excited about  
zfs because it's new and cool.


At some point I'll replace my old desktop machine with something new  
and better -- probably when Unreal Tournament 2007 arrives,  
necessitating a faster processor and better graphics card. :-)


In the meantime, I'd like to hang out with the system and drives I
have. As mike said, my understanding is that zfs would provide  
error correction until a disc fails, if the setup is properly done.  
That's the setup for which I'm requesting a recommendation.


I won't even be able to use zfs until Leopard arrives in October, but  
I want to bone up so I'll be ready when it does.


Money isn't an issue here, but neither is creating an optimal zfs  
system. I'm curious what the right zfs configuration is for the  
system I have.


Thanks!
Lee

On May 4, 2007, at 7:41 PM, Al Hopper wrote:


On Fri, 4 May 2007, mike wrote:


Isn't the benefit of ZFS that it will allow you to use even the most unreliable disks and be able to inform you when they are attempting to corrupt your data?


Yes - I won't argue that ZFS can be applied exactly as you state  
above.

However, ZFS is no remedy for bad practices such as:

- not proactively replacing mechanical components *before* they fail
- not having maintenance policies in place

To me it sounds like he is a SOHO user; he may not have a lot of funds to go out and swap hardware on a whim like a company might.


You may be right - but you're simply guessing.  The original system probably cost around $3k (?? I could be wrong).  So what I'm suggesting, that he spend ~ $300, represents ~ 10% of the original system cost.

Since the OP asked for advice, I've given him the best advice I can come up with.  I've also encountered many users who don't keep up to date with current computer hardware capabilities and pricing, and who may be completely unaware that you can purchase two 500 GB disk drives, with a 5 year warranty, for around $300.  And possibly less if you check out Fry's weekly bargain disk drive offers.

Now consider the total cost of ownership solution I recommended: 500 gigabytes of storage, coupled with ZFS, which translates into $60/year for 5 years of error-free storage capability.  Can life get any better than this! :)

Now contrast my recommendation with what you propose - re-targeting a bunch of older disk drives, which incorporate older, less reliable technology, with a view to saving money.  How much is your time worth? How many hours will it take you to recover from a failure of one of these older drives, and the accompanying increased risk of data loss?

If the ZFS-savvy OP comes back to this list and says "Al's solution is too expensive", I'm perfectly willing to rethink my recommendation.  For now, I believe it to be the best recommendation I can devise.


ZFS in my opinion is well-suited for those without access to
continuously upgraded hardware and expensive fault-tolerant
hardware-based solutions. It is ideal for home installations where
people think their data is safe until the disk completely dies. I
don't know how many non-savvy people I have helped over the years who had no data protection, and ZFS could offer them at least some
fault-tolerance and protection against corruption, and could help
notify them when it is time to shut off their computer and call
someone to come swap out their disk and move their data to a fresh
drive before it's completely failed...


Agreed.

One piece of the puzzle that's missing right now, IMHO, is a reliable, two-port, low-cost PCI SATA disk controller.  A solid/de-bugged 3124 driver would go a long way to ZFS-enabling a bunch of cost-constrained ZFS users.

And, while I'm working this hardware wish list, please ... a PCI-Express-based version of the SuperMicro AOC-SAT2-MV8 8-port Marvell-based disk controller card.  Sun ... are you listening?



- mike


On 5/4/07, Al Hopper [EMAIL PROTECTED] wrote:

On Fri, 4 May 2007, Lee Fyock wrote:


Hi--

I'm looking forward to using zfs on my Mac at some point. My desktop server (a dual-1.25GHz G4) has a motley collection of discs that has accreted over the years: internal EIDE 320GB (boot drive), internal 250, 200 and 160 GB drives, and an external USB 2.0 600 GB drive.

My guess is that I won't be able to use zfs on the boot 320 GB drive, at least this year. I'd like to favor available space over performance, and be able to swap out a failed drive without losing any data.

So, what's the best zfs configuration in this situation? The FAQs
I've read are usually related to matched (in size) drives.


Seriously, the best solution here is to discard any drive that is 3 years (or more) old[1] and purchase two new SATA 500 GB drives.  Set up the new drives as a zfs mirror.  Being a believer in diversity, I'd recommend