[zfs-discuss] What is the correct procedure to replace a non failed disk for another?

2008-09-03 Thread F. Wessels
Hi, can anybody describe the correct procedure to replace a disk (in working order) with another disk without degrading my pool? For a mirror I thought of adding the spare, giving a three-device mirror. Let it resilver. Finally, detach the disk I want replaced. But what would be the correct
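A minimal sketch of that attach/resilver/detach sequence, assuming a pool named tank with outgoing disk c1t1d0 and new disk c1t2d0 (all names hypothetical):

  # zpool attach tank c1t1d0 c1t2d0
  # zpool status tank
  # zpool detach tank c1t1d0

The attach adds a third side to the mirror and starts a resilver; watch zpool status until the resilver completes before detaching the old disk, so the pool never drops below its original redundancy.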

[zfs-discuss] What is the correct procedure to replace a non failed disk for another?

2008-09-04 Thread F. Wessels
Thanks for the replies. I guess I misunderstood the manual: zpool replace [-f] pool old_device [new_device] Replaces old_device with new_device. This is equivalent to attaching new_device, waiting for it to resilver, and then detaching old_device. The size of new_device must be greater
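As the quoted manual text says, zpool replace automates that same attach/resilver/detach cycle in one command; a sketch with hypothetical names:

  # zpool replace tank c1t1d0 c1t2d0

The old device stays in service until the new one has fully resilvered, so redundancy is preserved throughout.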

Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-20 Thread F. Wessels
Also in reply to the previous email by Will. Can anyone shed more light on the combination of an lsi sas hba, the LSISASx36 expander chip (or its relatives) and sata disks? I'm investigating a migration from discrete channels (like in the thumper) to a multiplexed solution via a sas expander. I'm

Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-21 Thread F. Wessels
So to wrap it up. According to Will, a supermicro chassis using a single lsi expander connected to sata disks can utilize the wide sas port between the hba and the chassis. (Like the J4500 Richard mentioned. However much I like these systems (thumper etc.), they're way out of my budget.) Will did see more

Re: [zfs-discuss] SSD's and ZFS...

2009-07-23 Thread F. Wessels
Thanks for posting this solution. But I would like to point out that bug 6574286, "removing a slog doesn't work", still isn't resolved. A solution is under way, according to George Wilson. But in the meantime, IF something happens you might be in a lot of trouble. Even without some unfortunate
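For context, the operation that bug tracked, with hypothetical names; at the time of this post the command below was simply rejected for log devices:

  # zpool remove tank c2t0d0

(Log device removal was only delivered in a later build, so a pool whose slog died or had to be retired could not shed the device.)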

Re: [zfs-discuss] Motherboard for home zfs/solaris file server

2009-07-23 Thread F. Wessels
Hi, I'm using asus m3a78 boards (with the sb700) for opensolaris and m2a* boards (with the sb600) for linux, some of them with 4*1GB and others with 4*2GB ECC memory. ECC faults will be detected and reported. I tested it with a small tungsten light. By moving the light source slowly towards the

Re: [zfs-discuss] SSD's and ZFS...

2009-07-23 Thread F. Wessels
I didn't mean using a slog for the root pool. I meant using the slog for a data pool, where the data pool consists of (rotating) hard disks complemented with an ssd-based slog. But instead of a dedicated ssd for the slog I want the root pool to share the ssd with the slog. Both can be mirrored to
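A sketch of that layout, assuming two SSDs c4t0d0 and c4t1d0 (hypothetical names) sliced so that s0 holds the mirrored rpool and s3 is left over for the slog of a data pool tank:

  # zpool add tank log mirror c4t0d0s3 c4t1d0s3

Both the root pool and the slog then survive the loss of either ssd.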

[zfs-discuss] resolve zfs properties default to actual value

2009-10-27 Thread F. Wessels
Hi, how can I find out what the actual value is when the default applies to a zfs property?

  # zfs get checksum mpool
  NAME   PROPERTY  VALUE  SOURCE
  mpool  checksum  on     default

(In this particular case I know what the value is, either fletcher2 or fletcher4 depending on the build.) But how can one find

Re: [zfs-discuss] resolve zfs properties default to actual value

2009-10-27 Thread F. Wessels
Thank you for the reply. I must admit that upon closer inspection a lot of properties indeed do present the actual value. For the checksum property I used zdb - | grep fletcher to determine whether fletcher2 or fletcher4 was used for checksumming the filesystem. Using the OS build
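One way to make that zdb inspection concrete, with an assumed dataset name and verbosity flags (the exact invocation in the post is truncated): dump a dataset's block pointers and grep for the checksum name, e.g.

  # zdb -ddddd mpool/export | grep fletcher

since block-pointer dumps include the name of the checksum algorithm in use.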

Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2010-01-26 Thread F. Wessels
After following this topic for the last few days, to which nearly everybody has contributed, I think it's time to add a new factor: vibration. First some proof of how sensitive modern drives are: http://blogs.sun.com/brendan/entry/unusual_disk_latency Most enterprise drives also contain circuitry to handle

Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2010-01-26 Thread F. Wessels
@Bob, yes you're completely right. This kind of engineering is what you get when buying a 2540 for example. All parts are nicely matched. When you build your own whitebox the parts might not match. But that wasn't my point. Vibration, in the drive and excited by the drive, increases with the

[zfs-discuss] sharing a ssd between rpool and l2arc

2010-03-29 Thread F. Wessels
Hi, as Richard Elling wrote earlier: For more background, low-cost SSDs intended for the boot market are perfect candidates. Take an X25-V @ 40GB and use 15-20 GB for root and the rest for an L2ARC. For small form factor machines or machines with a max capacity of 8GB of RAM (a typical home system)

Re: [zfs-discuss] sharing a ssd between rpool and l2arc

2010-03-30 Thread F. Wessels
Thanks for the reply. I didn't get very much further. Yes, ZFS loves raw devices. If I had two devices I wouldn't be in this mess. I would simply install opensolaris on the first disk and add the second ssd to the data pool with a zpool add mpool cache cxtydz. Notice that no slices or

Re: [zfs-discuss] sharing a ssd between rpool and l2arc

2010-03-30 Thread F. Wessels
Thank you Erik for the reply. I misunderstood Dan's suggestion about the zvol in the first place. Now you make the same suggestion as well. Doesn't zfs prefer raw devices? When following this route, the zvol used as cache device for tank makes use of the ARC of rpool, which doesn't seem right. Or is

Re: [zfs-discuss] sharing a ssd between rpool and l2arc

2010-03-30 Thread F. Wessels
Thank you Darren. So no zvols as L2ARC cache device. That leaves partitions and slices. When I tried to add a second partition (the first contained slices with the root pool) as cache device, zpool refused; it reported that the device cXtYdZp2 (note p2) wasn't supported. Perhaps I did

Re: [zfs-discuss] sharing a ssd between rpool and l2arc

2010-03-30 Thread F. Wessels
Hi all, yes it works with the partitions. I think that I made a typo during the initial testing of adding a partition as cache, probably swapped the 0 for an o. Tested with a b134 gui and text installer on the x86 platform. So here it goes: Install opensolaris into a partition and leave some
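The resulting recipe, with hypothetical device names: install opensolaris into the first fdisk partition of the ssd (rpool ends up there), create a second primary partition out of the remaining space, and add it as a cache device to the data pool:

  # zpool add mpool cache c0t0d0p2

Note that p2 refers to the second fdisk partition on x86, not a Solaris slice.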

[zfs-discuss] backup pool

2010-04-09 Thread F. Wessels
Hi all, I want to backup a pool called mpool. I want to do this by doing a zfs send of a mpool snapshot and receive into a different pool called bpool. All this on the same machine. I'm sharing various filesystems via zfs sharenfs and sharesmb. Sending and receiving of the entire pool works as
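A minimal sketch of such a same-machine backup, assuming a recursive snapshot named backup1 (hypothetical):

  # zfs snapshot -r mpool@backup1
  # zfs send -R mpool@backup1 | zfs receive -F -d bpool

The -R flag replicates descendant filesystems and their properties, so share-related properties such as sharenfs and sharesmb travel with the stream; -d on the receive side substitutes bpool for the source pool name in the target path.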

Re: [zfs-discuss] OCZ Devena line of enterprise SSD

2010-06-17 Thread F. Wessels
OCZ not only introduced these enterprise SSDs but also "Maximum Performance/Enterprise Solid State Drives" a couple of days ago, including an SLC version: the Vertex 2 EX

Re: [zfs-discuss] OCZ Devena line of enterprise SSD

2010-06-17 Thread F. Wessels
I just looked it up again and as far as I can see the supercap is present in the MLC version as well as the SLC: http://www.ocztechnology.com/products/solid-state-drives/2-5--sata-ii/maximum-performance-enterprise-solid-state-drives/ocz-vertex-2-pro-series-sata-ii-2-5--ssd-.html From the page:

Re: [zfs-discuss] ZFS and VMware

2010-08-13 Thread F. Wessels
I fully agree with your post. NFS is much simpler to administer. Although I don't have any experience with the DDRdrive X1, I've read and heard from various people actually using them that it's the best available SLOG device. Before everybody starts yelling ZEUS or LOGZILLA: was anybody

Re: [zfs-discuss] ZFS and VMware

2010-08-13 Thread F. Wessels
Yes, the sandforce based ssd's are also interesting. I think both, the 1500 for sure, could be fitted with the necessary supercap to prevent data loss in case of unexpected power loss. And the 1500 based models will be available with a SAS interface, needed for clustering. Something the DDRdrive

Re: [zfs-discuss] ZFS and VMware

2010-08-13 Thread F. Wessels
I wasn't planning to buy any SSD as a ZIL. I merely acknowledged that a sandforce with supercap MIGHT be a solution. At least the supercap should take care of the data loss in case of a power failure. But they are still in the consumer realm and have not been picked up by the enterprise (yet) for

Re: [zfs-discuss] (preview) Whitepaper - ZFS Pools Explained - feedback welcome

2010-08-25 Thread F. Wessels
Although it's a bit much Nexenta oriented, command-wise, it's a nice introduction. I did find one thing, page 28 about the zil. There's no zil device; the zil can be written to an optional slog device. And the last line of the first paragraph, "If you can, use memory based SSD devices." At least change
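To make the page-28 point concrete: the ZIL always exists inside the pool; what you can add is a separate log (slog) device that the ZIL is then written to, e.g. (hypothetical device name):

  # zpool add tank log c3t0d0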