[zfs-discuss] perl modules to access zfs commands?

2008-01-31 Thread Jan Dreyer
Hi, this may be a Perl question more than a ZFS question, but anyway: are there any Perl modules around for accessing the ZFS administrative commands? I wish to write some scripts to do some scheduled jobs on our ZFS systems, preferably in Perl, but I found no Perl modules for ZFS. I
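
There was no widely used Perl module for ZFS at the time; the usual approach is to shell out to the zfs command and parse its scripting-friendly output. A minimal sketch, assuming a pool named tank (the pool name is hypothetical):

    # -H drops headers and emits tab-separated fields, intended for scripts
    # (easy to split from Perl, awk, or plain sh)
    zfs list -H -o name,used,available -r tank |
    while read name used avail; do
        printf '%s: %s used, %s available\n' "$name" "$used" "$avail"
    done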

[zfs-discuss] x4500 x2

2008-01-31 Thread Jorgen Lundman
If we were to get two x4500s, with the idea of keeping one as a passive standby (for serious hardware failure), are there any clever solutions for doing so? We cannot use ZFS itself, but rather zpool volumes with UFS on top. I assume there is no zpool send/recv (although that would be pretty
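
For what it's worth, send/receive exists at the zfs level rather than the zpool level, and it works on snapshots of zvols too, so it may still apply to the UFS-on-zvol setup. A hedged sketch, assuming a zvol tank/vol1 and a standby host named thor2 (both names hypothetical):

    # snapshot the zvol and replicate it to the standby x4500
    zfs snapshot tank/vol1@sync1
    zfs send tank/vol1@sync1 | ssh thor2 zfs recv -F tank/vol1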

Re: [zfs-discuss] x4500 x2

2008-01-31 Thread Mertol Ozyoney
Hi; why don't you buy one X4500 and one X4500 motherboard as a spare, along with a few cold standby drives? Best regards, Mertol Mertol Ozyoney Storage Practice - Sales Manager Sun Microsystems, TR Istanbul TR Phone +902123352200 Mobile +905339310752 Fax +90212335 Email [EMAIL PROTECTED]

Re: [zfs-discuss] x4500 x2

2008-01-31 Thread Shawn Ferry
On Jan 31, 2008, at 6:13 AM, Jorgen Lundman wrote: If we were to get two x4500s, with the idea of keeping one as a passive standby (for serious hardware failure), are there any clever solutions for doing so? You should take a look at AVS; there are some ZFS and AVS demos online

Re: [zfs-discuss] x4500 x2

2008-01-31 Thread Victor Latushkin
Jorgen Lundman wrote: If we were to get two x4500s, with the idea of keeping one as a passive standby (for serious hardware failure), are there any clever solutions for doing so? We cannot use ZFS itself, but rather zpool volumes with UFS on top. I assume there is no zpool send/recv

Re: [zfs-discuss] Hardware RAID vs. ZFS RAID

2008-01-31 Thread Kyle McDonald
Gregory Perry wrote: Hello, I have a Dell 2950 with a Perc 5/i, two 300GB 15K SAS drives in a RAID0 array. I am considering going to ZFS and I would like to get some feedback about which situation would yield the highest performance: using the Perc 5/i to provide a hardware RAID0 that
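
One way to make the comparison concrete: if the controller can export the disks individually (JBOD mode, or single-disk RAID0 volumes where JBOD isn't offered), ZFS can do the striping or mirroring itself and keep its end-to-end checksums meaningful. A sketch, with hypothetical device names:

    # ZFS-managed stripe of two whole disks (maximum speed, no redundancy)
    zpool create tank c0t0d0 c0t1d0
    # or a ZFS mirror, trading half the capacity for self-healing
    # zpool create tank mirror c0t0d0 c0t1d0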

[zfs-discuss] How to do zfs send to remote tape drive without intermediate file?

2008-01-31 Thread Sergey
Hi list, I'd like to be able to store zfs filesystems on a tape drive that is attached to another Solaris U4 x86 server. The idea is to use zfs send together with tar in order to get the list of the filesystems' snapshots stored on a tape and be able to perform a restore operation later. It's
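
One hedged way to avoid an intermediate file is to pipe the stream over ssh straight into dd on the tape host; the snapshot name and hostname below are hypothetical, and /dev/rmt/0n is the conventional Solaris no-rewind tape device:

    # stream a snapshot to the remote tape with no temp file on either side
    zfs snapshot tank/home@backup
    zfs send tank/home@backup | ssh tapehost dd of=/dev/rmt/0n bs=1048576

Restoring reverses the pipe: ssh tapehost dd if=/dev/rmt/0n bs=1048576 | zfs recv tank/home.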

Re: [zfs-discuss] How to do zfs send to remote tape drive without intermediate file?

2008-01-31 Thread J.P. King
Hi list, I'd like to be able to store zfs filesystems on a tape drive that is attached to another Solaris U4 x86 server. The idea is to use zfs send together with tar in order to get the list of the filesystems' snapshots stored on a tape and be able to perform a restore operation later.

Re: [zfs-discuss] Hardware RAID vs. ZFS RAID

2008-01-31 Thread Vincent Fox
I package up 5 or 6 disks into a RAID-5 LUN on our Sun 3510 and 2540 arrays. Then I use ZFS to RAID-10 these volumes. Safety first! Quite frankly I've had ENOUGH of rebuilding trashed filesystems. I am tired of chasing performance like it's the Holy Grail and shoving other considerations
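
For reference, the layering described looks roughly like this, with each device below being one RAID-5 LUN exported by an array (device names hypothetical):

    # ZFS mirrors pairs of array LUNs ("RAID-10" across arrays), so ZFS
    # checksums can detect and repair corruption the arrays miss
    zpool create tank mirror c4t0d0 c5t0d0 mirror c4t1d0 c5t1d0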

Re: [zfs-discuss] How to do zfs send to remote tape drive without intermediate file?

2008-01-31 Thread Richard Elling
Sergey wrote: Hi list, I'd like to be able to store zfs filesystems on a tape drive that is attached to another Solaris U4 x86 server. The idea is to use zfs send together with tar in order to get the list of the filesystems' snapshots stored on a tape and be able to perform a restore

Re: [zfs-discuss] Sparc zfs root/boot status ?

2008-01-31 Thread Lori Alt
No, it is scheduled for U6. Lori Jesus Cea wrote: Lori Alt wrote: | zfs boot on sparc will not be putback on its own. | It will be putback with the rest of zfs boot support, | sometime around build 86. May I ask if ZFS boot will be available in

Re: [zfs-discuss] NFS performance on ZFS vs UFS

2008-01-31 Thread Jesus Cea
Tomas Ögren wrote: | To get similar (lower) consistency guarantees, try disabling ZIL.. | google://zil_disable .. This should up the speed, but might cause disk | corruption if the server crashes while a client is writing data.. (just | like with UFS)
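
For reference, the tunable being discussed is set in /etc/system and takes effect at the next boot. It trades away synchronous-write guarantees, so an NFS client can lose writes the server already acknowledged; syntax as I recall it for Solaris 10/Nevada of that era:

    * /etc/system -- disable the ZFS intent log (unsafe for NFS servers)
    set zfs:zil_disable = 1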

Re: [zfs-discuss] How to do zfs send to remote tape drive without intermediate file?

2008-01-31 Thread Richard Elling
Richard Elling wrote: Sergey wrote: Hi list, I'd like to be able to store zfs filesystems on a tape drive that is attached to another Solaris U4 x86 server. The idea is to use zfs send together with tar in order to get the list of the filesystems' snapshots stored on a tape and be

[zfs-discuss] mounting a copy of a zfs pool/file system while original is still active

2008-01-31 Thread Hector De Jesus
Hello Sun gurus, I do not know if this is supported. I have created a zpool consisting of SAN resources and created a ZFS file system. Using third-party software I have taken snapshots of all LUNs in the ZFS pool. My question is: in a recovery situation, is there a way for me to mount the

Re: [zfs-discuss] Hardware RAID vs. ZFS RAID

2008-01-31 Thread Kyle McDonald
Vincent Fox wrote: When Sun starts selling good SAS JBOD boxes equipped with appropriate redundancies and a flash-drive or 2 for the ZIL I will definitely go that route. For now I have a bunch of existing Sun HW RAID arrays so I make use of them mainly to make sure I can package LUNs and

Re: [zfs-discuss] sharenfs with over 10000 file systems

2008-01-31 Thread Jesus Cea
Matthew Ahrens wrote: | I believe this is because sharemgr does an O(number of shares) operation | whenever you try to share/unshare anything (retrieving the list of shares | from the kernel to make sure that it isn't/is already shared). I couldn't |

Re: [zfs-discuss] mounting a copy of a zfs pool/file system while original is still active

2008-01-31 Thread Tim
On 1/31/08, Hector De Jesus [EMAIL PROTECTED] wrote: Hello Sun gurus, I do not know if this is supported. I have created a zpool consisting of SAN resources and created a ZFS file system. Using third-party software I have taken snapshots of all LUNs in the ZFS pool. My question is in a

Re: [zfs-discuss] I.O error: zpool metadata corrupted after powercut

2008-01-31 Thread kristof
I don't have an exact copy of the error, but the following message was reported by zpool status: Pool degraded. Metadata corrupted. Please restore pool from backup. All devices were online, but the pool could not be imported. During import we got an I/O error. Krdoor

Re: [zfs-discuss] Hardware RAID vs. ZFS RAID

2008-01-31 Thread Vincent Fox
So the point is, a JBOD with a flash drive in one (or two, to mirror the ZIL) of the slots would be a lot SIMPLER. We've all spent the last decade or two offloading functions into specialized hardware, which has turned into these massive, unnecessarily complex things. I don't want to go to a
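
The separate intent log device being asked for did land in Nevada (zpool version 7, around build 68) as the log vdev type; a sketch, assuming an existing pool tank and a flash device c2t0d0 (hypothetical):

    # move the ZIL onto a dedicated device; mirror it if its loss matters
    zpool add tank log c2t0d0
    # mirrored variant:
    # zpool add tank log mirror c2t0d0 c2t1d0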

[zfs-discuss] simulating directio on zfs?

2008-01-31 Thread Andrew Robb
The big problem that I have with non-directio is that buffering delays program execution. When reading/writing files that are many times larger than RAM without directio, it is very apparent that system response drops through the floor; it can take several minutes for an ssh login to prompt for

Re: [zfs-discuss] I.O error: zpool metadata corrupted after powercut

2008-01-31 Thread Richard Elling
kristof wrote: I don't have an exact copy of the error, but the following message was reported by zpool status: Pool degraded. Metadata corrupted. Please restore pool from backup. All devices were online, but the pool could not be imported. During import we got an I/O error. zpool would

Re: [zfs-discuss] Hardware RAID vs. ZFS RAID

2008-01-31 Thread Jesus Cea
Vincent Fox wrote: | So the point is, a JBOD with a flash drive in one (or two to mirror the ZIL) of the slots would be a lot SIMPLER. I guess a USB pen drive would be slower than a hard disk. Bad performance for the ZIL. -- Jesus Cea Avion

Re: [zfs-discuss] Hardware RAID vs. ZFS RAID

2008-01-31 Thread Kyle McDonald
Vincent Fox wrote: So the point is, a JBOD with a flash drive in one (or two, to mirror the ZIL) of the slots would be a lot SIMPLER. We've all spent the last decade or two offloading functions into specialized hardware, which has turned into these massive, unnecessarily complex things. I

Re: [zfs-discuss] mounting a copy of a zfs pool/file system while original is still active

2008-01-31 Thread Dave Lowenstein
Nope, doesn't work. Try presenting one of those LUN snapshots to your host, run cfgadm -al, then run zpool import. # zpool import no pools available to import It would make my life so much simpler if you could do something like this: zpool import --import-as yourpool.backup yourpool
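
Two stock zpool import options worth trying, hedged because cloned LUNs carrying the live pool's identity may defeat both: -d points the scan at a specific device directory, and a second pool argument renames the pool on import:

    # scan only the directory holding the snapshot LUNs (path hypothetical)
    zpool import -d /dev/dsk
    # if a pool is found, import it under a new name
    zpool import -f yourpool yourpool.backup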

Re: [zfs-discuss] Hardware RAID vs. ZFS RAID

2008-01-31 Thread Torrey McMahon
Kyle McDonald wrote: Vincent Fox wrote: So the point is, a JBOD with a flash drive in one (or two, to mirror the ZIL) of the slots would be a lot SIMPLER. We've all spent the last decade or two offloading functions into specialized hardware, which has turned into these massive

Re: [zfs-discuss] Sparc zfs root/boot status ?

2008-01-31 Thread Vincent Fox
zfs boot on sparc will not be putback on its own. It will be putback with the rest of zfs boot support, sometime around build 86. Does this still seem likely to occur, or will it be pushed back further? I see that build 81 is out today, which means we are not far from seeing ZFS boot on

Re: [zfs-discuss] Hardware RAID vs. ZFS RAID

2008-01-31 Thread Marion Hakanson
[EMAIL PROTECTED] said: You still need interfaces, of some kind, to manage the device. Temp sensors? Drive fru information? All that information has to go out, and some in, over an interface of some sort. Looks like the Sun 2530 array recently added in-band management over the SAS (data)

Re: [zfs-discuss] Sparc zfs root/boot status ?

2008-01-31 Thread Lori Alt
Vincent Fox wrote: zfs boot on sparc will not be putback on its own. It will be putback with the rest of zfs boot support, sometime around build 86. Does this still seem likely to occur, or will it be pushed back further? I see that build 81 is out today, which means we are not far

Re: [zfs-discuss] Sparc zfs root/boot status ?

2008-01-31 Thread Brian Hechinger
On Thu, Jan 31, 2008 at 03:15:30PM -0700, Lori Alt wrote: Does this still seem likely to occur, or will it be pushed back further? I see that build 81 is out today which means we are not far from seeing ZFS boot on Sparc in Nevada? The pressure to get this into build 86 is considerable

Re: [zfs-discuss] Sparc zfs root/boot status ?

2008-01-31 Thread Vincent Fox
Awesome work you and your team are doing. Thanks Lori!

Re: [zfs-discuss] ZIL controls in Solaris 10 U4?

2008-01-31 Thread Guanghui Wang
I don't know when U5 or U6 will be coming, so I just set zfs_nocacheflush=1 in /etc/system, and performance speeds up like with zil_disable=1, and that's safer for the filesystem. The separate ZIL log device feature is not in U4; NFS performance on ZFS will be too slow when you do not set
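
For reference, the line as it would appear in /etc/system; it stops ZFS from issuing cache-flush commands to the storage, which is only safe when the array's write cache is nonvolatile or battery-backed:

    * /etc/system -- suppress ZFS cache flushes (needs NV/battery-backed cache)
    set zfs:zfs_nocacheflush = 1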

Re: [zfs-discuss] ZIL controls in Solaris 10 U4?

2008-01-31 Thread Jonathan Loran
Guanghui Wang wrote: I don't know when U5 or U6 will be coming, so I just set zfs_nocacheflush=1 in /etc/system, and performance speeds up like with zil_disable=1, and that's safer for the filesystem. The separate ZIL log device feature is not in U4; NFS performance on ZFS will be too slow