Re: [zfs-discuss] Current status of a ZFS root

2006-10-30 Thread Ceri Davies
On Sun, Oct 29, 2006 at 12:01:45PM -0800, Richard Elling - PAE wrote: Chris Adams wrote: We're looking at replacing a current Linux server with a T1000 + a fiber channel enclosure to take advantage of ZFS. Unfortunately, the T1000 only has a single drive bay (!) which makes it impossible to

Re: [zfs-discuss] Re: copying a large file..

2006-10-30 Thread Jeremy Teo
This is the same problem described in 6343653: want to quickly copy a file from a snapshot. On 10/30/06, eric kustarz [EMAIL PROTECTED] wrote: Pavan Reddy wrote: This is the time it took to move the file: The machine is an Intel P4 - 512MB RAM. bash-3.00# time mv ../share/pav.tar . real
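
For context, a minimal sketch of the distinction being discussed, using hypothetical dataset and path names (tank/share, a snapshot called today); none of these come from the original post:

zfs snapshot tank/share@today

# Copying out of the read-only .zfs/snapshot directory is still a full
# data copy today, which is what CR 6343653 asks to speed up.
time cp /tank/share/.zfs/snapshot/today/pav.tar /tank/pav-from-snap.tar

# Within one filesystem, mv is a rename and returns almost instantly;
# across filesystems it degrades to a copy followed by an unlink.
time mv /tank/share/pav.tar /tank/pav.tar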

[zfs-discuss] Re: Very high system loads with ZFS

2006-10-30 Thread Peter Guthrie
Thanks for the reply. I heard separately that it's fixed in snv_52, but I don't know whether it'll be available as a ZFS patch or in s10u3. Pete

[zfs-discuss] recover zfs data from a crashed system?

2006-10-30 Thread senthil ramanujam
Hi, I am trying to experiment with a scenario for which we would like to find a possible solution. Has anyone out there experienced or analyzed the scenario given below? Scenario: The system is attached to an array. The array type really doesn't matter, i.e., it can be a JBOD or a RAID

Re: [zfs-discuss] recover zfs data from a crashed system?

2006-10-30 Thread Robert Milkowski
Hello senthil, Monday, October 30, 2006, 1:12:28 PM, you wrote: sr> Hi, I am trying to experiment with a scenario for which we would like to find a possible solution. Has anyone out there experienced or analyzed the scenario given below? Scenario: The system is attached to an

Re: [zfs-discuss] recover zfs data from a crashed system?

2006-10-30 Thread Michael Schuster
senthil ramanujam wrote: Hi, I am trying to experiment with a scenario for which we would like to find a possible solution. Has anyone out there experienced or analyzed the scenario given below? Scenario: The system is attached to an array. The array type really doesn't matter, i.e., it

Re: [zfs-discuss] recover zfs data from a crashed system?

2006-10-30 Thread senthil ramanujam
Thanks Robert, Michael. I guess that has answered my question. I now need to do a couple of experiments and get this under control. I will keep you posted if I see something strange, which I hope not to. ;o) senthil On 10/30/06, Michael Schuster [EMAIL PROTECTED] wrote: senthil
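
For the archive, the recovery path Robert and Michael pointed at amounts to importing the pool on a surviving host that can see the same LUNs; a hedged sketch with a hypothetical pool name, datapool:

zpool import                 # scan attached devices for importable pools
zpool import -f datapool     # -f overrides the "in use by another system"
                             # guard, needed when the old host died without
                             # exporting the pool
zpool status -v datapool     # check device health, then verify the data
zpool scrub datapool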

Re: [zfs-discuss] Current status of a ZFS root

2006-10-30 Thread Richard Elling - PAE
[Richard removes his Sun hat...] Ceri Davies wrote: On Sun, Oct 29, 2006 at 12:01:45PM -0800, Richard Elling - PAE wrote: Chris Adams wrote: We're looking at replacing a current Linux server with a T1000 + a fiber channel enclosure to take advantage of ZFS. Unfortunately, the T1000 only has

[zfs-discuss] Re: Re: panic during recv

2006-10-30 Thread Gary Mitchell
I don't have the crashes anymore! What I did was explicitly set mountpoint=none on the receiving pool, so that the filesystem is never mounted on the receiving side. Now, this shouldn't make a difference. From what I saw before - and if I've understood the documentation - when you do have the
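
A sketch of that workaround, with hypothetical pool, dataset and host names (sendpool/fs, recvpool/fs, recvhost):

# Initial full send creates the dataset on the receiving pool.
zfs send sendpool/fs@snap1 | ssh recvhost zfs receive recvpool/fs

# Keep it unmounted on the receiving side so nothing touches the
# filesystem between receives - the workaround described above.
ssh recvhost zfs set mountpoint=none recvpool/fs

# Later incrementals then land in a dataset that is never mounted.
zfs send -i snap1 sendpool/fs@snap2 | ssh recvhost zfs receive recvpool/fs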

[zfs-discuss] Re: Recommended Minimum Hardware for ZFS Fileserver?

2006-10-30 Thread Wes Williams
Thanks, gents, for your replies. I've used a very large-config W2100z with ZFS for a while, but I didn't know how low you can go for ZFS to shine, though a 64-bit CPU seems to be the minimum performance threshold. Now that Sun's store is [sort of] working again, I can see some X2100's with the

[zfs-discuss] Re: [osol-discuss] Cloning a disk w/ ZFS in it

2006-10-30 Thread Asif Iqbal
On 10/20/06, Darren J Moffat [EMAIL PROTECTED] wrote: Asif Iqbal wrote: On 10/20/06, Darren J Moffat [EMAIL PROTECTED] wrote: Asif Iqbal wrote: Hi, I have an X2100 with two 74G disks. I built the OS on the first disk with slice0 root 10G ufs, slice1 2.5G swap, slice6 25MB ufs and slice7

Re: [zfs-discuss] Re: copying a large file..

2006-10-30 Thread Matthew Ahrens
Jeremy Teo wrote: This is the same problem described in 6343653 : want to quickly copy a file from a snapshot. Actually it's a somewhat different problem. Copying a file from a snapshot is a lot simpler than copying a file from a different filesystem. With snapshots, things are a lot more

Re: [zfs-discuss] thousands of ZFS file systems

2006-10-30 Thread Erblichs
Hi, My suggestion is to direct any command output that may print thousands of lines to a file. I have not tried that number of FSs, so my first suggestion is to have a lot of physical memory installed. The second item that I would be concerned with is path
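
For example (output path arbitrary), redirecting to a file keeps the terminal usable when listing that many datasets:

zfs list -H -t filesystem -o name,used,available > /var/tmp/zfs-list.out
wc -l /var/tmp/zfs-list.out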

Re: [zfs-discuss] Re: Recommended Minimum Hardware for ZFS Fileserver?

2006-10-30 Thread Richard Elling - PAE
Wes Williams wrote: Thanks, gents, for your replies. I've used a very large-config W2100z with ZFS for a while, but I didn't know how low you can go for ZFS to shine, though a 64-bit CPU seems to be the minimum performance threshold. Now that Sun's store is [sort of] working again, I can see

[zfs-discuss] ZFS thinks my 7-disk pool has imaginary disks

2006-10-30 Thread Rince
Hi all, I recently created a RAID-Z1 pool out of a set of 7 SCSI disks, using the following command: # zpool create magicant raidz c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 c5t5d0 c5t6d0 It worked fine, but I was slightly confused by the size yield (99 GB vs the 116 GB I had on my other RAID-Z1 pool of
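
On the size question, note that zpool list reports the raw capacity of every device in the pool (parity included), while zfs list and df report usable space after RAID-Z parity is taken out - roughly a one-disk deduction for a single-parity raidz vdev. A quick way to compare, using the pool name from the post:

zpool list magicant      # SIZE: raw capacity of all 7 disks
zfs list magicant        # AVAIL: usable space, roughly 6/7 of raw minus overhead
zpool status magicant    # lists exactly which devices ended up in the raidz vdev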

Re: [zfs-discuss] Re: Recommended Minimum Hardware for ZFS Fileserver?

2006-10-30 Thread Bart Smaalders
Wes Williams wrote: Thanks, gents, for your replies. I've used a very large-config W2100z with ZFS for a while, but I didn't know how low you can go for ZFS to shine, though a 64-bit CPU seems to be the minimum performance threshold. Now that Sun's store is [sort of] working again, I can see

[zfs-discuss] ZFS Performance Question

2006-10-30 Thread Jay Grogan
Ran 3 tests using mkfile to create a 6GB file on UFS and ZFS file systems. Command run: mkfile -v 6gb /ufs/tmpfile. Test 1: UFS-mounted LUN (2m2.373s). Test 2: UFS-mounted LUN with the directio option (5m31.802s). Test 3: ZFS LUN (single LUN in a pool) (3m13.126s). SunFire V120, 1 Qlogic 2340, Solaris 10
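
A sketch of the three runs as described, with hypothetical device names and the mount options spelled out (the original post doesn't give them):

# Test 1: UFS on the LUN
mount -F ufs /dev/dsk/c2t0d0s0 /ufs
time mkfile -v 6gb /ufs/tmpfile

# Test 2: UFS with forcedirectio
umount /ufs
mount -F ufs -o forcedirectio /dev/dsk/c2t0d0s0 /ufs
time mkfile -v 6gb /ufs/tmpfile

# Test 3: ZFS pool on a single LUN
zpool create tank c2t1d0
time mkfile -v 6gb /tank/tmpfile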

[zfs-discuss] Re: Recommended Minimum Hardware for ZFS Fileserver?

2006-10-30 Thread Wes Williams
Thanks again for your input, gents. I was able to get a W1100z inexpensively with 1GB RAM and a 2.4 GHz Opteron...now I'll just have to manufacture my own drive slide rails since Sun won't sell the darn things [no, I don't want an 80GB IDE drive and apple pie with that!] and I'm not paying $100

[zfs-discuss] Re: recover zfs data from a crashed system?

2006-10-30 Thread Jason Williams
Hi Senthil, We experienced a situation very close to this. Due to some instabilities, we weren't able to export the zpool safely from the distressed system (a T2000 running SXb41). The only free system we had was an X4100, which was running S10 6/06. Both were SAN attached. The filesystem

Re: [zfs-discuss] ZFS Performance Question

2006-10-30 Thread David Dyer-Bennet
On 10/30/06, Jay Grogan [EMAIL PROTECTED] wrote: Ran 3 tests using mkfile to create a 6GB file on UFS and ZFS file systems. Command run: mkfile -v 6gb /ufs/tmpfile. Test 1: UFS-mounted LUN (2m2.373s). Test 2: UFS-mounted LUN with the directio option (5m31.802s). Test 3: ZFS LUN (single LUN in a pool)

Re: [zfs-discuss] ZFS Performance Question

2006-10-30 Thread Chad Leigh -- Shire.Net LLC
On Oct 30, 2006, at 10:45 PM, David Dyer-Bennet wrote: Also, stacking it on top of an existing RAID setup is kinda missing the entire point! Everyone keeps saying this, but I don't think it is missing the point at all. Checksumming and all the other goodies still work fine and you can
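
Chad's point, in command form (LUN name hypothetical): even on a single hardware-RAID LUN, ZFS still checksums every block, and on builds recent enough to have the copies property it can even repair bad blocks from a second (ditto) copy:

zpool create tank c3t0d0        # single LUN exported by the array
zfs set copies=2 tank           # optional: keep two copies of each data block
                                # so checksum errors can be repaired even
                                # without a ZFS-level mirror
zpool scrub tank                # reads and verifies every checksum end to end
zpool status -v tank            # reports any corruption the array passed through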

Re: [zfs-discuss] thousands of ZFS file systems

2006-10-30 Thread Cyril Plisko
On 10/30/06, Robert Milkowski [EMAIL PROTECTED] wrote: 1. Rebooting the server could take several hours right now with so many file systems; I believe this problem is being addressed right now. Well, I've done a quick test on b50 - 10K filesystems took around 5 minutes to boot. Not bad,
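
For reference, a test like Cyril's can be reproduced with a simple loop (pool name hypothetical); mounting all the datasets is the part that shows up at boot:

# Create 10,000 filesystems under a hypothetical pool "tank".
i=0
while [ $i -lt 10000 ]; do
    zfs create tank/fs$i
    i=`expr $i + 1`
done

# Time a fresh mount of all of them, roughly what happens during boot.
zfs umount -a
time zfs mount -a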