Re: [zfs-discuss] Fwd: ZFS for consumers WAS:Yager on ZFS

2007-11-19 Thread Ian Collins
Paul Kraus wrote: I also like being able to see how much space I am using for each filesystem with a simple df rather than a du (which takes a while to run). I can also tune compression on a per-data-type basis (no real point in trying to compress media files that are already compressed MPEG and
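The per-data-type tuning described there is plain property setting on ZFS datasets; the pool and dataset names below are illustrative:

# No point compressing already-compressed media
zfs set compression=off tank/media
# Text and documents compress well
zfs set compression=on tank/docs
# Per-filesystem usage without a long-running du
zfs list -r -o name,used,avail,compressratio tank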

Re: [zfs-discuss] pls discontinue troll bait was: Yager on ZFS and

2007-11-19 Thread can you guess?
OTOH, when someone whom I don't know comes across as a pushover, he loses credibility. It may come as a shock to you, but some people couldn't care less about those who assess 'credibility' on the basis of form rather than on the basis of content - which means that you can either lose out

Re: [zfs-discuss] Fwd: ZFS for consumers WAS:Yager on ZFS

2007-11-19 Thread Mario Goebbels
For a home user, data integrity is probably as important as, if not more important than, for a corporate user. How many home users do regular backups? I'm a heavy computer user and probably passed the 500GB mark way before most other home users, did various stunts like running a RAID0 on IBM Deathstars,

[zfs-discuss] Recommended settings for dom0_mem when using zfs

2007-11-19 Thread K
I have an xVM b75 server and use zfs for storage (zfs root mirror and a raid-z2 datapool.) I see everywhere that it is recommended to have a lot of memory on a zfs file server... but I also need to relinquish a lot of my memory to be used by the domUs. What would be a good value for dom0_mem

Re: [zfs-discuss] [xen-discuss] Recommended settings for dom0_mem when using zfs

2007-11-19 Thread Mark Johnson
K wrote: I have an xVM b75 server and use zfs for storage (zfs root mirror and a raid-z2 datapool.) I see everywhere that it is recommended to have a lot of memory on a zfs file server... but I also need to relinquish a lot of my memory to be used by the domUs. What would be a good
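A sketch of the usual approach, with placeholder values rather than tested recommendations: cap dom0's memory on the hypervisor line in GRUB, then cap the ZFS ARC so it leaves headroom inside that allocation.

# /boot/grub/menu.lst -- pass dom0_mem on the xen.gz line (2048M is a placeholder)
kernel$ /boot/$ISADIR/xen.gz dom0_mem=2048M

# /etc/system -- keep the ARC well under dom0_mem (0x40000000 = 1 GB, a placeholder)
set zfs:zfs_arc_max = 0x40000000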

Re: [zfs-discuss] Fwd: ZFS for consumers WAS:Yager on ZFS

2007-11-19 Thread Paul Kraus
On 11/19/07, Ian Collins [EMAIL PROTECTED] wrote: For a home user, data integrity is probably as important as, if not more important than, for a corporate user. How many home users do regular backups? Let me correct a point I made badly the first time around: I value the data integrity provided by

Re: [zfs-discuss] zfs on a raid box

2007-11-19 Thread Dan Pritts
On Mon, Nov 19, 2007 at 11:10:32AM +0100, Paul Boven wrote: Any suggestions on how to further investigate / fix this would be very much welcomed. I'm trying to determine whether this is a zfs bug or one with the Transtec raidbox, and whether to file a bug with either Transtec (Promise) or zfs.
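Standard first steps for deciding which side of the cable a problem like this sits on, all stock Solaris commands:

# Per-vdev read/write/checksum error counts as ZFS sees them
zpool status -v
# FMA error telemetry, which records transport- and driver-level events
fmdump -eV | tail -40
# Driver and HBA messages from around the time of the failure
tail -100 /var/adm/messages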

Re: [zfs-discuss] slog tests on read throughput exhaustion (NFS)

2007-11-19 Thread Roch - PAE
Neil Perrin writes: Joe Little wrote: On Nov 16, 2007 9:13 PM, Neil Perrin [EMAIL PROTECTED] wrote: Joe, I don't think adding a slog helped in this case. In fact I believe it made performance worse. Previously the ZIL would be spread out over all devices but now all

Re: [zfs-discuss] slog tests on read throughput exhaustion (NFS)

2007-11-19 Thread Neil Perrin
Roch - PAE wrote: Neil Perrin writes: Joe Little wrote: On Nov 16, 2007 9:13 PM, Neil Perrin [EMAIL PROTECTED] wrote: Joe, I don't think adding a slog helped in this case. In fact I believe it made performance worse. Previously the ZIL would be spread out over
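For readers following along, a slog is attached with zpool add; on builds of this vintage a log device could be added but not later removed, so a test like Joe's is one-way for that pool (pool and device names illustrative):

# Move the ZIL from the main vdevs onto a dedicated device
zpool add tank log c2t0d0
# The device now appears under a separate "logs" heading
zpool status tank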

[zfs-discuss] raidz2

2007-11-19 Thread Brian Lionberger
If I yank out a disk in a 4-disk raidz2 array, shouldn't the other disks pick up without any errors? I have a 3120 JBOD and I went and yanked out a disk and everything got hosed. It's okay, because I'm just testing stuff and wanted to see raidz2 in action when a disk goes down. Am I

Re: [zfs-discuss] raidz2

2007-11-19 Thread Eric Schrock
On Mon, Nov 19, 2007 at 04:33:26PM -0700, Brian Lionberger wrote: If I yank out a disk in a 4-disk raidz2 array, shouldn't the other disks pick up without any errors? I have a 3120 JBOD and I went and yanked out a disk and everything got hosed. It's okay, because I'm just testing stuff
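A less violent way to rehearse the same failure, so the driver sees an orderly event rather than a surprise removal (pool and device names illustrative):

# Temporarily fault one disk; -t means it comes back on reboot
zpool offline -t tank c1t3d0
# The pool should show DEGRADED but keep serving data
zpool status -x
# Return the disk and let ZFS resync it
zpool online tank c1t3d0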

Re: [zfs-discuss] pls discontinue troll bait was: Yager on ZFS and

2007-11-19 Thread can you guess?
Big talk from someone who seems so intent on hiding their credentials. Say, what? Not that credentials mean much to me since I evaluate people on their actual merit, but I've not been shy about who I am (when I responded 'can you guess?' in registering after giving billtodd as

[zfs-discuss] Oops (accidentally deleted replaced drive)

2007-11-19 Thread Albert Chin
Running ON b66 and had a drive fail. Ran 'zpool replace' and resilvering began. But, accidentally deleted the replacement drive on the array via CAM.
# zpool status -v
...
  raidz2  DEGRADED  0  0  0

Re: [zfs-discuss] Oops (accidentally deleted replaced drive)

2007-11-19 Thread Eric Schrock
You should be able to do a 'zpool detach' of the replacement and then try again. - Eric On Mon, Nov 19, 2007 at 08:20:04PM -0600, Albert Chin wrote: Running ON b66 and had a drive fail. Ran 'zfs replace' and resilvering began. But, accidentally deleted the replacement drive on the array via
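Spelled out, with illustrative device names, the sequence Eric suggests is:

# Drop the half-finished replacement out of the raidz2
zpool detach tank c3t5d0
# Redo the replacement once the new drive is visible again
zpool replace tank c3t4d0 c3t5d0
# Resilvering should start over from the beginning
zpool status tank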

Re: [zfs-discuss] Recommended many-port SATA controllers for budget ZFS

2007-11-19 Thread Brian Hechinger
On Sun, Nov 18, 2007 at 02:18:21PM +0100, Peter Schuller wrote: Right now I have noticed that LSI has recently begun offering some lower-budget stuff; specifically I am looking at the MegaRAID SAS 8208ELP/XLP, which are very reasonably priced. I looked up the 8204XLP, which is really quite

[zfs-discuss] resilvering

2007-11-19 Thread Tim Cook
So... issues with resilvering yet again. This is a ~3TB pool. I have one raid-z of 5 500GB disks, and a second raid-z of 3 300GB disks. One of the 300GB disks failed, so I have replaced the drive. After doing the resilver, it takes approximately 5 minutes for it to complete 68.05% of the

Re: [zfs-discuss] Oops (accidentally deleted replaced drive)

2007-11-19 Thread Albert Chin
On Mon, Nov 19, 2007 at 06:23:01PM -0800, Eric Schrock wrote: You should be able to do a 'zpool detach' of the replacement and then try again. Thanks. That worked. - Eric On Mon, Nov 19, 2007 at 08:20:04PM -0600, Albert Chin wrote: Running ON b66 and had a drive fail. Ran 'zpool replace'

[zfs-discuss] Indexing other than hash tables

2007-11-19 Thread James Cone
Hello All, Mike Speirs at Sun in New Zealand pointed me toward you-all. I have several sets of questions, so I plan to group them and send several emails. This question is about the name/attribute mapping layer in ZFS. In the last version of the source-code that I read, it provides

Re: [zfs-discuss] resilvering

2007-11-19 Thread Tim Cook
After messing around... who knows what's going on with it now. Finally rebooted because I was sick of it hanging. After that, this is what it came back with:
root:= zpool status
  pool: fserv
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will

Re: [zfs-discuss] ZFS + DB + fragments

2007-11-19 Thread Richard Elling
James Cone wrote: Hello All, Here's a possibly-silly proposal from a non-expert. Summarising the problem: there's a conflict between small ZFS record size, for good random update performance, and large ZFS record size, for good sequential read performance. Poor sequential read
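The knob in question is per-dataset, which is what makes the conflict tractable: the database files can use small records while everything else keeps the large default (dataset names and block size illustrative):

# Match recordsize to the database block size on the DB dataset only
zfs set recordsize=8K tank/db
# Other datasets keep the 128K default, friendly to sequential reads
zfs get recordsize tank/db tank/export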

Re: [zfs-discuss] resilvering

2007-11-19 Thread Tim Cook
That locked up pretty quickly as well; one more reboot and this is what I'm seeing now:
root:= zpool status
  pool: fserv
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the
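One way to tell a genuinely slow resilver from one that keeps restarting is to sample progress over time; the pool name is taken from the output above:

# A restart shows up as the completion percentage dropping back
while :; do
    date
    zpool status fserv | grep -i resilver
    sleep 60
done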

Re: [zfs-discuss] zpool io to 6140 is really slow

2007-11-19 Thread Richard Elling
Asif Iqbal wrote: I have the following layout: A 490 with 8 1.8GHz CPUs and 16G mem. 6 6140s with 2 FC controllers, using the A1 and B1 controller ports at 4Gbps. Each controller has 2G NVRAM. On the 6140s I set up a raid0 LUN per SAS disk with a 16K segment size. On the 490 I created a zpool with 8 4+1
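For concreteness, a pool of eight 4+1 raidz sets as described would be created along these lines (pool and device names illustrative; only the first two of the eight vdevs shown):

zpool create datapool \
    raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 \
    raidz c2t5d0 c2t6d0 c2t7d0 c3t0d0 c3t1d0
# ...six more 5-disk raidz groups complete the 8 x (4+1) layout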

Re: [zfs-discuss] ZFS + DB + fragments

2007-11-19 Thread can you guess?
Regardless of the merit of the rest of your proposal, I think you have put your finger on the core of the problem: aside from some apparent reluctance on the part of some of the ZFS developers to believe that any problem exists here at all (and leaving aside the additional monkey wrench that

Re: [zfs-discuss] [storage-discuss] zpool io to 6140 is really slow

2007-11-19 Thread Asif Iqbal
On Nov 19, 2007 1:43 AM, Louwtjie Burger [EMAIL PROTECTED] wrote: On Nov 17, 2007 9:40 PM, Asif Iqbal [EMAIL PROTECTED] wrote: (Including storage-discuss) I have 6 6140s with 96 disks, 64 of which are Seagate ST337FC (300GB - 10K RPM FC-AL). Those disks are 2Gb disks,