Re: [zfs-discuss] Running on Dell hardware?

2011-01-12 Thread Ben Rockwood
If you're still having issues, go into the BIOS and disable C-States if you haven't already. They are responsible for most of the problems with 11th-generation PowerEdge. -- This message posted from opensolaris.org ___ zfs-discuss mailing list

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-14 Thread Ben Rockwood
On 8/14/10 1:12 PM, Frank Cusack wrote: Wow, what leads you guys to even imagine that S11 wouldn't contain comstar, etc.? *Of course* it will contain most of the bits that are current today in OpenSolaris. That's a very good question actually. I would think that COMSTAR would stay because

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-13 Thread Ben Rockwood
On 8/13/10 9:02 PM, C. Bergström wrote: Erast wrote: On 08/13/2010 01:39 PM, Tim Cook wrote: http://www.theregister.co.uk/2010/08/13/opensolaris_is_dead/ I'm a bit surprised at this development... Oracle really just doesn't get it. The part that's most disturbing to me is the fact they

Re: [zfs-discuss] ZFS Hard disk buffer at 100%

2010-05-09 Thread Ben Rockwood
The drive (c7t2d0) is bad and should be replaced. The second drive (c7t5d0) is either bad or going bad. This is exactly the kind of problem that can force a Thumper to its knees: ZFS performance is horrific, and as soon as you drop the bad disks things magically return to normal. My first

Re: [zfs-discuss] Mirrored Servers

2010-05-08 Thread Ben Rockwood
On 5/8/10 3:07 PM, Tony wrote: Lets say I have two servers, both running opensolaris with ZFS. I basically want to be able to create a filesystem where the two servers have a common volume, that is mirrored between the two. Meaning, each server keeps an identical, real time backup of the

Re: [zfs-discuss] Plugging in a hard drive after Solaris has booted up?

2010-05-07 Thread Ben Rockwood
On 5/7/10 9:38 PM, Giovanni wrote: Hi guys, I have a quick question, I am playing around with ZFS and here's what I did. I created a storage pool with several drives. I unplugged 3 out of 5 drives from the array, currently: NAME STATE READ WRITE CKSUM gpool

Re: [zfs-discuss] Benchmarking Methodologies

2010-04-21 Thread Ben Rockwood
On 4/21/10 2:15 AM, Robert Milkowski wrote: I haven't heard from you in a while! Good to see you here again :) Sorry for stating the obvious, but at the end of the day it depends on what your goals are. Are you interested in micro-benchmarks and comparison to other file systems? I think the most

[zfs-discuss] Benchmarking Methodologies

2010-04-20 Thread Ben Rockwood
I'm doing a little research study on ZFS benchmarking and performance profiling. Like most, I've had my favorite methods, but I'm re-evaluating my choices and trying to be a bit more scientific than I have in the past. To that end, I'm curious if folks wouldn't mind sharing their work on the

Re: [zfs-discuss] [Fwd: Re: [perf-discuss] ZFS performance issue - READ is slow as hell...]

2009-03-31 Thread Ben Rockwood
Ya, I agree that we need some additional data and testing. The iostat data in itself doesn't suggest to me that the process (dd) is slow but rather that most of the data is being retrieved elsewhere (ARC). An fsstat would be useful to correlate with the iostat data. One thing that also comes

[zfs-discuss] zdb to dump data

2008-10-30 Thread Ben Rockwood
Is there some hidden way to coax zdb into not just displaying data for a given DVA but dumping it in raw, usable form? I've got a pool with large amounts of corruption. Several directories are toast and I get I/O Error when trying to enter or read the directory... however I can
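The closest mechanism I'm aware of is zdb's -R option, which reads a block straight off a vdev by offset and size. Treat the following as a sketch only: the exact flag letters (and whether a raw-output flag exists at all) vary by build, and the pool name, vdev id, offset, and size here are placeholders, not values from the corrupted pool in the post.

```shell
# Sketch: read a block by vdev:offset:size with zdb. All numbers are
# hypothetical placeholders; flag support varies by build.
zdb -R tank 0:400000:20000       # display 0x20000 bytes at offset 0x400000 on vdev 0
zdb -R tank 0:400000:20000:r     # ':r' (where supported) emits the raw bytes to stdout
```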

Re: [zfs-discuss] Lost Disk Space

2008-10-20 Thread Ben Rockwood
No takers? :) benr.

[zfs-discuss] Lost Disk Space

2008-10-16 Thread Ben Rockwood
I've been struggling to fully understand why disk space seems to vanish. I've dug through bits of code and reviewed all the mails on the subject that I can find, but I still don't have a proper understanding of what's going on. I did a test with a local zpool on snv_97... zfs list, zpool
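One common source of "vanished" space in this kind of comparison is snapshots: space they hold shows up in a dataset's USED but not in the files you can count. A minimal sketch of totaling snapshot usage from `zfs list` output follows; since this machine may have no pool, a canned listing with illustrative figures (pre-converted to MB for simplicity) stands in for the real command.

```shell
# Sum the USED column of snapshot lines. The canned figures below are
# illustrative; on a live system, use the commented zfs list call instead.
OUT=/tmp/zfslist.txt
# zfs list -t snapshot > "$OUT"                # on the real system
cat > "$OUT" <<'EOF'
NAME            USED  AVAIL  REFER  MOUNTPOINT
tank/fs@mon     512   -      10240  -
tank/fs@tue     256   -      10300  -
EOF

awk 'NR > 1 { sum += $2 } END { printf "snapshots hold %d MB\n", sum }' "$OUT"
```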

Re: [zfs-discuss] ARCSTAT Kstat Definitions

2008-08-21 Thread Ben Rockwood
Thanks, not as much as I was hoping for but still extremely helpful. Can you, or others, have a look at this: http://cuddletech.com/arc_summary.html This is a Perl script that uses kstats to drum up a report such as the following:

System Memory:
  Physical RAM: 32759 MB
  Free
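The core of what arc_summary-style tools do is read the `kstat -p zfs:0:arcstats` name/value pairs and scale the byte counters into readable units. A minimal sketch, using canned sample values (not figures from a real host) since kstat isn't available everywhere:

```shell
# Parse kstat-style "module:instance:name:stat value" pairs and report MB.
# The sample values below are illustrative stand-ins.
STATS=/tmp/arcstats.txt
# kstat -p zfs:0:arcstats > "$STATS"           # on a live Solaris host
cat > "$STATS" <<'EOF'
zfs:0:arcstats:size  8589934592
zfs:0:arcstats:c     10737418240
zfs:0:arcstats:c_max 17179869184
EOF

awk '
  $1 ~ /:size$/  { printf "ARC Size:   %d MB\n", $2 / 1048576 }
  $1 ~ /:c$/     { printf "Target (c): %d MB\n", $2 / 1048576 }
  $1 ~ /:c_max$/ { printf "c_max:      %d MB\n", $2 / 1048576 }
' "$STATS"
```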

Re: [zfs-discuss] ARCSTAT Kstat Definitions

2008-08-21 Thread Ben Rockwood
It's a starting point, anyway. The key is to try to draw useful conclusions from the info to answer the torrent of "why is my ARC 30GB???" There are several things I'm unclear on whether or not I'm properly interpreting, such as: * As you state, the anon pages. Even the comment in code is, to

Re: [zfs-discuss] ARCSTAT Kstat Definitions

2008-08-21 Thread Ben Rockwood
New version is available (v0.2):
* Fixes divide by zero
* Includes tuning from /etc/system in output
* If prefetch is disabled, I explicitly say so
* Accounts for jacked anon count. Still needs improvement here.
* Added friendly explanations for MRU/MFU Ghost list counts. Page and

[zfs-discuss] ARCSTAT Kstat Definitions

2008-08-20 Thread Ben Rockwood
Would someone in the know be willing to write up (preferably blog) definitive definitions/explanations of all the arcstats provided via kstat? I'm struggling with proper interpretation of certain values, namely p, memory_throttle_count, and the mru/mfu+ghost hit vs demand/prefetch hit

Re: [zfs-discuss] How to delete hundreds of empty snapshots

2008-07-17 Thread Ben Rockwood
zfs list is mighty slow on systems with a large number of objects, and there is no foreseeable plan that I'm aware of to solve that problem. Nevertheless, you need to do a zfs list; therefore, do it once and work from that. zfs list > /tmp/zfs.out; for i in `grep mydataset@ /tmp/zfs.out`; do
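The approach above — one slow `zfs list`, then a loop over the saved output — can be sketched end to end. Since this machine may have no pool, a canned listing stands in for the real command, the dataset name "mydataset" is the placeholder from the post, and the destroy is printed rather than executed:

```shell
# One listing call, reused for the whole cleanup. Swap in the commented
# line on a live system; drop the echo to actually destroy snapshots.
LIST=/tmp/zfs.out
# zfs list -t snapshot -o name > "$LIST"       # the one (slow) listing call
printf '%s\n' mydataset@2008-07-01 other@keep mydataset@2008-07-02 > "$LIST"

for snap in $(grep 'mydataset@' "$LIST"); do
    echo zfs destroy "$snap"                   # dry run: print, don't destroy
done
```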

[zfs-discuss] 40min ls in empty directory

2008-07-16 Thread Ben Rockwood
I've run into an odd problem which I lovingly refer to as a black hole directory. On a Thumper used for mail stores we've found that find takes an exceptionally long time to run. There are directories that have as many as 400,000 files, which I immediately considered the culprit. However,

[zfs-discuss] ZFS and ACL's over NFSv3

2008-06-05 Thread Ben Rockwood
Can someone please clarify the ability to utilize ACLs over NFSv3 from a ZFS share? I can getfacl but I can't setfacl. I can't find any documentation in this regard. My suspicion is that ZFS shares must be NFSv4 in order to utilize ACLs, but I'm hoping this isn't the case. Can anyone
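For context, ZFS stores NFSv4-style ACLs, which is why the POSIX-draft setfacl used by NFSv3 clients doesn't map onto them. What does work is managing the ACL with the NFSv4-style chmod/ls interface, locally on the server or over NFSv4. A sketch (user "fred" and file.txt are placeholders):

```shell
# Grant an NFSv4-style ACE on a ZFS file; run locally on the server.
chmod A+user:fred:read_data/write_data:allow file.txt
ls -v file.txt                   # -v displays the full NFSv4 ACL
```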

Re: [zfs-discuss] Panic on Zpool Import (Urgent)

2008-01-17 Thread Ben Rockwood
sata:sata_max_queue_depth = 0x1 If you don't, life will be highly unpleasant and you'll believe that disks are failing everywhere when in fact they are not. benr. Ben Rockwood wrote: Today, suddenly, without any apparent reason that I can find, I'm getting panics during zpool import. The system panicked
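The tuning quoted above is an /etc/system setting and takes effect after a reboot. A non-destructive sketch that stages the line for review rather than editing /etc/system directly; the 0x1 value is the one from the post, and whether it suits other hardware is untested:

```shell
# Stage the /etc/system tuning from the post, then review and append
# it yourself as root (reboot required for it to take effect).
TUNING='set sata:sata_max_queue_depth = 0x1'
echo "$TUNING" > /tmp/etc-system.addition
cat /tmp/etc-system.addition
# cat /tmp/etc-system.addition >> /etc/system  # as root, then reboot
```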

Re: [zfs-discuss] Removing An Errant Drive From Zpool

2008-01-16 Thread Ben Rockwood
Robert Milkowski wrote: If you can't re-create the pool (+ backup/restore your data), I would recommend waiting for device removal in ZFS; in the meantime I would attach another drive so you've got a mirrored configuration, and remove it once there's device removal. Since you're already

[zfs-discuss] Removing An Errant Drive From Zpool

2008-01-15 Thread Ben Rockwood
I made a really stupid mistake... having trouble removing a hot spare marked as failed, I tried several ways to put it back in a good state. One means I tried was 'zpool add pool c5t3d0'... but I forgot the proper syntax: zpool add pool spare c5t3d0. Now I'm in a bind. I've got
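For anyone following along, the two commands from the post differ by one word but produce very different pools, and `zpool add -n` (dry run: print the resulting layout without committing) would have caught the mistake. Pool and device names are the ones from the post:

```shell
# Preview the layout change before committing it.
zpool add -n pool c5t3d0          # WRONG: adds the disk as a new top-level vdev
zpool add -n pool spare c5t3d0    # intended: adds it as a hot spare
```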

Re: [zfs-discuss] Removing An Errant Drive From Zpool

2008-01-15 Thread Ben Rockwood
Eric Schrock wrote: There's really no way to recover from this, since we don't have device removal. However, I'm surprised that no warning was given. There are at least two things that should have happened: 1. zpool(1M) should have warned you that the redundancy level you were

[zfs-discuss] Panic on Zpool Import (Urgent)

2008-01-12 Thread Ben Rockwood
Today, suddenly, without any apparent reason that I can find, I'm getting panics during zpool import. The system panicked earlier today and has been suffering since. This is snv_43 on a Thumper. Here's the stack: panic[cpu0]/thread=99adbac0: assertion failed: ss != NULL, file:

[zfs-discuss] ZFS Quota Oddness

2007-10-31 Thread Ben Rockwood
I've run across an odd issue with ZFS quotas. This is an snv_43 system with several zones/ZFS datasets, but only one is affected. The dataset shows 10GB used, 12GB referenced, but when counting the files only has 6.7GB of data: zones/ABC 10.8G 26.2G 12.0G /zones/ABC zones/[EMAIL

Re: [zfs-discuss] When I stab myself with this knife, it hurts... But - should it kill me?

2007-10-04 Thread Ben Rockwood
Dick Davies wrote: On 04/10/2007, Nathan Kroenert [EMAIL PROTECTED] wrote: Client A - import pool make couple-o-changes Client B - import pool -f (heh) Oct 4 15:03:12 fozzie ^Mpanic[cpu0]/thread=ff0002b51c80: Oct 4 15:03:12 fozzie genunix: [ID 603766 kern.notice]

Re: [zfs-discuss] ZFS for OSX - it'll be in there.

2007-10-04 Thread Ben Rockwood
Dale Ghent wrote: ...and eventually in a read-write capacity: http://www.macrumors.com/2007/10/04/apple-seeds-zfs-read-write- developer-preview-1-1-for-leopard/ Apple has seeded version 1.1 of ZFS (Zettabyte File System) for Mac OS X to Developers this week. The preview updates a

Re: [zfs-discuss] ZFS, iSCSI + Mac OS X Tiger (globalSAN iSCSI)

2007-07-05 Thread Ben Rockwood
George wrote: I have set up an iSCSI ZFS target that seems to connect properly from the Microsoft Windows initiator in that I can see the volume in MMC Disk Management. When I shift over to Mac OS X Tiger with globalSAN iSCSI, I am able to set up the Targets with the target name shown

[zfs-discuss] ZVol Panic on 62

2007-05-25 Thread Ben Rockwood
May 25 23:32:59 summer unix: [ID 836849 kern.notice] May 25 23:32:59 summer ^Mpanic[cpu1]/thread=1bf2e740: May 25 23:32:59 summer genunix: [ID 335743 kern.notice] BAD TRAP: type=e (#pf Page fault) rp=ff00232c3a80 addr=490 occurred in module unix due to a NULL pointer dereference May

Re: [zfs-discuss] New zfs pr0n server :)))

2007-05-19 Thread Ben Rockwood
Diego Righi wrote: Hi all, I just built a new zfs server for home and, being a long time and avid reader of this forum, I'm going to post my config specs and my benchmarks hoping this could be of some help for others :) http://www.sickness.it/zfspr0nserver.jpg

Re: [zfs-discuss] snapdir visible recursively throughout a dataset

2007-02-06 Thread Ben Rockwood
Darren J Moffat wrote: Ben Rockwood wrote: Robert Milkowski wrote: I haven't tried it but what if you mounted ro via loopback into a zone /zones/myzone01/root/.zfs is loop mounted in RO to /zones/myzone01/.zfs That is so wrong. ;) Besides just being evil, I doubt it'd

[zfs-discuss] Read Only Zpool: ZFS and Replication

2007-02-05 Thread Ben Rockwood
I've been playing with replication of a ZFS Zpool using the recently released AVS. I'm pleased with things, but just replicating the data is only part of the problem. The big question is: can I have a zpool open in 2 places? What I really want is a Zpool on node1 open and writable

[zfs-discuss] snapdir visible recursively throughout a dataset

2007-02-05 Thread Ben Rockwood
Is there an existing RFE for, what I'll wrongly call, recursively visible snapshots? That is, .zfs in directories other than the dataset root. Frankly, I don't need it available in all directories, although it'd be nice, but I do have a need for making it visible one directory down from the dataset
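For reference, what exists today is the snapdir property, which controls only whether .zfs is visible at the dataset root; there is no per-subdirectory option, which is exactly the gap the RFE asks about. A sketch ("pool/fs" is a placeholder dataset name):

```shell
# Make .zfs visible at the dataset mountpoint (root of the dataset only).
zfs set snapdir=visible pool/fs
zfs get snapdir pool/fs
```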

Re: [zfs-discuss] zfs / nfs issue (not performance :-) with courier-imap

2007-01-25 Thread Ben Rockwood
Robert Milkowski wrote: but if I click, say E, it has F's contents, F has G's contents, and no mail has D's contents that I can see. But the list in the mail client list view is correct. I don't believe it's a problem with the NFS/ZFS server. Please try with a simple dtrace script

Re: [zfs-discuss] ZFS over NFS extra slow?

2007-01-02 Thread Ben Rockwood
Brad Plecs wrote: I had a user report extreme slowness on a ZFS filesystem mounted over NFS over the weekend. After some extensive testing, the extreme slowness appears to only occur when a ZFS filesystem is mounted over NFS. One example is doing a 'gtar xzvf php-5.2.0.tar.gz'... over NFS

Re: [zfs-discuss] ZFS

2006-12-20 Thread Ben Rockwood
Andrew Summers wrote: So, I've read the wikipedia, and have done a lot of research on google about it, but it just doesn't make sense to me. Correct me if I'm wrong, but you can take a simple 5/10/20 GB drive or whatever size, and turn it into exabytes of storage space? If that is not

Re: [zfs-discuss] ZFS works in waves

2006-12-15 Thread Ben Rockwood
Stuart Glenn wrote: A little back story: I have a Norco DS-1220, a 12-bay SATA box. It is connected via eSATA (SiI3124) on PCI-X; two drives are straight connections, then the other two ports go to 5x multipliers within the box. My needs/hopes for this were to use 12 500GB drives and ZFS to make a

Re: [nfs-discuss] Re: [zfs-discuss] A Plea for Help: Thumper/ZFS/NFS/B43

2006-12-11 Thread Ben Rockwood
Robert Milkowski wrote: Hello eric, Saturday, December 9, 2006, 7:07:49 PM, you wrote: ek Jim Mauro wrote: Could be NFS synchronous semantics on file create (followed by repeated flushing of the write cache). What kind of storage are you using (feel free to send privately if you need to)

Re: [zfs-discuss] A Plea for Help: Thumper/ZFS/NFS/B43

2006-12-09 Thread Ben Rockwood
Spencer Shepler wrote: Good to hear that you have figured out what is happening, Ben. For future reference, there are two commands that you may want to make use of in observing the behavior of the NFS server and individual filesystems. There is the trusty, nfsstat command. In this case, you

Re: [zfs-discuss] A Plea for Help: Thumper/ZFS/NFS/B43

2006-12-09 Thread Ben Rockwood
Bill Moore wrote: On Fri, Dec 08, 2006 at 12:15:27AM -0800, Ben Rockwood wrote: Clearly ZFS file creation is just amazingly heavy even with ZIL disabled. If creating 4,000 files in a minute squashes 4 2.6GHz Opteron cores, we're in big trouble in the longer term. In the meantime I'm going

Re: [zfs-discuss] A Plea for Help: Thumper/ZFS/NFS/B43

2006-12-08 Thread Ben Rockwood
eric kustarz wrote: So I'm guessing there are lots of files being created over NFS in one particular dataset? We should figure out how many creates/second you are doing over NFS (I should have put a timeout on the script). Here's a really simple one (from your snoop it looked like you're only

[zfs-discuss] A Plea for Help: Thumper/ZFS/NFS/B43

2006-12-07 Thread Ben Rockwood
I've got a Thumper doing nothing but serving NFS. Its using B43 with zil_disabled. The system is being consumed in waves, but by what I don't know. Notice vmstat: 3 0 0 25693580 2586268 0 0 0 0 0 0 0 0 0 0 0 926 91 703 0 25 75 21 0 0 25693580 2586268 0 0 0 0 0 0 0 0 0

[zfs-discuss] Re: NFS Performance and Tar

2006-10-03 Thread Ben Rockwood
I was really hoping for some option other than ZIL_DISABLE, but finally gave up the fight. Some people suggested NFSv4 helping over NFSv3, but it didn't... at least not enough to matter. ZIL_DISABLE was the solution, sadly. I'm running B43/X86 and hoping to get up to 48 or so soonish (I BFU'd