[zfs-discuss] df -e in ZFS

2006-11-09 Thread John Cui
Hi all, When testing our programs, I ran into a problem. On UFS, we get the number of free inodes via 'df -e', then do some things based on this value; for example, if we create an empty file, the value will decrease by 1. But on ZFS, it does not work. I can still get a number via 'df -e', and create a same empty

Re: [zfs-discuss] df -e in ZFS

2006-11-09 Thread Robert Milkowski
Hello John, Thursday, November 9, 2006, 12:03:58 PM, you wrote: JC Hi all, JC When testing our programs, I got a problem. On UFS, we get the number of JC free inode via 'df -e', then do some things based this value, such as JC create an empty file, the value will decrease by 1. But on ZFS, it

Re: [zfs-discuss] df -e in ZFS

2006-11-09 Thread Mark Maybee
Robert Milkowski wrote: Hello John, Thursday, November 9, 2006, 12:03:58 PM, you wrote: JC Hi all, JC When testing our programs, I got a problem. On UFS, we get the number of JC free inode via 'df -e', then do some things based this value, such as JC create an empty file, the value will

[zfs-discuss] zfs+stripe detach

2006-11-09 Thread flama
Hi people, Is it possible to detach a device from a striped ZFS pool without destroying the pool? ZFS is similar to doms in Tru64, which has a detach-device-from-stripe operation that reallocates the datasets' space onto the remaining disks. thx. This message posted from opensolaris.org

[zfs-discuss] Some performance questions with ZFS/NFS/DNLC at snv_48

2006-11-09 Thread Tomas Ögren
Hello. We're currently using a Sun Blade1000 (2x750MHz, 1G ram, 2x160MB/s mpt scsi buses, skge GigE network) as a NFS backend with ZFS for distribution of free software like Debian (cdimage.debian.org, ftp.se.debian.org) and have run into some performance issues. We are running SX snv_48 and

[zfs-discuss] CR 6483250 closed: will not fix

2006-11-09 Thread Richard Elling - PAE
ZFS fans, Recalling our conversation about hot-plug and hot-swap terminology and use, I'm afraid to say that CR 6483250 has been closed as will-not-fix. No explanation was given. If you feel strongly about this, please open another CR and pile on. *Change Request ID*: 6483250 *Synopsis*: X2100

Re: [zfs-discuss] CR 6483250 closed: will not fix

2006-11-09 Thread Torrey McMahon
Richard Elling - PAE wrote: ZFS fans, Recalling our conversation about hot-plug and hot-swap terminology and use, I'm afraid to say that CR 6483250 has been closed as will-not-fix. No explanation was given. A bug that is closed will-not-fix should, at the very least, have some rationale as

[zfs-discuss] Dead drives and ZFS

2006-11-09 Thread Rainer Heilke
Greetings, all. I put myself into a bit of a predicament, and I'm hoping there's a way out. I had a drive (EIDE) in a ZFS mirror die on me. Not a big deal, right? Well, I bought two SATA drives to build a new mirror. Since they were about the same size (I wanted bigger drives, but they were

Re: [zfs-discuss] raid-z random read performance

2006-11-09 Thread Adam Leventhal
I don't think you'd see the same performance benefits on RAID-Z since parity isn't always on the same disk. Are you seeing hot/cool disks? Adam On Sun, Nov 05, 2006 at 04:03:18PM +0100, Pawel Jakub Dawidek wrote: In my opinion RAID-Z is closer to RAID-3 than to RAID-5. In RAID-3 you do only

Re: [zfs-discuss] Some performance questions with ZFS/NFS/DNLC at snv_48

2006-11-09 Thread Neil Perrin
Tomas Ögren wrote On 11/09/06 09:59,: 1. DNLC-through-ZFS doesn't seem to listen to ncsize. The filesystem currently has ~550k inodes and large portions of it is frequently looked over with rsync (over nfs). mdb said ncsize was about 68k and vmstat -s said we had a hitrate of ~30%, so I set
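For reference, the figures quoted in this thread (ncsize, dnlc_nentries, DNLC hit rate) can be read on a live Solaris system with commands roughly like the following. These are illustrative; mdb -k requires root, and the exact vmstat output format varies by release:

```shell
# Compiled-in DNLC size and the number of entries actually cached:
echo "ncsize/D" | mdb -k
echo "dnlc_nentries/D" | mdb -k

# Name-lookup cache hit statistics:
vmstat -s | grep -i 'name lookups'
```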

[zfs-discuss] Re: df -e in ZFS

2006-11-09 Thread Anton B. Rang
A UFS file system has a fixed number of inodes, set when the file system is created. df can simply report how many of those have been used, and how many are free. Most file systems, including ZFS and QFS, allocate inodes dynamically. In this case, there really isn’t a “number of files free”
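Anton's point can be observed directly through statvfs(2), which is where 'df -e' gets its number. A minimal sketch in Python (the path is just an example; on ZFS the reported value is derived from remaining space rather than a fixed inode table):

```python
import os

# 'df -e' reports the number of free "files" (inodes); the value
# comes from statvfs(2): f_ffree (total) / f_favail (unprivileged).
def free_inodes(path):
    st = os.statvfs(path)
    return st.f_favail

# On UFS this is a fixed budget set when the filesystem is created;
# ZFS allocates inodes dynamically, so the number it reports is
# derived from remaining space and need not drop by exactly 1 per
# file created.
print(free_inodes("/"))
```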

Re: [zfs-discuss] zfs+stripe detach

2006-11-09 Thread Cindy Swearingen
Hi-- ZFS stripes data across all pool configurations, but you can only detach a device from a mirrored storage pool. For more information, see this section: http://docs.sun.com/app/docs/doc/817-2271/6mhupg6ft?a=view However, figuring out that this operation is only supported in a mirrored
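A quick illustration of the restriction Cindy describes, with hypothetical pool and device names (zpool detach succeeds only when the device has a mirror sibling):

```shell
# Mirrored pool: detach works.
#   zpool status tank  ->  mirror  c1t0d0  c1t1d0
zpool detach tank c1t1d0

# Plain striped pool: the same command fails with an error like:
#   cannot detach c1t1d0: only applicable to mirror and replacing vdevs
```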

Re: [zfs-discuss] raid-z random read performance

2006-11-09 Thread Darren Dunham
I don't think you'd see the same performance benefits on RAID-Z since parity isn't always on the same disk. Are you seeing hot/cool disks? In addition, doesn't it always have to read all columns so that the parity can be validated? -- Darren Dunham
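Darren's observation is the crux of RAID-Z random-read performance: because each block spans all data columns and must be checksum-validated, a small read busies the whole group. A toy model of the effect, with made-up IOPS numbers purely for illustration:

```python
# Toy model (not ZFS code): aggregate random-read IOPS when every
# logical read must touch `disks_per_read` of the group's disks.
def random_read_iops(n_disks, disks_per_read, iops_per_disk=100):
    # Each read ties up `disks_per_read` spindles at once, so only
    # n_disks / disks_per_read reads can proceed in parallel.
    return iops_per_disk * n_disks // disks_per_read

# RAID-5-like: a small read hits one data disk, so IOPS scale
# with the number of spindles.
print(random_read_iops(5, 1))   # 500 with these made-up numbers

# RAID-Z-like: a read touches every data column so the block
# can be validated -> roughly one disk's worth of IOPS per group.
print(random_read_iops(5, 4))   # 125 with these made-up numbers
```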

Re: [zfs-discuss] Some performance questions with ZFS/NFS/DNLC at snv_48

2006-11-09 Thread eric kustarz
Neil Perrin wrote: Tomas Ögren wrote On 11/09/06 09:59,: 1. DNLC-through-ZFS doesn't seem to listen to ncsize. The filesystem currently has ~550k inodes and large portions of it is frequently looked over with rsync (over nfs). mdb said ncsize was about 68k and vmstat -s said we had a

Re: [zfs-discuss] Some performance questions with ZFS/NFS/DNLC at snv_48

2006-11-09 Thread Tomas Ögren
On 09 November, 2006 - Neil Perrin sent me these 1,6K bytes: Tomas Ögren wrote On 11/09/06 09:59,: 1. DNLC-through-ZFS doesn't seem to listen to ncsize. The filesystem currently has ~550k inodes and large portions of it is frequently looked over with rsync (over nfs). mdb said ncsize

Re: [zfs-discuss] raid-z random read performance

2006-11-09 Thread Tomas Ögren
On 09 November, 2006 - Darren Dunham sent me these 0,7K bytes: I don't think you'd see the same performance benefits on RAID-Z since parity isn't always on the same disk. Are you seeing hot/cool disks? In addition, doesn't it always have to read all columns so that the parity can be

Re: [zfs-discuss] Some performance questions with ZFS/NFS/DNLC at snv_48

2006-11-09 Thread Brian Wong
eric kustarz wrote: If the ARC detects low memory (via arc_reclaim_needed()), then we call arc_kmem_reap_now() and subsequently dnlc_reduce_cache() - which reduces the # of dnlc entries by 3% (ARC_REDUCE_DNLC_PERCENT). So yeah, dnlc_nentries would be really interesting to see (especially
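The 3% reduction step eric describes can be sketched as follows (the constant name is taken from the discussion; this is an illustrative model, not the actual OpenSolaris source):

```python
# Illustrative model of the DNLC trim performed on memory pressure.
ARC_REDUCE_DNLC_PERCENT = 3

def dnlc_reduce_target(dnlc_nentries):
    # Shrink the DNLC by 3% of its current entry count.
    reduction = dnlc_nentries * ARC_REDUCE_DNLC_PERCENT // 100
    return dnlc_nentries - reduction

# Starting from roughly ncsize=68k entries, one low-memory pass
# trims about 2k entries.
print(dnlc_reduce_target(68000))  # 65960
```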

Re: [zfs-discuss] Some performance questions with ZFS/NFS/DNLC at snv_48

2006-11-09 Thread Tomas Ögren
On 09 November, 2006 - Tomas Ögren sent me these 4,4K bytes: On 09 November, 2006 - Neil Perrin sent me these 1,6K bytes: nfs does have a maximum nmber of rnodes which is calculated from the memory available. It doesn't look like nrnode_max can be overridden. rnode seems to take 472

Re: [zfs-discuss] Some performance questions with ZFS/NFS/DNLC at snv_48

2006-11-09 Thread eric kustarz
Brian Wong wrote: eric kustarz wrote: If the ARC detects low memory (via arc_reclaim_needed()), then we call arc_kmem_reap_now() and subsequently dnlc_reduce_cache() - which reduces the # of dnlc entries by 3% (ARC_REDUCE_DNLC_PERCENT). So yeah, dnlc_nentries would be really interesting

Re: [zfs-discuss] zfs+stripe detach

2006-11-09 Thread Robert Milkowski
Hello flama, Thursday, November 9, 2006, 5:44:36 PM, you wrote: f Hi people, f Is possible detach a device from a stripe zfs without to destroy the pool?. f Zfs is similar to doms in tru64, and it have un detach device from f stripe, and it realloc the space of the datasets in free disks. Not

Re[2]: [zfs-discuss] Some performance questions with ZFS/NFS/DNLC at snv_48

2006-11-09 Thread Robert Milkowski
Hello Tomas, Thursday, November 9, 2006, 9:47:17 PM, you wrote: TÖ On 09 November, 2006 - Neil Perrin sent me these 1,6K bytes: TÖ Current memory usage (for some values of usage ;): TÖ # echo ::memstat|mdb -k TÖ Page Summary    Pages    MB    %Tot TÖ

Re[2]: [zfs-discuss] # devices in raidz.

2006-11-09 Thread Robert Milkowski
Hello Richard, Tuesday, November 7, 2006, 5:19:07 PM, you wrote: REP Robert Milkowski wrote: Saturday, November 4, 2006, 12:46:05 AM, you wrote: REP Incidentally, since ZFS schedules the resync iops itself, then it can REP really move along on a mostly idle system. You should be able to

Re: [zfs-discuss] Some performance questions with ZFS/NFS/DNLC at snv_48

2006-11-09 Thread Neil Perrin
Tomas Ögren wrote On 11/09/06 13:47,: On 09 November, 2006 - Neil Perrin sent me these 1,6K bytes: Tomas Ögren wrote On 11/09/06 09:59,: 1. DNLC-through-ZFS doesn't seem to listen to ncsize. The filesystem currently has ~550k inodes and large portions of it is frequently looked

Re[2]: [zfs-discuss] zfs mount stuck in zil_replay

2006-11-09 Thread Robert Milkowski
Hello Neil, I can see http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6478388 integrated. I guess it could be related to problem I described here, right? -- Best regards, Robertmailto:[EMAIL PROTECTED]

Re: [zfs-discuss] ZFS/iSCSI target integration

2006-11-09 Thread Adam Leventhal
Thanks for all the feedback. This PSARC case was approved yesterday and will be integrated relatively soon. Adam On Wed, Nov 01, 2006 at 01:33:33AM -0800, Adam Leventhal wrote: Rick McNeal and I have been working on building support for sharing ZVOLs as iSCSI targets directly into ZFS. Below

Re: [zfs-discuss] Re: df -e in ZFS

2006-11-09 Thread John Cui
Thanks to Anton, Robert, and Mark for the replies. Your answers confirmed my observation ;-). The reason I want to use up the inodes is that we need to test the behavior when both blocks and inodes are exhausted. If we only fill up the blocks, creating an empty file still succeeds. Thanks,

Re: [zfs-discuss] I/O patterns during a zpool replace: why write to the disk being replaced?

2006-11-09 Thread Erblichs
Bill Sommerfield, Because, first, I have seen a lot of I/O occur while a snapshot is being aged out of a system. I don't think that during the resilvering process accesses (reads, writes) are completely stopped to the orig_dev. I expect at

Re: [zfs-discuss] I/O patterns during a zpool replace: why write to the disk being replaced?

2006-11-09 Thread Bill Sommerfeld
On Thu, 2006-11-09 at 19:18 -0800, Erblichs wrote: Bill Sommerfield, Again, that's not how my name is spelled. With some normal sporadic read failure, accessing the whole pool may force repeated reads for the replace. please look again at the iostat I posted:

Re: [zfs-discuss] zfs mount stuck in zil_replay

2006-11-09 Thread Neil Perrin
Hi Robert, Yes, it could be related, or even the bug. Certainly the replay was (prior to this bug fix) extremely slow. I don't really have enough information to determine if it's the exact problem, though after re-reading your original post I strongly suspect it is. I also putback a companion

Re: [zfs-discuss] I/O patterns during a zpool replace: why write to the disk being replaced?

2006-11-09 Thread Erblichs
Bill Sommerfeld, Sorry. However, I am trying to explain what I think is happening on your system and why I consider this normal. Most of the reads for an FS replace are normally at the block level. To copy an FS, some level of reading MUST be done