[zfs-discuss] zfs problem vdev I/O failure

2011-04-25 Thread Konstantin Kuklin
Good morning. I have a problem with ZFS: ZFS filesystem version 4, ZFS storage pool version 15. Yesterday my machine running FreeBSD 8.2-RELENG shut down with an 'ad4 detached' error while I was copying a big file... and after the reboot, two WD Green 1TB drives said goodbye. One of them died, and the other came back with ZFS errors: Apr

Re: [zfs-discuss] zfs problem vdev I/O failure

2011-04-25 Thread Pawel Tyll
Hi Konstantin, zpool status:

Flash# zpool status
  pool: zroot
 state: DEGRADED
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
   see: http://www.sun.com/msg/ZFS-8000-HC
 scrub: resilver in
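
The recovery step named in the action line is a single command; a minimal sketch, using the pool name from the status output above:

Flash# zpool clear zroot
Flash# zpool status zroot

zpool clear resets the error counters and lets ZFS retry the faulted devices; if the disks are genuinely dead, the pool will simply fault again.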

[zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-25 Thread Edward Ned Harvey
There are a lot of conflicting references on the Internet, so I'd really like to solicit actual experts (ZFS developers or people who have physical evidence) to weigh in on this... After searching around, the reference I found to be the most seemingly useful was Erik's post here:

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-25 Thread Roy Sigurd Karlsbakk
After modifications that I hope are corrections, I think the post should look like this: The rule of thumb is 270 bytes per DDT entry, and 200 bytes of ARC for every L2ARC entry; the DDT doesn't count toward this ARC space usage. E.g.: I have 1TB of 4k blocks that are to be deduped, and it turns
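
For reference, a back-of-the-envelope check of that example (a sketch only: it assumes every 4 KiB block is unique and uses the two rule-of-thumb figures quoted above):

entries=$(( (1 << 40) / 4096 ))                        # 268,435,456 blocks in 1 TiB
echo "$(( entries * 270 / 1024 / 1024 / 1024 )) GiB"   # ~67 GiB of DDT at 270 B/entry
echo "$(( entries * 200 / 1024 / 1024 / 1024 )) GiB"   # ~50 GiB of ARC headers if all blocks sit in L2ARC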

Re: [zfs-discuss] zfs problem vdev I/O failure

2011-04-25 Thread Konstantin Kuklin
So, I installed FreeBSD 8.2 with the ZFS v28 patch and get this error message, with ZFS freezing completely:

Solaris: Warning: can't open object for zroot/var/crash
log_sysevent: type 19 is not implemented
log_sysevent: type 19 is not implemented
log_sysevent: type 19 is not implemented
log_sysevent: type 19 is

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-25 Thread Erik Trimble
On 4/25/2011 8:20 AM, Edward Ned Harvey wrote: There are a lot of conflicting references on the Internet, so I'd really like to solicit actual experts (ZFS developers or people who have physical evidence) to weigh in on this... After searching around, the reference I found to be the most

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-25 Thread Neil Perrin
On 04/25/11 11:55, Erik Trimble wrote: On 4/25/2011 8:20 AM, Edward Ned Harvey wrote: And one more comment: Based on what's below, it seems that the DDT gets stored on the cache device and also in RAM. Is that correct? What if you didn't have a cache device? Shouldn't it *always* be in

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-25 Thread Freddie Cash
On Mon, Apr 25, 2011 at 10:55 AM, Erik Trimble erik.trim...@oracle.com wrote: Min block size is 512 bytes. Technically, isn't the minimum block size 2^(ashift value)? Thus, on 4 KB disks where the vdevs have an ashift=12, the minimum block size will be 4 KB. -- Freddie Cash fjwc...@gmail.com
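
A quick way to confirm which ashift a pool's vdevs actually got (a sketch; zdb with no arguments dumps the cached configuration of every imported pool, and the line shown is example output for an ashift=12 vdev):

# zdb | grep ashift
            ashift: 12

On 4 KB-sector disks, ashift=12 means nothing smaller than 4 KB is ever allocated, which is exactly Freddie's point.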

[zfs-discuss] Drive replacement speed

2011-04-25 Thread Brandon High
I'm in the process of replacing drives in a pool, and the resilver times seem to have increased with each device. The way that I'm doing this is by pulling a drive, physically replacing it, then doing 'cfgadm -c configure ; zpool replace tank '. I don't have any hot-swap bays available, so
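
Spelled out, the per-drive cycle looks roughly like this (the attachment point and device names are hypothetical; the original message elides them):

# cfgadm -c configure sata1/3
# zpool replace tank c1t3d0
# zpool status tank            # watch the resilver progress

With a single-argument zpool replace, ZFS assumes the new disk occupies the old disk's slot.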

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-25 Thread Brandon High
On Mon, Apr 25, 2011 at 8:20 AM, Edward Ned Harvey opensolarisisdeadlongliveopensola...@nedharvey.com wrote: and 128k assuming default recordsize.  (BTW, recordsize seems to be a zfs property, not a zpool property.  So how can you know or configure the blocksize for something like a zvol iscsi
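
For zvols the analogous knob is volblocksize rather than recordsize, and it is fixed at creation time; a sketch (names and sizes hypothetical):

# zfs create -V 100G -o volblocksize=8k tank/iscsivol
# zfs get volblocksize tank/iscsivol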

[zfs-discuss] arcstat updates

2011-04-25 Thread Richard Elling
Hi ZFSers, I've been working on merging the Joyent arcstat enhancements with some of my own and am now to the point where it is time to broaden the requirements gathering. The result is to be merged into the illumos tree. arcstat is a perl script to show the value of ARC kstats as they change
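
The counters behind it are ordinary kstats, so on illumos/Solaris you can inspect the raw values directly even without the script:

# kstat -m zfs -n arcstats

arcstat samples these at a fixed interval and prints the deltas in columns.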

Re: [zfs-discuss] Drive replacement speed

2011-04-25 Thread Richard Elling
On Apr 25, 2011, at 2:52 PM, Brandon High wrote: I'm in the process of replacing drive in a pool, and the resilver times seem to have increased with each device. The way that I'm doing this is by pulling a drive, physically replacing it, then doing 'cfgadm -c configure ; zpool replace

[zfs-discuss] How does ZFS dedup space accounting work with quota?

2011-04-25 Thread Fred Liu
Cindy, The following is quoted from the ZFS Dedup FAQ: Deduplicated space accounting is reported at the pool level. You must use the zpool list command rather than the zfs list command to identify disk space consumption when dedup is enabled. If you use the zfs list command to review deduplicated
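
In practice that means comparing the two views (pool name hypothetical): zpool list reports physical, post-dedup allocation along with a DEDUP ratio column, while zfs list reports logical, pre-dedup sizes.

# zpool list tank
# zfs list -r tank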

[zfs-discuss] Spare drives sitting idle in raidz2 with failed drive

2011-04-25 Thread Lamp Zy
Hi, One of my drives failed in Raidz2 with two hot spares:

# zpool status
  pool: fwgpool0
 state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and

Re: [zfs-discuss] Drive replacement speed

2011-04-25 Thread Brandon High
On Mon, Apr 25, 2011 at 4:45 PM, Richard Elling richard.ell...@gmail.com wrote: If there is other work going on, then you might be hitting the resilver throttle. By default, it will delay 2 clock ticks, if needed. It can be turned There is some other access to the pool from nfs and cifs
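
On illumos/Solaris the throttle can be relaxed on a live kernel with mdb; a sketch (a value of 0 disables the resilver delay entirely, and the write does not persist across reboots):

# echo zfs_resilver_delay/W0t0 | mdb -kw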

Re: [zfs-discuss] Spare drives sitting idle in raidz2 with failed drive

2011-04-25 Thread Brandon High
On Mon, Apr 25, 2011 at 4:56 PM, Lamp Zy lam...@gmail.com wrote: I'd expect the spare drives to auto-replace the failed one but this is not happening. What am I missing? Is the autoreplace property set to 'on'?

# zpool get autoreplace fwgpool0
# zpool set autoreplace=on fwgpool0

I really

Re: [zfs-discuss] How does ZFS dedup space accounting work with quota?

2011-04-25 Thread Brandon High
On Mon, Apr 25, 2011 at 4:53 PM, Fred Liu fred_...@issi.com wrote: So how can I set the quota size on a file system with dedup enabled? I believe the quota applies to the non-dedup'd data size. If a user stores 10G of data, it will use 10G of quota, regardless of whether it dedups at 100:1 or
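
A quick way to observe this (dataset name hypothetical): set a quota, then compare it against the dataset's used figure, which is charged at the logical, pre-dedup size:

# zfs set quota=10G tank/home/fred
# zfs get used,quota tank/home/fred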

Re: [zfs-discuss] How does ZFS dedup space accounting work with quota?

2011-04-25 Thread Fred Liu
Hmm, it seems dedup is pool-based, not filesystem-based. If it could offer finer-grained granularity (per filesystem, say), that would be great! It's a pity! NetApp is sweet in this respect. Thanks. Fred -Original Message- From: Brandon High [mailto:bh...@freaks.com] Sent: Tuesday, April 26, 2011

Re: [zfs-discuss] How does ZFS dedup space accounting work with quota?

2011-04-25 Thread Ian Collins
On 04/26/11 01:13 PM, Fred Liu wrote: Hmm, it seems dedup is pool-based, not filesystem-based. That's correct. Although it can be turned off and on at the filesystem level (assuming it is enabled for the pool). If it could offer finer-grained granularity (per filesystem, say), that would be great!
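
To illustrate the distinction: the DDT itself is pool-wide, but the property is per dataset (names hypothetical), and only blocks written while dedup=on go through the dedup table:

# zfs set dedup=on tank/vmimages
# zfs set dedup=off tank/scratch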

Re: [zfs-discuss] Drive replacement speed

2011-04-25 Thread Brandon High
On Mon, Apr 25, 2011 at 5:26 PM, Brandon High bh...@freaks.com wrote: Setting zfs_resilver_delay seems to have helped some, based on the iostat output. Are there other tunables? I found zfs_resilver_min_time_ms while looking. I've tried bumping it up considerably, without much change. 'zpool
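
The same live-kernel approach applies to this tunable too (a sketch; the value is in milliseconds, 0t marks it as decimal, and the tunable sets a floor on how much of each txg sync is devoted to resilver I/O, so raising it only helps if the resilver is being preempted):

# echo zfs_resilver_min_time_ms/W0t5000 | mdb -kw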

Re: [zfs-discuss] Spare drives sitting idle in raidz2 with failed drive

2011-04-25 Thread Lamp Zy
Thanks Brandon, On 04/25/2011 05:47 PM, Brandon High wrote: On Mon, Apr 25, 2011 at 4:56 PM, Lamp Zy lam...@gmail.com wrote: I'd expect the spare drives to auto-replace the failed one but this is not happening. What am I missing? Is the autoreplace property set to 'on'? # zpool get

Re: [zfs-discuss] How does ZFS dedup space accounting work with quota?

2011-04-25 Thread Erik Trimble
On 4/25/2011 6:23 PM, Ian Collins wrote: On 04/26/11 01:13 PM, Fred Liu wrote: Hmm, it seems dedup is pool-based, not filesystem-based. That's correct. Although it can be turned off and on at the filesystem level (assuming it is enabled for the pool). Which is effectively the same as