Re: [zfs-discuss] Data loss bug - sidelined??

2009-05-01 Thread Roch Bourbonnais
On 6 Feb 2009 at 20:54, Ross Smith wrote: Something to do with cache was my first thought. It seems to be able to read and write from the cache quite happily for some time, regardless of whether the pool is live. If you're reading or writing large amounts of data, zfs starts experiencing

Re: [zfs-discuss] Data loss bug - sidelined??

2009-02-06 Thread Ross
Ok, I noticed somebody's flagged the bug as 'retest'. I don't know whether that's aimed at Sun or myself, but either way I'm installing snv_106 on a test machine now and will check whether this is still an issue. -- This message posted from opensolaris.org

Re: [zfs-discuss] Data loss bug - sidelined??

2009-02-06 Thread Ross
Ok, it's still happening in snv_106: I plugged a USB drive into a freshly installed system, and created a single disk zpool on it: # zpool create usbtest c1t0d0 I opened the (nautilus?) file manager in gnome, and copied the /etc/X11 folder to it. I then copied the /etc/apache folder to it,
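Ross's reproduction can be sketched as a short shell session. The pool name and device (`usbtest`, `c1t0d0`) are from his report; the unplug step is manual, and the final copy is an assumed illustration of the symptom, not a command from the original message:

```shell
# Create a single-disk pool on the USB drive (device name from Ross's report)
zpool create usbtest c1t0d0

# Copy some small directories onto it; these are absorbed by the cache first
cp -r /etc/X11 /usbtest/
cp -r /etc/apache /usbtest/

# Now physically unplug the USB drive, then keep writing.
# The reported bug: small writes still appear to succeed for a while,
# because ZFS is satisfying them from cache rather than the dead device.
cp -r /etc/zfs /usbtest/ && echo "write accepted on a detached pool"
```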

Re: [zfs-discuss] Data loss bug - sidelined??

2009-02-06 Thread Richard Elling
Ross, this is a pretty good description of what I would expect when failmode=continue. What happens when failmode=panic? -- richard Ross wrote: Ok, it's still happening in snv_106: I plugged a USB drive into a freshly installed system, and created a single disk zpool on it: # zpool create
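Richard's suggestion can be tried directly with the `failmode` pool property, which controls behaviour on catastrophic I/O failure (`wait`, `continue`, or `panic`). A sketch, assuming the `usbtest` pool from the earlier message:

```shell
# Inspect the current failure mode ("wait" is the shipped default)
zpool get failmode usbtest

# Switch to panic so an unrecoverable I/O failure halts the system
# immediately, instead of allowing cached writes to continue
zpool set failmode=panic usbtest
```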

Re: [zfs-discuss] Data loss bug - sidelined??

2009-02-06 Thread Ross Smith
I can check on Monday, but the system will probably panic... which doesn't really help :-) Am I right in thinking failmode=wait is still the default? If so, that's how it was set, since this testing was done on a clean install of snv_106. From what I've seen, I don't think this is a problem

Re: [zfs-discuss] Data loss bug - sidelined??

2009-02-06 Thread Brent Jones
On Fri, Feb 6, 2009 at 10:50 AM, Ross Smith myxi...@googlemail.com wrote: I can check on Monday, but the system will probably panic... which doesn't really help :-) Am I right in thinking failmode=wait is still the default? If so, that should be how it's set as this testing was done on a

Re: [zfs-discuss] Data loss bug - sidelined??

2009-02-06 Thread Ross Smith
Something to do with cache was my first thought. It seems to be able to read and write from the cache quite happily for some time, regardless of whether the pool is live. If you're reading or writing large amounts of data, zfs starts experiencing IO faults and offlines the pool pretty quickly.

[zfs-discuss] Data loss bug - sidelined??

2009-02-04 Thread Ross
In August last year I posted this bug. In brief: ZFS still accepts writes to a faulted pool, causing data loss, and potentially silent data loss: http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6735932 There have been no updates to the bug since
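Until the bug is resolved, one defensive check (a sketch only, assuming the `usbtest` pool name used elsewhere in the thread) is to flush dirty data and verify pool health before trusting that a copy actually reached stable storage:

```shell
# Flush dirty data so device errors surface now rather than later
sync

# Print status only for unhealthy pools; quiet output means all pools are OK
zpool status -x

# Scriptable per-pool health check
if [ "$(zpool list -H -o health usbtest)" != "ONLINE" ]; then
  echo "pool usbtest is not healthy; do not trust recent writes" >&2
fi
```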