Re: [zfs-discuss] Q: pool didn't expand. why? can I force it?

2011-06-12 Thread Tim Cook
On Sun, Jun 12, 2011 at 3:54 AM, Johan Eliasson johan.eliasson.j...@gmail.com wrote: I replaced a smaller disk in my tank2, so now they're all 2TB. But look, ZFS still thinks it's a pool of 1.5 TB disks: nebol@filez:~# zpool list tank2 NAME SIZE ALLOC FREE CAP DEDUP HEALTH
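The usual culprit when a pool does not grow after swapping in larger disks is the autoexpand pool property, which defaults to off. A sketch of the typical check and fix (the disk name below is only an example):

   # zpool get autoexpand tank2        usually shows "off" in this situation
   # zpool set autoexpand=on tank2
   # zpool online -e tank2 c0t3d0      or expand an already-replaced disk in place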

Re: [zfs-discuss] Q: pool didn't expand. why? can I force it?

2011-06-12 Thread Johan Eliasson
Indeed it was! Thanks!!

Re: [zfs-discuss] Impact of L2ARC device failure and SSD recommendations

2011-06-12 Thread Pasi Kärkkäinen
On Sat, Jun 11, 2011 at 08:26:34PM +0400, Jim Klimov wrote: 2011-06-11 19:15, Pasi Kärkkäinen wrote: On Sat, Jun 11, 2011 at 08:35:19AM -0500, Edmund White wrote: I've had two incidents where performance tanked suddenly, leaving the VM guests and Nexenta SSH/Web consoles

Re: [zfs-discuss] zpool import crashs SX11 trying to recovering a corrupted zpool

2011-06-12 Thread Jim Klimov
Did you try a read-only import as well? I THINK it goes like this: zpool import -o ro -o cachefile=none -F -f badpool Did you manage to capture any error output? For example, is it an option for you to set up a serial console and copy-paste the error text from the serial terminal on another
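If the pool is recent enough to support the readonly property, the read-only import can also be spelled as a pool property rather than a mount option; a sketch, using the same pool name as above:

   # zpool import -o readonly=on -o cachefile=none -F -f badpool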

Re: [zfs-discuss] Tuning disk failure detection?

2011-06-12 Thread Richard Elling
On May 10, 2011, at 9:18 AM, Ray Van Dolson wrote: We recently had a disk fail on one of our whitebox (SuperMicro) ZFS arrays (Solaris 10 U9). The disk began throwing errors like this: May 5 04:33:44 dev-zfs4 scsi: [ID 243001 kern.warning] WARNING: /pci@0,0/pci8086,3410@9/pci15d9,400@0
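Before tuning anything, it is worth seeing what the fault management framework already recorded for that disk; the standard Solaris tools for that (output will of course differ per system):

   # fmadm faulty       current fault diagnoses, if FMA ever reached one
   # fmdump -eV         raw error telemetry (ereports) leading up to the failure
   # iostat -En         per-device soft/hard/transport error counters from sd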

Re: [zfs-discuss] ZFS receive checksum mismatch

2011-06-12 Thread Richard Elling
On Jun 11, 2011, at 5:46 AM, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Jim Klimov See FEC suggestion from another poster ;) Well, of course, all storage media have built-in hardware FEC. At least disk
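For the FEC idea applied to a saved send stream, an external parity tool works without any ZFS support; a rough sketch using par2, with example dataset and file names:

   # zfs send tank/fs@snap > /backup/fs.zfs
   # par2 create -r10 /backup/fs.zfs        writes ~10% recovery data next to the stream
   # par2 verify /backup/fs.zfs.par2        later: verify, and 'par2 repair' if it fails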

Re: [zfs-discuss] Impact of L2ARC device failure and SSD recommendations

2011-06-12 Thread Richard Elling
On Jun 11, 2011, at 6:35 AM, Edmund White wrote: Posted in greater detail at Server Fault - http://serverfault.com/q/277966/13325 Replied in greater detail at same. I have an HP ProLiant DL380 G7 system running NexentaStor. The server has 36GB RAM, 2 LSI 9211-8i SAS controllers (no SAS
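Worth noting that a cache (L2ARC) device can at least be inspected and dropped from the pool while it stays online; a sketch with a made-up pool and device name:

   # zpool status -v <pool>          shows the cache device state (ONLINE/FAULTED/...)
   # iostat -En                      error counters for the SSD itself
   # zpool remove <pool> c2t5d0      cache devices can be removed from a live pool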

Re: [zfs-discuss] Impact of L2ARC device failure and SSD recommendations

2011-06-12 Thread Richard Elling
On Jun 11, 2011, at 9:26 AM, Jim Klimov wrote: 2011-06-11 19:15, Pasi Kärkkäinen wrote: On Sat, Jun 11, 2011 at 08:35:19AM -0500, Edmund White wrote: I've had two incidents where performance tanked suddenly, leaving the VM guests and Nexenta SSH/Web consoles inaccessible and requiring a

[zfs-discuss] ZFS Hard link space savings

2011-06-12 Thread Scott Lawson
Hi All, I have an interesting question that may or may not be answerable from some internal ZFS semantics. I have a Sun Messaging Server which has 5 ZFS-based email stores. The Sun Messaging Server uses hard links to link identical messages together. Messages are stored in standard SMTP
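One way to estimate the saving from the hard links on an existing store is to compare the sum of sizes per directory entry with the sum per unique inode; a rough sketch with an example path (it ignores compression and block rounding, which du would account for):

   sum of file sizes counting every directory entry (as if each link were a copy):
      find /mailstore -type f -exec ls -l {} + | awk '{s+=$5} END {print s}'
   sum of file sizes counting each inode only once (what is actually stored):
      find /mailstore -type f -exec ls -li {} + | awk '!seen[$1]++ {s+=$6} END {print s}'

The difference between the two numbers approximates what the hard links are saving.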

Re: [zfs-discuss] ZFS Hard link space savings

2011-06-12 Thread Nico Williams
On Sun, Jun 12, 2011 at 4:14 PM, Scott Lawson scott.law...@manukau.ac.nz wrote: I have an interesting question that may or may not be answerable from some internal ZFS semantics. This is really standard Unix filesystem semantics. [...] So total storage used is around ~7.5MB due to the hard

Re: [zfs-discuss] Impact of L2ARC device failure and SSD recommendations

2011-06-12 Thread Jim Klimov
2011-06-12 23:57, Richard Elling wrote: How long should it wait? Before you answer, read through the thread: http://lists.illumos.org/pipermail/developer/2011-April/001996.html Then add your comments :-) -- richard Interesting thread. I did not quite get the resentment against a

Re: [zfs-discuss] Impact of L2ARC device failure and SSD recommendations

2011-06-12 Thread Richard Elling
On Jun 12, 2011, at 4:18 PM, Jim Klimov wrote: 2011-06-12 23:57, Richard Elling wrote: How long should it wait? Before you answer, read through the thread: http://lists.illumos.org/pipermail/developer/2011-April/001996.html Then add your comments :-) -- richard Interesting thread.

Re: [zfs-discuss] ZFS Hard link space savings

2011-06-12 Thread Jim Klimov
Some time ago I wrote a script to find any duplicate files and replace them with hardlinks to one inode. Apparently this is only useful for identical files which won't change separately in the future, such as distro archives. I can send it to you offlist, but it would be slow in your case because it is not
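The general idea of such a script, very roughly (a minimal illustration only, not the script mentioned above; a real one should use a stronger hash than cksum's CRC, confirm with a byte-for-byte compare, and remember that hard links only work within one filesystem):

   #!/bin/sh
   # group files by checksum+size, then relink later duplicates to the first copy seen
   find "$1" -type f -exec cksum {} + | sort |
   while read sum size path; do
       if [ "$sum $size" = "$prev" ]; then
           ln -f "$first" "$path"            # replace duplicate with a hard link
       else
           prev="$sum $size"; first="$path"
       fi
   done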

Re: [zfs-discuss] ZFS Hard link space savings

2011-06-12 Thread Jim Klimov
2011-06-13 2:28, Nico Williams wrote: PS: Is it really the case that Exchange still doesn't deduplicate e-mails? Really? It's much simpler to implement dedup in a mail store than in a filesystem... That's especially strange, because NTFS has hardlinks and softlinks... Not that Microsoft

Re: [zfs-discuss] ZFS Hard link space savings

2011-06-12 Thread Scott Lawson
On 13/06/11 10:28 AM, Nico Williams wrote: On Sun, Jun 12, 2011 at 4:14 PM, Scott Lawson scott.law...@manukau.ac.nz wrote: I have an interesting question that may or may not be answerable from some internal ZFS semantics. This is really standard Unix filesystem semantics. I

Re: [zfs-discuss] Impact of L2ARC device failure and SSD recommendations

2011-06-12 Thread Edmund White
On 6/12/11 6:18 PM, Jim Klimov jimkli...@cos.ru wrote: 2011-06-12 23:57, Richard Elling wrote: How long should it wait? Before you answer, read through the thread: http://lists.illumos.org/pipermail/developer/2011-April/001996.html Then add your comments :-) -- richard But the point

Re: [zfs-discuss] ZFS Hard link space savings

2011-06-12 Thread Scott Lawson
On 13/06/11 11:36 AM, Jim Klimov wrote: Some time ago I wrote a script to find any duplicate files and replace them with hardlinks to one inode. Apparently this is only good for same files which don't change separately in future, such as distro archives. I can send it to you offlist, but it

Re: [zfs-discuss] Impact of L2ARC device failure and SSD recommendations

2011-06-12 Thread Richard Elling
On Jun 12, 2011, at 5:04 PM, Edmund White wrote: On 6/12/11 6:18 PM, Jim Klimov jimkli...@cos.ru wrote: 2011-06-12 23:57, Richard Elling wrote: How long should it wait? Before you answer, read through the thread: http://lists.illumos.org/pipermail/developer/2011-April/001996.html Then

Re: [zfs-discuss] Impact of L2ARC device failure and SSD recommendations

2011-06-12 Thread Edmund White
On 6/12/11 7:25 PM, Richard Elling richard.ell...@gmail.com wrote: Here's the timeline: - The Intel X25-M was marked FAULTED Monday evening, 6pm. This was not detected by NexentaStor. Is the volume-check runner enabled? All of the check runner results are logged in the report database

Re: [zfs-discuss] ZFS Hard link space savings

2011-06-12 Thread Tim Cook
On Sun, Jun 12, 2011 at 5:28 PM, Nico Williams n...@cryptonector.com wrote: On Sun, Jun 12, 2011 at 4:14 PM, Scott Lawson scott.law...@manukau.ac.nz wrote: I have an interesting question that may or may not be answerable from some internal ZFS semantics. This is really standard Unix