On Sun, Jun 12, 2011 at 3:54 AM, Johan Eliasson
johan.eliasson.j...@gmail.com wrote:
I replaced a smaller disk in my tank2, so now they're all 2TB. But look,
zfs still thinks it's a pool of 1.5 TB disks:
nebol@filez:~# zpool list tank2
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH
Indeed it was!
Thanks!!
--
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
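The symptom above (a pool that keeps reporting its old size after every disk has been replaced with a larger one) is normally resolved with the pool's `autoexpand` property, or by expanding each device in place. A sketch of the commands, assuming the pool name `tank2` from the quote and a hypothetical device name:

```shell
# Let the pool grow automatically when larger disks are attached:
zpool set autoexpand=on tank2

# Or expand an already-replaced disk in place (repeat per device;
# c0t0d0 is a placeholder device name):
zpool online -e tank2 c0t0d0

# SIZE should now reflect the 2TB disks:
zpool list tank2
```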
On Sat, Jun 11, 2011 at 08:26:34PM +0400, Jim Klimov wrote:
2011-06-11 19:15, Pasi Kärkkäinen wrote:
On Sat, Jun 11, 2011 at 08:35:19AM -0500, Edmund White wrote:
I've had two incidents where performance tanked suddenly, leaving the VM
guests and Nexenta SSH/Web consoles
Did you try a read-only import as well? I THINK it goes like this:
zpool import -o ro -o cachefile=none -F -f badpool
Did you manage to capture any error output? For example, is it an option for
you to set up a serial console and copy-paste the error text from the serial
terminal on another
On May 10, 2011, at 9:18 AM, Ray Van Dolson wrote:
We recently had a disk fail on one of our whitebox (SuperMicro) ZFS
arrays (Solaris 10 U9).
The disk began throwing errors like this:
May 5 04:33:44 dev-zfs4 scsi: [ID 243001 kern.warning] WARNING:
/pci@0,0/pci8086,3410@9/pci15d9,400@0
On Jun 11, 2011, at 5:46 AM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
See FEC suggestion from another poster ;)
Well, of course, all storage media have built-in hardware FEC. At least
disk
On Jun 11, 2011, at 6:35 AM, Edmund White wrote:
Posted in greater detail at Server Fault -
http://serverfault.com/q/277966/13325
Replied in greater detail at same.
I have an HP ProLiant DL380 G7 system running NexentaStor. The server has
36GB RAM, 2 LSI 9211-8i SAS controllers (no SAS
On Jun 11, 2011, at 9:26 AM, Jim Klimov wrote:
2011-06-11 19:15, Pasi Kärkkäinen wrote:
On Sat, Jun 11, 2011 at 08:35:19AM -0500, Edmund White wrote:
I've had two incidents where performance tanked suddenly, leaving the VM
guests and Nexenta SSH/Web consoles inaccessible and requiring a
Hi All,
I have an interesting question that may or may not be answerable from
some internal
ZFS semantics.
I have a Sun Messaging Server which has 5 ZFS based email stores. The
Sun Messaging server
uses hard links to link identical messages together. Messages are stored
in standard SMTP
On Sun, Jun 12, 2011 at 4:14 PM, Scott Lawson
scott.law...@manukau.ac.nz wrote:
I have an interesting question that may or may not be answerable from some
internal
ZFS semantics.
This is really standard Unix filesystem semantics.
[...]
So total storage used is around ~7.5MB due to the hard
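The "standard Unix filesystem semantics" referred to above can be seen directly with `stat`: hardlinked names share one inode and one set of data blocks, and the inode's link count tracks how many directory entries point at it. A minimal illustration (file names are hypothetical, not the mail store's layout):

```python
import os
import tempfile

d = tempfile.mkdtemp()
msg = os.path.join(d, "message-0001")
with open(msg, "w") as f:
    f.write("X" * 1024)            # one 1 KiB message body

dup = os.path.join(d, "message-0001.dup")
os.link(msg, dup)                  # second directory entry, same inode

st = os.stat(msg)
print(st.st_nlink)                 # 2: two names, one copy of the data
print(os.stat(dup).st_ino == st.st_ino)  # True: same inode
```

Tools like `du` count the shared blocks once, which is why many hardlinked copies of a message consume the space of a single copy.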
2011-06-12 23:57, Richard Elling wrote:
How long should it wait? Before you answer, read through the thread:
http://lists.illumos.org/pipermail/developer/2011-April/001996.html
Then add your comments :-)
-- richard
Interesting thread. I did not quite get the resentment against
a
On Jun 12, 2011, at 4:18 PM, Jim Klimov wrote:
2011-06-12 23:57, Richard Elling wrote:
How long should it wait? Before you answer, read through the thread:
http://lists.illumos.org/pipermail/developer/2011-April/001996.html
Then add your comments :-)
-- richard
Interesting thread.
Some time ago I wrote a script to find duplicate files and replace
them with hardlinks to a single inode. Obviously this is only suitable for
identical files that won't change separately in the future, such as distro archives.
I can send it to you offlist, but it would be slow in your case because it
is not
2011-06-13 2:28, Nico Williams wrote:
PS: Is it really the case that Exchange still doesn't deduplicate
e-mails? Really? It's much simpler to implement dedup in a mail
store than in a filesystem...
That's especially strange, because NTFS has hardlinks and softlinks...
Not that Microsoft
On 13/06/11 10:28 AM, Nico Williams wrote:
On Sun, Jun 12, 2011 at 4:14 PM, Scott Lawson
scott.law...@manukau.ac.nz wrote:
I have an interesting question that may or may not be answerable from some
internal
ZFS semantics.
This is really standard Unix filesystem semantics.
I
On 6/12/11 6:18 PM, Jim Klimov jimkli...@cos.ru wrote:
2011-06-12 23:57, Richard Elling wrote:
How long should it wait? Before you answer, read through the thread:
http://lists.illumos.org/pipermail/developer/2011-April/001996.html
Then add your comments :-)
-- richard
But the point
On 13/06/11 11:36 AM, Jim Klimov wrote:
Some time ago I wrote a script to find duplicate files and replace
them with hardlinks to a single inode. Obviously this is only suitable for
identical files that won't change separately in the future, such as distro archives.
I can send it to you offlist, but it
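The script quoted above is not shown in the thread; a minimal sketch of that kind of duplicate-to-hardlink pass (names and the hashing strategy are assumptions, not Jim's actual code) might look like:

```python
import hashlib
import os

def hardlink_duplicates(root):
    """Replace duplicate regular files under root with hardlinks to one inode.

    Only safe for files that will never diverge again (e.g. distro
    archives), since every name shares the same data blocks afterwards.
    """
    by_hash = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.islink(path) or not os.path.isfile(path):
                continue
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            original = by_hash.setdefault(h.hexdigest(), path)
            if original != path and not os.path.samefile(original, path):
                os.unlink(path)          # drop the duplicate copy
                os.link(original, path)  # relink to the surviving inode
```

As the thread notes, hashing every file makes this slow on a large mail store; a real implementation would first group candidates by size before hashing.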
On Jun 12, 2011, at 5:04 PM, Edmund White wrote:
On 6/12/11 6:18 PM, Jim Klimov jimkli...@cos.ru wrote:
2011-06-12 23:57, Richard Elling wrote:
How long should it wait? Before you answer, read through the thread:
http://lists.illumos.org/pipermail/developer/2011-April/001996.html
Then
On 6/12/11 7:25 PM, Richard Elling richard.ell...@gmail.com wrote:
Here's the timeline:
- The Intel X25-M was marked FAULTED Monday evening, 6pm. This was not
detected by NexentaStor.
Is the volume-check runner enabled? All of the check runner results are
logged in
the report database
On Sun, Jun 12, 2011 at 5:28 PM, Nico Williams n...@cryptonector.comwrote:
On Sun, Jun 12, 2011 at 4:14 PM, Scott Lawson
scott.law...@manukau.ac.nz wrote:
I have an interesting question that may or may not be answerable from
some
internal
ZFS semantics.
This is really standard Unix