[zfs-discuss] Bad pool...

2011-05-24 Thread Roy Sigurd Karlsbakk
Hi all

I have a rather large pool that has been a bit troublesome. We've lost some 
drives (WD Black), and though replacing them should have worked out fine, I now 
have a pool that doesn't look too healthy.

http://paste.ubuntu.com/611973/

Two drives have been resilvered, but the old drive entries still stick around 
in the pool. The drive that has died still hasn't been taken over by a spare, 
although the two spares show up as AVAIL.

Does anyone know how I can fix this?

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. 
It is an elementary imperative for all pedagogues to avoid excessive use of 
idioms of foreign origin. In most cases, adequate and relevant synonyms exist 
in Norwegian.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Monitoring disk seeks

2011-05-24 Thread a . smith

Hi,

  see the seeksize script on this URL:

http://prefetch.net/articles/solaris.dtracetopten.html

Not used it but looks neat!
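The seeksize script uses DTrace to aggregate the distance between consecutive I/O offsets per process. As a rough sketch of the same idea in plain shell (the trace file and block addresses below are hypothetical, made up for illustration):

```shell
# Hypothetical trace: one starting block address per I/O, in issue order.
cat <<'EOF' > /tmp/io_blocks.txt
1000
1008
1016
500000
500008
1024
EOF

# Print the absolute distance between each I/O and the one before it;
# small numbers mean sequential access, large numbers mean seeks.
awk 'NR > 1 { d = $1 - prev; if (d < 0) d = -d; print d }
     { prev = $1 }' /tmp/io_blocks.txt
```

The DTrace version does this in-kernel with an aggregation, so no trace file is needed at all.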

cheers Andy.





Re: [zfs-discuss] Monitoring disk seeks

2011-05-24 Thread Sašo Kiselkov
On 05/24/2011 03:08 PM, a.sm...@ukgrid.net wrote:
 Hi,
 
   see the seeksize script on this URL:
 
 http://prefetch.net/articles/solaris.dtracetopten.html
 
 Not used it but looks neat!
 
 cheers Andy.

I already did and it does the job just fine. Thank you for your kind
suggestion.

BR,
--
Saso


[zfs-discuss] ndmp?

2011-05-24 Thread Edward Ned Harvey
When I search around, I see that nexenta has ndmp, and solaris 10 does not,
and there was at least some talk about supporting ndmp in opensolaris ...
So ...

 

Is ndmp present in solaris 11 express?  Is it an installable 3rd party
package?  How would you go about supporting ndmp if you wanted to?



Re: [zfs-discuss] ndmp?

2011-05-24 Thread Darren J Moffat

On 05/24/11 14:37, Edward Ned Harvey wrote:

When I search around, I see that nexenta has ndmp, and solaris 10 does
not, and there was at least some talk about supporting ndmp in
opensolaris ... So ...

Is ndmp present in solaris 11 express? Is it an installable 3rd party
package? How would you go about supporting ndmp if you wanted to?


It is present, it is not 3rd party.

Click here to install it:

http://pkg.oracle.com/solaris/release/p5i/0/service%2Fstorage%2Fndmp.p5i

Man pages are here:

http://download.oracle.com/docs/cd/E19963-01/html/821-1462/ndmpadm-1m.html
http://download.oracle.com/docs/cd/E19963-01/html/821-1462/ndmpd-1m.html
http://download.oracle.com/docs/cd/E19963-01/html/821-1462/ndmpstat-1m.html

What do you mean by supporting it?

I believe (though I haven't tested it) it works with Oracle Secure 
Backup as well as NetBackup and Networker.


--
Darren J Moffat


Re: [zfs-discuss] ndmp?

2011-05-24 Thread Hung-ShengTsao (Lao Tsao) Ph.D.



On 5/24/2011 9:37 AM, Edward Ned Harvey wrote:


When I search around, I see that nexenta has ndmp, and solaris 10 does 
not, and there was at least some talk about supporting ndmp in 
opensolaris ...  So ...


Is ndmp present in solaris 11 express?  Is it an installable 3rd party 
package?  How would you go about supporting ndmp if you wanted to?



You can buy support for S11 Express / Solaris from Oracle for non-Oracle hardware:
http://www.oracle.com/us/products/servers-storage/solaris/non-sun-x86-081976.html





Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-24 Thread Orvar Korvar
The netapp lawsuit is solved. No conflicts there.

Regarding ZFS, it is open under the CDDL license. The source code that is 
already out in the open stays open. Nexenta is using the open-sourced version 
of ZFS. Oracle might close future ZFS versions, but Nexenta's ZFS is open and 
cannot be closed.
-- 
This message posted from opensolaris.org


[zfs-discuss] ZFS, Oracle and Nexenta

2011-05-24 Thread Hans Rattink
I have a more general question about the intellectual-property rights around 
ZFS, prompted by a look at the NexentaStor storage solution.

Perhaps unnecessary to mention, but for completeness: Nexenta has created an 
open-source SAN solution that runs on commodity hardware; Compellent, for 
example, has a NAS based on Nexenta. NexentaStor is built on the ZFS 
filesystem and sounds (for that reason) very promising. Now I wonder what the 
threats to this are, and whether Oracle is one of them, when I read, for 
example, in a Gartner report:

"Gartner cautions about the uncertain nature of future developments of the 
open-source ZFS code, as Oracle intends to focus on monetizing ZFS." *

And on The Register I read:

"One outcome is that Oracle agrees to license the relevant patents pertaining 
to ZFS from NetApp. This would then open the way for Coraid and other ZFS-using 
storage suppliers to have to license them as well, significantly upsetting 
their business models unless the license fees are set low." **

I would like to know what grip Oracle (or perhaps NetApp) has on ZFS. Are 
parts of the code owned by Oracle? Can they put claims on parts of ZFS?

Regards, Hans.

*  
http://www.gartner.com/technology/media-products/reprints/hitachi/vol3/article2/article2.html?WT.ac=us_hp_sp1r21_p=v
** http://www.theregister.co.uk/2010/07/06/netapp_coraid/


Re: [zfs-discuss] Bad pool...

2011-05-24 Thread Donald Stahl
 Two drives have been resilvered, but the old drives still stick. The drive 
 that has died still hasn't been taken over by a spare, although the two 
 spares show up as AVAIL.
For the one that hasn't been replaced try doing:
zpool replace dbpool c8t24d0 c4t43d0

For the two that have already been replaced you can try:
zpool detach dbpool c4t1d0/old
zpool detach dbpool c4t6d0/old

If that doesn't work then you need the disk ID from the old disks and
use that in the detach command instead of the c4t1d0 id.
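If it's unclear which names to pass, the stale entries show up in zpool status with an /old suffix; here is a sketch of extracting them and generating the detach commands (the status excerpt below is invented for illustration, not Roy's actual pool):

```shell
# Hypothetical 'zpool status dbpool' excerpt with stale replaced vdevs.
cat <<'EOF' > /tmp/zpool_status.txt
            replacing-3   ONLINE
              c4t1d0/old  FAULTED
              c4t1d0      ONLINE
            replacing-7   ONLINE
              c4t6d0/old  FAULTED
              c4t6d0      ONLINE
EOF

# Emit one 'zpool detach' command per stale '/old' vdev entry.
awk '/\/old/ { print "zpool detach dbpool " $1 }' /tmp/zpool_status.txt
```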

-Don


Re: [zfs-discuss] Same device node appearing twice in same mirror; one faulted, one not...

2011-05-24 Thread Cindy Swearingen

Hi Alex,

If the hardware and cables were moved around then this is probably
the root cause of your problem. You should see if you can move the
devices/cabling back to what they were before the move.

The zpool history output provides the original device name, which
isn't c5t1d0, either:

# zpool create tank c13t0d0

You might grep the zpool history output to find out which disk was
eventually attached, like this:

# zpool history | grep attach

But it's clear from the zdb -l output that the devid for this particular
device changed, which we've seen happen on some hardware. If the devid
persists, ZFS can follow the devid of the device even if its physical path
changes, and is able to recover more gracefully.

If you continue to use this hardware for your storage pool, you should
export the pool before making any kind of hardware change.
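A quick way to spot this situation is to pull the path/devid pairs out of the
zdb -l output: more than one devid recorded under the same path means two
physical disks are sitting behind one device node. A sketch over a
hypothetical label excerpt:

```shell
# Hypothetical 'zdb -l' excerpt: two children claiming the same path.
cat <<'EOF' > /tmp/zdb_label.txt
path: '/dev/dsk/c5t1d0s0'
devid: 'id1,sd@SATA_WDC_WD5000AAKS-0_WD-WCAWF1939879/a'
path: '/dev/dsk/c5t1d0s0'
devid: 'id1,sd@SATA_WDC_WD5000AAKS-0_WD-WCAWF1769949/a'
EOF

# Count devid entries per path; a count > 1 flags a duplicated node.
awk -F"'" '/^path:/  { p = $2 }
           /^devid:/ { seen[p]++ }
           END { for (p in seen) print p, seen[p] }' /tmp/zdb_label.txt
```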

Thanks,

Cindy


On 05/21/11 18:05, Alex Dolski wrote:

Hi Cindy,

Thanks for the advice. This is just a little old Gateway PC provisioned as an 
informal workgroup server. The main storage is two SATA drives in an external 
enclosure, connected to a Sil3132 PCIe eSATA controller. The OS is snv_134b, 
upgraded from snv_111a.

I can't identify a cause in particular. The box has been running for several 
months without much oversight. It's possible that the two eSATA cables got 
reconnected to different ports after a recent move.

The backup has been made and I will try the export & import, per your advice 
(if the zpool command works - it does again at the moment, no reboot!). I will 
also try switching the eSATA cables to opposite ports.

Thanks,
Alex


Command output follows:

# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
   0. c5t1d0 <ATA-WDC WD5000AAKS-0-1D05-465.76GB>
  /pci@0,0/pci8086,2845@1c,3/pci1095,3132@0/disk@1,0
   1. c8d0 <DEFAULT cyl 9726 alt 2 hd 255 sec 63>
  /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
   2. c9d0 <DEFAULT cyl 38910 alt 2 hd 255 sec 63>
  /pci@0,0/pci-ide@1f,2/ide@1/cmdk@0,0
   3. c11t0d0 <WD-Ext HDD 1021-2002-931.51GB>
  /pci@0,0/pci107b,5058@1a,7/storage@1/disk@0,0


# zpool history tank
History for 'tank':
2010-06-18.15:14:16 zpool create tank c13t0d0
2011-05-07.02:00:07 zpool scrub tank
2011-05-14.02:00:08 zpool scrub tank
2011-05-21.02:00:12 zpool scrub tank
a million 'zfs snapshot' and 'zfs destroy' events from zfs-auto-snap omitted


# zdb -l /dev/dsk/c5t1d0s0

LABEL 0

version: 14
name: 'tank'
state: 0
txg: 3374337
pool_guid: 6242690959503408617
hostid: 8697169
hostname: 'wdssandbox'
top_guid: 17982590661103377266
guid: 1717308203478351258
vdev_children: 1
vdev_tree:
type: 'mirror'
id: 0
guid: 17982590661103377266
whole_disk: 0
metaslab_array: 23
metaslab_shift: 32
ashift: 9
asize: 500094468096
is_log: 0
children[0]:
type: 'disk'
id: 0
guid: 1717308203478351258
path: '/dev/dsk/c5t1d0s0'
devid: 'id1,sd@SATA_WDC_WD5000AAKS-0_WD-WCAWF1939879/a'
phys_path: '/pci@0,0/pci8086,2845@1c,3/pci1095,3132@0/disk@1,0:a'
whole_disk: 1
DTL: 27
children[1]:
type: 'disk'
id: 1
guid: 9267693216478869057
path: '/dev/dsk/c5t1d0s0'
devid: 'id1,sd@SATA_WDC_WD5000AAKS-0_WD-WCAWF1769949/a'
phys_path: '/pci@0,0/pci8086,2845@1c,3/pci1095,3132@0/disk@1,0:a'
whole_disk: 1
DTL: 893

LABEL 1

version: 14
name: 'tank'
state: 0
txg: 3374337
pool_guid: 6242690959503408617
hostid: 8697169
hostname: 'wdssandbox'
top_guid: 17982590661103377266
guid: 1717308203478351258
vdev_children: 1
vdev_tree:
type: 'mirror'
id: 0
guid: 17982590661103377266
whole_disk: 0
metaslab_array: 23
metaslab_shift: 32
ashift: 9
asize: 500094468096
is_log: 0
children[0]:
type: 'disk'
id: 0
guid: 1717308203478351258
path: '/dev/dsk/c5t1d0s0'
devid: 'id1,sd@SATA_WDC_WD5000AAKS-0_WD-WCAWF1939879/a'
phys_path: '/pci@0,0/pci8086,2845@1c,3/pci1095,3132@0/disk@1,0:a'
whole_disk: 1
DTL: 27
children[1]:
type: 'disk'
id: 1
guid: 9267693216478869057
path: '/dev/dsk/c5t1d0s0'
devid: 'id1,sd@SATA_WDC_WD5000AAKS-0_WD-WCAWF1769949/a'
phys_path: '/pci@0,0/pci8086,2845@1c,3/pci1095,3132@0/disk@1,0:a'
whole_disk: 1
DTL: 893

LABEL 2


Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-24 Thread Erik Trimble

On 5/24/2011 8:28 AM, Orvar Korvar wrote:

The netapp lawsuit is solved. No conflicts there.

Regarding ZFS, it is open under the CDDL license. The source code that is 
already out in the open stays open. Nexenta is using the open-sourced version 
of ZFS. Oracle might close future ZFS versions, but Nexenta's ZFS is open and 
cannot be closed.


There is no threat to Nexenta from the ZFS code itself; the license it was 
made available under explicitly has Oracle grant use of any patents *Oracle* 
might have.


However, since the terms of the NetApp/Oracle suit aren't available 
publicly, and I seriously doubt that NetApp gave up its patent claims,  
it could still be feasible for NetApp to sue Nexenta or whomever for 
alleged violations of *NetApp's* patents in the ZFS code.


That is, ZFS has no copyright infringement issues for 3rd parties. It 
has no patent issues from Oracle.  It *could* have patent issues from 
NetApp.


The possible impact of that is beyond my knowledge. IANAL. Nor do I 
speak for Oracle in any manner, official or unofficial.


--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)



[zfs-discuss] zpool lost, and no pools available?

2011-05-24 Thread Roy Sigurd Karlsbakk
Hi all

I just attended this HTC conference and had a chat with a guy from UiO 
(University of Oslo) about ZFS. He claimed Solaris/OI will die silently if a 
single pool fails. I have seen something similar earlier, then due to a bug in 
ZFS (two drives lost in a RAIDZ2, spares taking over, resilvering, and then a 
third drive lost), with the system hanging. Not even the rpool seemed to be 
available.

Can someone please confirm this or tell me if there is a workaround available?

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/


Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-24 Thread Hans Rattink
Hi Erik and Kebabber,

Thanks for your answers. Do I summarize it right by saying that the best 
conclusion would be that Nexenta has its own version of ZFS and has nothing to 
fear from Oracle or other ZFS developers, but that it's uncertain what NetApp 
might come up with, as the details aren't published?

Still, I wonder what Gartner means by Oracle monetizing ZFS... Perhaps that 
the advantage of ZFS for others like Compellent (and, with that, NexentaStor 
as well) might become less in the future if Oracle speeds up their 
implementation of it?

Regards, Hans


Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-24 Thread Hung-ShengTsao (Lao Tsao) Ph.D.


IMHO, Oracle would prefer customers go with the ZFS appliance, with its added 
Web GUI and all the extra support like Analytics, L2ARC, and ZIL on SSD, etc.


On 5/24/2011 2:30 PM, Hans Rattink wrote:

Hi Erik and Kebabber,

Thanks for your answers. Do I summarize it right by saying that the best 
conclusion would be that Nexenta has its own version of ZFS and has nothing to 
fear from Oracle or other ZFS developers, but that it's uncertain what NetApp 
might come up with, as the details aren't published?

Still, I wonder what Gartner means by Oracle monetizing ZFS... Perhaps that 
the advantage of ZFS for others like Compellent (and, with that, NexentaStor 
as well) might become less in the future if Oracle speeds up their 
implementation of it?

Regards, Hans


Re: [zfs-discuss] [OpenIndiana-discuss] zpool lost, and no pools available?

2011-05-24 Thread Albert Lee
man zpool /failmode

-Albert
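For context, failmode is a per-pool property with three documented values: wait (the default, which blocks I/O until the devices return), continue (which returns EIO to new write requests), and panic. A small sketch that validates a value and prints the zpool set command it maps to (the pool name tank is just a placeholder):

```shell
# Validate a failmode value against the three settings documented in
# zpool(1M) and print the command that would apply it to a pool.
set_failmode() {
  pool="$1"; mode="$2"
  case "$mode" in
    wait|continue|panic) echo "zpool set failmode=$mode $pool" ;;
    *) echo "invalid failmode: $mode" >&2; return 1 ;;
  esac
}

set_failmode tank continue
```

Note the property is set per pool; whether one hung pool can stall the others is exactly the complaint later in this thread.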

On Tue, May 24, 2011 at 1:20 PM, Roy Sigurd Karlsbakk r...@karlsbakk.net 
wrote:
 Hi all

 I just attended this HTC conference and had a chat with a guy from UiO 
 (university of oslo) about ZFS. He claimed Solaris/OI will die silently if a 
 single pool fails. I have seen similar earlier, then due to a bug in ZFS (two 
 drives lost in a RAIDz2, spares taking over, resilvering and then a third 
 drive lost), and the system is hanging. Not even the rpool seems to be 
 available.

 Can someone please confirm this or tell me if there is a workaround available?

 Vennlige hilsener / Best regards

 roy
 --
 Roy Sigurd Karlsbakk
 (+47) 97542685
 r...@karlsbakk.net
 http://blogg.karlsbakk.net/

 ___
 OpenIndiana-discuss mailing list
 openindiana-disc...@openindiana.org
 http://openindiana.org/mailman/listinfo/openindiana-discuss



Re: [zfs-discuss] Bad pool...

2011-05-24 Thread Roy Sigurd Karlsbakk
  Shouldn't ZFS detach these automatically? It has done so earlier...
 It is not supposed to - at least not that I recall.

Earlier replaces have gone well. One thing is spares, which I can understand 
somewhat, but dead drives should definitely be tossed out when replaced.

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/


Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-24 Thread Hans Rattink
 IMHO, oracle would prefer customer go with ZFS
 appliance with added 
 WebGUI and all the extra support like Analytics,
 L2ARc and ZIL with SSD  etc

Last week I saw a mirrored ZIL on Zeus SSDs in a Boston NexentaStor solution.


Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-24 Thread Hung-ShengTsao (Lao Tsao) Ph.D.

Yes. IMHO, Oracle and Nexenta are targeting different customers.


On 5/24/2011 3:30 PM, Hans Rattink wrote:

IMHO, oracle would prefer customer go with ZFS
appliance with added
WebGUI and all the extra support like Analytics,
L2ARc and ZIL with SSD  etc

Last week I saw a mirrored ZIL on Zeus SSDs in a Boston NexentaStor solution.


Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-24 Thread Richard Elling
On May 24, 2011, at 11:30 AM, Hans Rattink wrote:

 Hi Erik and Kebabber,
 
 Thanks for your answers. Do I summarize it right by saying that the best 
 conclusion would be that Nexenta has its own version of ZFS and has nothing 
 to fear from Oracle or other ZFS developers, but that it's uncertain what 
 NetApp might come up with, as the details aren't published?
 
 Still, I wonder what Gartner means by Oracle monetizing ZFS...

Simply means that if you want ZFS from Oracle, you have to pay money.

 Perhaps that the advantage of ZFS for others like Compellent (and with that, 
 NexentaStor as well) might become less in the future if Oracle speeds up 
 their implementation of it?

There are many ZFS implementations, each evolving as the contributors desire.
Diversity and innovation is a good thing.
 -- richard



Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-24 Thread Brandon High
On Tue, May 24, 2011 at 12:41 PM, Richard Elling
richard.ell...@gmail.com wrote:
 There are many ZFS implementations, each evolving as the contributors desire.
 Diversity and innovation is a good thing.

... unless Oracle's zpool v30 is different than Nexenta's v30.

-B

-- 
Brandon High : bh...@freaks.com


Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-24 Thread Hans Rattink
Thanks all, this cleared up some grey details for me!


Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-24 Thread Richard Elling
On May 24, 2011, at 12:49 PM, Brandon High wrote:

 On Tue, May 24, 2011 at 12:41 PM, Richard Elling
 richard.ell...@gmail.com wrote:
 There are many ZFS implementations, each evolving as the contributors desire.
 Diversity and innovation is a good thing.
 
 ... unless Oracle's zpool v30 is different than Nexenta's v30.

It is safe to say Nexenta is unlikely to ever have a pool version 30. We are 
moving forward with the new versioning method that supersedes the (limited) 
numbered system of the past.

Of course, Oracle broke this first by not implementing version 21 in Solaris 
10 :-)
 -- richard



Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-24 Thread Ian Collins

 On 05/25/11 07:49 AM, Brandon High wrote:

On Tue, May 24, 2011 at 12:41 PM, Richard Elling
richard.ell...@gmail.com  wrote:

There are many ZFS implementations, each evolving as the contributors desire.
Diversity and innovation is a good thing.

... unless Oracle's zpool v30 is different than Nexenta's v30.


That could be a disaster for everyone if they are incompatible.

Now with Oracle development in secret, I guess incompatible branches of 
ZFS are inevitable.


--
Ian.



Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-24 Thread Hans Rattink
Hi Brandon,

Thanks for the details. Sounds to me like Nexenta is in the lead!

Kind regards,
Hans Rattink





2011/5/24 Richard Elling richard.ell...@gmail.com

 On May 24, 2011, at 12:49 PM, Brandon High wrote:

  On Tue, May 24, 2011 at 12:41 PM, Richard Elling
  richard.ell...@gmail.com wrote:
  There are many ZFS implementations, each evolving as the contributors
 desire.
  Diversity and innovation is a good thing.
 
  ... unless Oracle's zpool v30 is different than Nexenta's v30.

 It is safe to say Nexenta is unlikely to ever have a pool version 30. We
 are moving forward
  with the new versioning method that supersedes the (limited) numbered
 system of the past.

 Of course, Oracle broke this first by not implementing version 21 in
 Solaris 10 :-)
  -- richard




Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-24 Thread LaoTsao
Well, with the various forks of open-source projects (e.g. ZFS, OpenSolaris, 
OpenIndiana), they are all different. There is no guarantee they will be 
compatible.

Sent from my iPad
Hung-Sheng Tsao ( LaoTsao) Ph.D

On May 24, 2011, at 4:40 PM, Ian Collins i...@ianshome.com wrote:

 On 05/25/11 07:49 AM, Brandon High wrote:
 On Tue, May 24, 2011 at 12:41 PM, Richard Elling
 richard.ell...@gmail.com  wrote:
 There are many ZFS implementations, each evolving as the contributors 
 desire.
 Diversity and innovation is a good thing.
 ... unless Oracle's zpool v30 is different than Nexenta's v30.
 
 That could be a disaster for everyone if they are incompatible.
 
 Now with Oracle development in secret, I guess incompatible branches of ZFS 
 are inevitable.
 
 -- 
 Ian.
 


Re: [zfs-discuss] Same device node appearing twice in same mirror; one faulted, one not...

2011-05-24 Thread Alex Dolski
Sure enough, Cindy, the eSATA cables had been crossed. I exported, powered 
off, reversed the cables, booted, imported, and the pool is currently 
resilvering with both c5t0d0 & c5t1d0 present in the mirror. :) Thank you!!

Alex



On May 24, 2011, at 9:58 AM, Cindy Swearingen wrote:

 [full quote of Cindy's message and the earlier command output trimmed]

Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-24 Thread Peter Jeremy
On 2011-May-25 03:49:43 +0800, Brandon High bh...@freaks.com wrote:
... unless Oracle's zpool v30 is different than Nexenta's v30.

This would be unfortunate but no worse than the current situation
with UFS - Solaris, *BSD and HP Tru64 all have native UFS filesystems,
all of which are incompatible.

I believe the various OSS projects that use ZFS have formed a working
group to co-ordinate ZFS amongst themselves.  I don't know if Oracle
was invited to join (though given the way Oracle has behaved in all
the other OSS working groups it was a member of, having Oracle onboard
might be a disadvantage).

-- 
Peter Jeremy


Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-24 Thread Brandon High
On Tue, May 24, 2011 at 3:17 PM, Peter Jeremy
peter.jer...@alcatel-lucent.com wrote:
 I believe the various OSS projects that use ZFS have formed a working
 group to co-ordinate ZFS amongst themselves.  I don't know if Oracle
 was invited to join (though given the way Oracle has behaved in all

Richard would probably know for certain.

There will probably be a fork at some point to an OSS ZFS and an
Oracle ZFS. Hopefully neither side will actively try to break
compatibility.

-B

-- 
Brandon High : bh...@freaks.com


Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-24 Thread Richard Elling
On May 24, 2011, at 3:46 PM, Brandon High wrote:

 On Tue, May 24, 2011 at 3:17 PM, Peter Jeremy
 peter.jer...@alcatel-lucent.com wrote:
 I believe the various OSS projects that use ZFS have formed a working
 group to co-ordinate ZFS amongst themselves.  I don't know if Oracle
 was invited to join (though given the way Oracle has behaved in all
 
 Richard would probably know for certain.

Yes, Oracle has representation on the ZFS working group.

 There will probably be a fork at some point to an OSS ZFS and an
 Oracle ZFS.

That break occurred in August 2010.

 Hopefully neither side will actively try to break
 compatibility.

Yes! A solution to the versioning issue appears to have reached consensus.
I observe that the current Solaris 10/11 versioning incompatibility issue 
doesn't seem to be causing rioting in the streets :-)
 -- richard



Re: [zfs-discuss] Bad pool...

2011-05-24 Thread Roy Sigurd Karlsbakk
  The system became non-responsive after two drives were lost, and
  replaced with spares, in that VDEV. That bug has been filed and
  acknowledged. Take a RAIDZ2 with two spares and remove a drive from
  the pool, let it resilver to a spare, remove another, wait until it
  resilvers again, and remove the third. The system will become rather
  dead - even the rpool will be unavailable, even if both the data
  pool and the rpool are theoretically healthy
 
 Can't say I've ever run into that situation. I'd suggest looking into
 the pool failmode setting but that still wouldn't make a lot of sense.
 Any idea why you are getting so many failures?

CC:ing this to the appropriate lists

As a first, the default is to let go of failed devices. I haven't tweaked that 
part, nor any part of the pool. If a drive fails, it should be replaced by a 
spare, and when a drive is replaced by a new one, the old ghost should 
disappear. Neither of these always happens. It seems the zpool sometimes 
forgets a dead drive and leaves it hanging. This may trigger the bug which 
renders a pool, and indeed the system, unusable (if two drives in a RAIDZ2 are 
lost but resilvered, losing a third will hang the system).

The remedy seemed to be to zpool detach the drives. Still, the bug(s) exist(s) 
that allow a system to be rendered unusable with just a few drives lost, long 
before the pool is lost.

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/


Re: [zfs-discuss] [OpenIndiana-discuss] zpool lost, and no pools available?

2011-05-24 Thread Roy Sigurd Karlsbakk
  I just attended this HTC conference and had a chat with a guy from
  UiO (university of oslo) about ZFS. He claimed Solaris/OI will die
  silently if a single pool fails. I have seen similar earlier, then
  due to a bug in ZFS (two drives lost in a RAIDz2, spares taking
  over, resilvering and then a third drive lost), and the system is
  hanging. Not even the rpool seems to be available.
 
  Can someone please confirm this or tell me if there is a workaround
  available?

 man zpool /failmode

You may want to RTFM yourself before replying. The docs say the standard 
behavior is to put the pool into wait, which is OK, but the problem is that 
not only the pool in question is put into wait - all pools are.

Please refrain from RTFMing people before digging into the material.

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/