Re: [zfs-discuss] 'zfs recv' is very slow

2009-01-23 Thread River Tarnell
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Brent Jones:
 My results are much improved, on the order of 5-100 times faster
 (either over Mbuffer or SSH). 

this is good news - although not quite soon enough for my current 5TB zfs send
;-)

have you tested if this also improves the performance of incremental sends?
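
for anyone following along, the pipelines being compared look roughly like this
(a sketch only - host names, dataset/snapshot names and the mbuffer sizes below
are placeholders, not anything Brent posted):

  # receiving host: listen on a TCP port and feed the stream to zfs recv
  mbuffer -s 128k -m 512M -I 9090 | zfs recv -d backuppool

  # sending host: full send through mbuffer instead of ssh
  zfs send tank/fs@snap1 | mbuffer -s 128k -m 512M -O recvhost:9090

  # incremental send of the same dataset over ssh, for comparison
  zfs send -i tank/fs@snap1 tank/fs@snap2 | ssh recvhost zfs recv -d backuppool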

- river.
-BEGIN PGP SIGNATURE-

iD4DBQFJeZ8GIXd7fCuc5vIRAjPeAJ9Ed9AdwcTWdqkAizVqIPp1qUyNtACY8DA6
a9zguVE8f/TZ6pH/Haa4/Q==
=vf9T
-END PGP SIGNATURE-
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Is scrubbing safe in 101b? (OpenSolaris 2008.11)

2009-01-23 Thread David Dyer-Bennet
I thought I'd noticed that my crashes tended to occur when I was running a
scrub, and saw at least one open bug that was scrub-related that could
cause such a crash.  However, I eventually tracked my problem down (as it
got worse) to a bad piece of memory (been nearly a week since I replaced
the memory, and no more problems).

Which leaves me wondering, how safe is running a scrub?  Scrub is one of
the things that made ZFS so attractive to me, and my automatic reaction
when I first hook up the data disks during a recovery is "run a scrub!".
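
(For the record, the operation in question is nothing more than the following;
"tank" is a placeholder pool name:)

  zpool scrub tank        # start a scrub of the whole pool
  zpool status -v tank    # watch progress and see any errors found or repaired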
-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is scrubbing safe in 101b? (OpenSolaris 2008.11)

2009-01-23 Thread Casper . Dik

I thought I'd noticed that my crashes tended to occur when I was running a
scrub, and saw at least one open bug that was scrub-related that could
cause such a crash.  However, I eventually tracked my problem down (as it
got worse) to a bad piece of memory (been nearly a week since I replaced
the memory, and no more problems).

I had a problem that I think was a bad motherboard; it too panicked
during a scrub, and it even claimed the scrub had finished (it jumped from 50% to
finished immediately).

I replaced the system (motherboard, hard disk) and re-ran the scrub; no
problem with the scrub that time, and it took about the amount of time it should have.

Which leaves me wondering, how safe is running a scrub?  Scrub is one of
the things that made ZFS so attractive to me, and my automatic reaction
when I first hook up the data disks during a recovery is "run a scrub!".


If your memory is bad, anything can happen.  A scrub rewrites data it
believes is bad; but it can be the case that the disk is fine and only the
memory is bad.  Then, if the data is replicated, it can be read, "repaired" and
rewritten through that bad memory, so it is possible to write incorrect data
back to disk (and if the checksum has to be recomputed as well, then oops).
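
To see what a scrub actually found, and to reset the error counters once a
hardware fault such as bad memory has been fixed, something like this (the pool
name "tank" is a placeholder):

  zpool status -v tank   # per-device READ/WRITE/CKSUM counts, plus any affected files
  zpool clear tank       # clear the error counters after the underlying fault is repaired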

Casper

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SSD drives in Sun Fire X4540 or X4500 for dedicated ZIL device

2009-01-23 Thread Bob Friesenhahn
On Thu, 22 Jan 2009, Ross wrote:

 However, now I've written that, Sun use SATA (SAS?) SSD's in their 
 high end fishworks storage, so I guess it definitely works for some 
 use cases.

But the fishworks (Fishworks is a development team, not a product) 
write cache device is not based on FLASH.  It is based on DRAM.  The 
difference is like night and day. Apparently there can also be a read 
cache which is based on FLASH.

Bob
==
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SSD drives in Sun Fire X4540 or X4500 for dedicated ZIL device

2009-01-23 Thread Ross Smith
That's my understanding too.  One (STEC?) drive as a write cache,
basically a write optimised SSD.  And cheaper, larger, read optimised
SSD's for the read cache.

I thought it was an odd strategy until I read into SSD's a little more
and realised you really do have to think about your usage cases with
these.  SSD's are very definitely not all alike.
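
On a plain (Open)Solaris box the same split looks roughly like this - pool and
device names below are made up; the write-optimised SSD goes in as a separate
log (slog) device and the read-optimised one becomes an L2ARC cache device:

  zpool add tank log c2t0d0     # write-optimised SSD as a dedicated ZIL device
  zpool add tank cache c2t1d0   # larger, read-optimised SSD as a read cache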


On Fri, Jan 23, 2009 at 4:33 PM, Greg Mason gma...@msu.edu wrote:
 If I'm not mistaken (and somebody please correct me if I'm wrong), the Sun
 7000 series storage appliances (the Fishworks boxes) use enterprise SSDs,
 with DRAM caching. One such product is made by STEC.

 My understanding is that the Sun appliances use one SSD for the ZIL, and one
 as a read cache. For the 7210 (which is basically a Sun Fire X4540), that
 gives you 46 disks and 2 SSDs.

 -Greg


 Bob Friesenhahn wrote:

 On Thu, 22 Jan 2009, Ross wrote:

 However, now I've written that, Sun use SATA (SAS?) SSD's in their high
 end fishworks storage, so I guess it definitely works for some use cases.

 But the fishworks (Fishworks is a development team, not a product) write
 cache device is not based on FLASH.  It is based on DRAM.  The difference is
 like night and day. Apparently there can also be a read cache which is based
 on FLASH.

 Bob
 ==
 Bob Friesenhahn
 bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is scrubbing safe in 101b? (OpenSolaris 2008.11)

2009-01-23 Thread David Dyer-Bennet

On Fri, January 23, 2009 09:52, casper@sun.com wrote:

Which leaves me wondering, how safe is running a scrub?  Scrub is one of
the things that made ZFS so attractive to me, and my automatic reaction
when I first hook up the data disks during a recovery is "run a scrub!".


 If your memory is bad, anything can happen.  A scrub rewrites data it
 believes is bad; but it can be the case that the disk is fine and only the
 memory is bad.  Then, if the data is replicated, it can be read, "repaired" and
 rewritten through that bad memory, so it is possible to write incorrect data
 back to disk (and if the checksum has to be recomputed as well, then oops).

The memory was ECC, so it *should* have mostly detected problems early
enough to avoid writing bad data.  And so far nothing has been detected as
bad in the pool during light use.  But I haven't yet run a scrub since
fixing the memory, so I have no idea what horrors may be lurking in wait.

The pool is two mirror vdevs, and then I have two backups on external hard
drives, and then I have two sets of optical disks of the photos, one of
them off-site (I'd lose several months of photos if I had to fall back to
the optical disks, I'm a bit behind there).  So I'm not yet in great fear
of actually losing anything, and have very little risk of actually losing
a LOT.

But what I'm wondering is, are there known bugs in 101b that make
scrubbing inadvisable with that code?  I'd love to *find out* what horrors
may be lurking.

-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SSD drives in Sun Fire X4540 or X4500 for dedicated ZIL device

2009-01-23 Thread Greg Mason
If I'm not mistaken (and somebody please correct me if I'm wrong), the 
Sun 7000 series storage appliances (the Fishworks boxes) use enterprise 
SSDs, with DRAM caching. One such product is made by STEC.

My understanding is that the Sun appliances use one SSD for the ZIL, and 
one as a read cache. For the 7210 (which is basically a Sun Fire X4540), 
that gives you 46 disks and 2 SSDs.

-Greg


Bob Friesenhahn wrote:
 On Thu, 22 Jan 2009, Ross wrote:
 
 However, now I've written that, Sun use SATA (SAS?) SSD's in their 
 high end fishworks storage, so I guess it definitely works for some 
 use cases.
 
 But the fishworks (Fishworks is a development team, not a product) 
 write cache device is not based on FLASH.  It is based on DRAM.  The 
 difference is like night and day. Apparently there can also be a read 
 cache which is based on FLASH.
 
 Bob
 ==
 Bob Friesenhahn
 bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 
 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is scrubbing safe in 101b? (OpenSolaris 2008.11)

2009-01-23 Thread Glenn Lagasse
* David Dyer-Bennet (d...@dd-b.net) wrote:
 
 On Fri, January 23, 2009 09:52, casper@sun.com wrote:
 
 Which leaves me wondering, how safe is running a scrub?  Scrub is one of
 the things that made ZFS so attractive to me, and my automatic reaction
 when I first hook up the data disks during a recovery is "run a scrub!".
 
 
  If your memory is bad, anything can happen.  A scrub rewrites data it
  believes is bad; but it can be the case that the disk is fine and only the
  memory is bad.  Then, if the data is replicated, it can be read, "repaired" and
  rewritten through that bad memory, so it is possible to write incorrect data
  back to disk (and if the checksum has to be recomputed as well, then oops).
 
 The memory was ECC, so it *should* have mostly detected problems early
 enough to avoid writing bad data.  And so far nothing has been detected as
 bad in the pool during light use.  But I haven't yet run a scrub since
 fixing the memory, so I have no idea what horrors may be lurking in wait.
 
 The pool is two mirror vdevs, and then I have two backups on external hard
 drives, and then I have two sets of optical disks of the photos, one of
 them off-site (I'd lose several months of photos if I had to fall back to
 the optical disks, I'm a bit behind there).  So I'm not yet in great fear
 of actually losing anything, and have very little risk of actually losing
 a LOT.
 
 But what I'm wondering is, are there known bugs in 101b that make
 scrubbing inadvisable with that code?  I'd love to *find out* what horrors
 may be lurking.

There's nothing in the release notes for 2008.11 (based on 101b) about
issues running scrub.  I've been using 101b for some time now and
haven't seen or heard of any issues running scrub.

There are always bugs.  But I'm pretty certain there isn't a known 'zfs
scrub is inadvisable under any and all conditions' bug lying about.
I've certainly not heard of such a thing (and it would be pretty big
news for 2008.11 if true).

Cheers,

-- 
Glenn
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is scrubbing safe in 101b? (OpenSolaris 2008.11)

2009-01-23 Thread David Dyer-Bennet

On Fri, January 23, 2009 12:01, Glenn Lagasse wrote:
 * David Dyer-Bennet (d...@dd-b.net) wrote:

 But what I'm wondering is, are there known bugs in 101b that make
 scrubbing inadvisable with that code?  I'd love to *find out* what
 horrors
 may be lurking.

 There's nothing in the release notes for 2008.11 (based on 101b) about
 issues running scrub.  I've been using 101b for some time now and
 haven't seen or heard of any issues running scrub.

 There are always bugs.  But I'm pretty certain there isn't a known 'zfs
 scrub is inadvisable under any and all conditions' bug lying about.
 I've certainly not heard of such a thing (and it would be pretty big
 news for 2008.11 if true).

Thanks.  I appreciate the qualifications, and agree that there are,
indeed, always bugs.  Okay; I think I'll make one last check of my own
through the buglist (so it won't be your fault if I muck up my data :-)),
and if nothing turns up that scares me I will run the scrub and see what
happens.  Hmmm; maybe update ONE of the two backups first.  It's at times
like this that I really miss relatively cheap backup tapes (the current tapes
I've seen aren't exactly cheap).
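
For what it's worth, refreshing one of those external backups would be roughly
the following, assuming the external drive holds its own pool; the pool and
snapshot names here are invented:

  # placeholder names: local pool "tank", backup pool on the external drive "backup1"
  zfs snapshot -r tank@2009-01-23
  zfs send -R -i tank@2009-01-10 tank@2009-01-23 | zfs recv -Fd backup1
  # -F rolls the backup back to the last common snapshot before applying the increment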

-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Bug report: disk replacement confusion

2009-01-23 Thread Scott L. Burson
Yes, everything seems to be fine, but that was still scary, and the fix was not 
completely obvious.  At the very least, I would suggest adding text such as the 
following to the page at http://www.sun.com/msg/ZFS-8000-FD :

"When physically replacing the failed device, it is best to use the same 
controller port, so that the new device will have the same device name as the 
failed one."
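
Put another way, the two replacement cases look like this (device names below
are only examples):

  # new disk inserted in the same slot, so it inherits the old device name:
  zpool replace tank c1t4d0

  # new disk on a different controller port, so both names must be given:
  zpool replace tank c1t4d0 c2t0d0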

-- Scott
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Import Problem

2009-01-23 Thread Michael McKnight

--- On Tue, 12/30/08, Andrew Gabriel agabr...@opensolaris.org wrote:
If you were doing a rolling upgrade, I suspect the old disks are all
horribly out of sync with each other?

If that is the problem, then if the filesystem(s) have a snapshot that
existed when all the old disks were still online, I wonder if it might
be possible to roll them back to it by hand, so it looks like the 
current live filesystem? I don't know if this is possible with zdb.


What I meant is that I rolled the new drives in one at a time: replace a 
drive, let it resilver, replace another, let it resilver, and so on.  The filesystems 
were not mounted at the time, so the data should have been static.  The only 
thing that changed (from what I can tell) is the zpool labels.
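
In other words, the sequence was essentially this (device names below are
placeholders):

  zpool replace tank c1t0d0 c2t0d0   # swap in the first new drive
  zpool status tank                  # wait for "resilver completed" before continuing
  zpool replace tank c1t1d0 c2t1d0   # then the next drive, and so on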

Thanks,
Michael



  
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Import Problem

2009-01-23 Thread Michael McKnight
Yes, but I can't export a pool that has never been imported.  These drives are 
no longer connected to their original system, and at this point, even when I connect 
them back to their original system, the results are the same.

Thanks,
Michael


--- On Tue, 12/30/08, Weldon S Godfrey 3 wel...@excelsus.com wrote:
 
 Did you try zpool export 1st?


  
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs smb public share, files created not public

2009-01-23 Thread Mark Shellenbaum
Roger wrote:
 Hi!
 I'm running OpenSolaris b101 and I've made a ZFS pool called tank and a 
 filesystem inside of it, tank/public, which I've shared with SMB:
 
 zfs set sharesmb=on tank/public
 
 I'm using the Solaris SMB/CIFS server, not Samba.
 
 The problem is this: when I connect and create a file, it's readable by 
 everyone, but it should be 777.  As it is, if I create a file, someone else can't 
 erase it.  How do I get this to work?
 
 cheers
 -gob

It would be better to ask this question in cifs-disc...@opensolaris.org

I believe the following should get you the results you want.

# chmod A=everyone@:full_set:fd:allow /tank/public

then all CIFS-created files will have full access, and anyone can 
read, write, delete, ...

   -Mark
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] OpenSolaris Storage Summit Feb 23, 2009 San Francisco

2009-01-23 Thread Peter Buckingham
Hi All,

sorry for all the duplicates. Feel free to pass on to other interested
parties.

The OpenSolaris Storage Community is holding a Storage Summit on
February 23 at the Grand Hyatt San Francisco, prior to the FAST
conference.

The registration wiki is here:

https://wikis.sun.com/display/OpenSolaris/OpenSolaris+Storage+Summit+200902

We are still sorting out keynote speakers and the exact format. On the
wiki it's possible to suggest topics.

We will also be hosting open Community Calls on 1/26 and 2/17, details
to follow.

If you are interested in helping organise or anything related to this
event contact me:

   p...@sun.com

So once again the details:

   OpenSolaris Storage Summit
   February 23, 2009
   Grand Hyatt San Francisco
   345 Stockton Street,
   San Francisco, California, USA 94108

thanks and see you there!

peter
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] moving zfs root rpool between systems

2009-01-23 Thread John Arden
I have two 280R systems.  System A has Solaris 10u6, and its (2) drives 
are configured as a ZFS rpool, and are mirrored.  I would like to pull 
these drives, and move them to my other 280, system B, which is 
currently hard drive-less.

Although unsupported by Sun, I have done this before without issue using 
UFS.

I have also successfully moved zpools from system to system, using 
zpool export and zpool import.  This is unique, at least to me, as the root 
file system and the OS are in the rpool.

Is this possible?

John
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS upgrade mangled my share

2009-01-23 Thread Colin Johnson
I was having CIFS problems on my Mac, so I upgraded to build 105.
After getting all my shares populated with data I ran zpool scrub on
the raidz array, and it told me the pool version was out of date, so I
upgraded it.

One of my shares is now inaccessible and I cannot even delete it :(


 r...@bitchko:/nas1/backups# zfs list -r /nas1/backups/
 NAME   USED  AVAIL  REFER  MOUNTPOINT
 nas1/backups  33.3K  1.04T  33.3K  /nas1/backups

 r...@bitchko:/# zfs destroy /nas1/backups/
 cannot open '/nas1/backups/': invalid dataset name

 r...@bitchko:/nas1# rm -Rf backups/
 rm: cannot remove directory `backups/': Device busy



This is pretty scary for something that's storing valuable data.

Any ideas how I can destroy the dataset and recreate it?

Thanks

Colin
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS encryption?? - [Fwd: [osol-announce] SXCE Build 105 available

2009-01-23 Thread Jerry K
It was rumored that Nevada build 105 would have ZFS encrypted file 
systems integrated into the main source.

In reviewing the Change logs (URL's below) I did not see anything 
mentioned that this had come to pass.  It's going to be another week 
before I have a chance to play with b105.

Does anyone know specifically if b105 has ZFS encryption?

Thanks,

Jerry


 Original Message 
Subject: [osol-announce] SXCE Build 105 available
Date: Fri, 09 Jan 2009 08:58:40 -0800


Please find the links to SXCE Build 105 at:

 http://www.opensolaris.org/os/downloads/

This is still a DVD only release.

-
wget work around:

  http://wikis.sun.com/pages/viewpage.action?pageId=28448383

---
Changelogs:

ON (The kernel, drivers, and core utilities):

  http://dlc.sun.com/osol/on/downloads/b105/on-changelog-b105.html

X Window System:

http://opensolaris.org/os/community/x_win/changelogs/changelogs-nv_100/

- Derek
___
opensolaris-announce mailing list
opensolaris-annou...@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/opensolaris-announce
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] mirror rpool

2009-01-23 Thread mijenix
Richard Elling wrote:
 mijenix wrote:
 yes, that's the way zpool likes it

 I think I have to understand how (Open)Solaris creates disks, or how 
 partitioning works under OpenSolaris. Do you know of any guide or howto?
   
 
 We've tried to make sure the ZFS Admin Guide covers these things, including
 the procedure for mirroring the root pool by hand (Jumpstart can do it
 automatically :-)
 http://www.opensolaris.org/os/community/zfs/docs/zfsadmin.pdf
 
 If it does not meet your needs, please let us know.
 -- richard
 

ok, ok I posted a newbie question, I got it. Holy RTFM ;-)
Thanks for the links and suggestions.
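
For the archives, the by-hand procedure in the Admin Guide boils down to
roughly the following sketch (disk names are placeholders, and x86 systems use
installgrub rather than installboot - see the guide for the full details):

  # copy the existing root disk's label to the new disk (device names assumed)
  prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2

  # attach the second disk to the root pool, turning it into a mirror
  zpool attach rpool c0t0d0s0 c0t1d0s0

  # install boot blocks on the new half of the mirror (SPARC shown)
  installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0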


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Changing from ZFS back to HFS+

2009-01-23 Thread Jason Todd Slack-Moehrle
Hi All,

Since switching to ZFS I get a lot of "beach balls". I think for
productivity's sake I should switch back to HFS+.  My home directory was on
this ZFS partition.

I backed up my data to another drive and tried using Disk Utility to select
my ZFS partition, unmount it, and format just that partition as HFS+. This
failed. I manually unmounted my ZFS partition using zfs unmount, and it
unmounted, but Disk Utility still gives an error.

Can I do this or am I looking at re-partitioning the whole drive and
starting from scratch with Leopard, BootCamp partition and an HFS+ partition
for my data?

Or, is a new release coming out that might relieve me of some of these
issues?

Thanks,
-Jason

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] replace same sized disk fails with too small error

2009-01-23 Thread Paul Schlie
It also wouldn't be a bad idea for ZFS to verify that drives designated as
hot spares in fact have sufficient capacity to be compatible replacements
for a particular configuration, before they are critically required.  Drives
that otherwise appear to have equivalent capacity may not, and that is not
something you want to discover for the first time while attempting to replace
a failed drive.
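
Until ZFS does that check up front, verifying a candidate spare by hand before
adding it is about all one can do; something like the following (pool and
device names are examples):

  prtvtoc /dev/rdsk/c3t0d0s2 | grep sectors   # note the accessible sector count
  zpool add tank spare c3t0d0                 # then add it as a hot spare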


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool import fails to find pool

2009-01-23 Thread James Nord
Hi all,

I moved from Sol 10 Update4 to update 6.

Before doing this I exported both of my zpools, and replaced the discs 
containing the UFS root with two new discs (these discs did not have any 
zpool/zfs info on them and are RAID-mirrored in hardware).

Once I had installed update 6 I did a zpool import, but it only showed (and was 
able to import) one of the two pools.

Looking at dmesg it appears as though I have lost one drive (although they are 
all physically accounted for).  However, both my pools are running raidz2, so 
even if I had lost one disc (another matter) I would still expect to be able to 
import them.

If I swap back to my ufs root discs for update 4 then I can see the pool with 
import...

I'm stuck as to what to do next?  Anyone any ideas?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS upgrade mangled my share

2009-01-23 Thread Tim Haley
Colin Johnson wrote:
 I was having CIFS problems on my Mac, so I upgraded to build 105.
 After getting all my shares populated with data I ran zpool scrub on
 the raidz array, and it told me the pool version was out of date, so I
 upgraded it.
 
 One of my shares is now inaccessible and I cannot even delete it :(
 
 r...@bitchko:/nas1/backups# zfs list -r /nas1/backups/
 NAME   USED  AVAIL  REFER  MOUNTPOINT
 nas1/backups  33.3K  1.04T  33.3K  /nas1/backups
 
 r...@bitchko:/# zfs destroy /nas1/backups/
 cannot open '/nas1/backups/': invalid dataset name
 

There's a typo there; you would have to do

zfs destroy nas1/backups

Unfortunately, you can't use the mountpoint; you have to name the dataset.
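
If the dataset name isn't obvious from the mountpoint, zfs list will resolve it
for you, e.g. (using the names from your listing):

  zfs list -o name /nas1/backups   # prints the dataset backing that mountpoint
  zfs destroy nas1/backups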

-tim

 r...@bitchko:/nas1# rm -Rf backups/
 rm: cannot remove directory `backups/': Device busy

 
 
 This is pretty scary for something that's storing valuable data.
 
 Any ideas how I can destroy the dataset and recreate it?
 
 Thanks
 
 Colin
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS encryption?? - [Fwd: [osol-announce] SXCE Build 105 available

2009-01-23 Thread Tim Haley
Jerry K wrote:
 It was rumored that Nevada build 105 would have ZFS encrypted file 
 systems integrated into the main source.
 
 In reviewing the Change logs (URL's below) I did not see anything 
 mentioned that this had come to pass.  It's going to be another week 
 before I have a chance to play with b105.
 
 Does anyone know specifically if b105 has ZFS encryption?
 
It does not.

-tim

 Thanks,
 
 Jerry
 
 
  Original Message 
 Subject: [osol-announce] SXCE Build 105 available
 Date: Fri, 09 Jan 2009 08:58:40 -0800
 
 
 Please find the links to SXCE Build 105 at:
 
  http://www.opensolaris.org/os/downloads/
 
 This is still a DVD only release.
 
 -
 wget work around:
 
   http://wikis.sun.com/pages/viewpage.action?pageId=28448383
 
 ---
 Changelogs:
 
 ON (The kernel, drivers, and core utilities):
 
   http://dlc.sun.com/osol/on/downloads/b105/on-changelog-b105.html
 
 X Window System:
 
 http://opensolaris.org/os/community/x_win/changelogs/changelogs-nv_100/
 
 - Derek
 ___
 opensolaris-announce mailing list
 opensolaris-annou...@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/opensolaris-announce
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool import fails to find pool

2009-01-23 Thread Andrew Gabriel
James Nord wrote:
 Hi all,

 I moved from Sol 10 Update4 to update 6.

 Before doing this I exported both of my zpools, and replaced the discs 
 containing the UFS root with two new discs (these discs did not have any 
 zpool/zfs info on them and are RAID-mirrored in hardware).

 Once I had installed update 6 I did a zpool import, but it only showed (and was 
 able to import) one of the two pools.

 Looking at dmesg it appears as though I have lost one drive (although they 
 are all physically accounted for).  However, both my pools are running raidz2, 
 so even if I had lost one disc (another matter) I would still expect to be 
 able to import them.

 If I swap back to my ufs root discs for update 4 then I can see the pool with 
 import...

 I'm stuck as to what to do next?  Anyone any ideas?
   

Look at the list of disks in the output of format(1M) in Update 4, 
compared with Update 6.
Are there any disks missing? (It shouldn't matter if the order is 
different.)

If there is a disk missing, is it on a different type of controller from 
the others? (Just wondering if a disk driver might have stopped working 
for some reason between the two releases.)
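
One rough way to gather that, and to point the import directly at the device
nodes (the paths below are just the defaults), would be:

  format < /dev/null        # list every disk the OS currently sees, then exit
  zpool import              # show pools found on the default device paths
  zpool import -d /dev/dsk  # search an explicit device directory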

-- 
Andrew
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool status -x strangeness

2009-01-23 Thread Blake
A little gotcha that I found in my 10u6 update process was that 'zpool
upgrade [poolname]' is not the same as 'zfs upgrade
[poolname]/[filesystem(s)]'

What does 'zfs upgrade' say?  I'm not saying this is the source of
your problem, but it's a detail that seemed to affect stability for
me.
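
In other words, the two commands touch different things; roughly (the pool
name "tank" is a placeholder):

  zpool upgrade -v       # list supported pool versions
  zpool upgrade tank     # upgrade the pool (on-disk SPA) format
  zfs upgrade            # show the filesystem (ZPL) version of each dataset
  zfs upgrade -r tank    # upgrade the datasets in the pool, recursively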


On Thu, Jan 22, 2009 at 7:25 AM, Ben Miller mil...@eecis.udel.edu wrote:
 The pools are upgraded to version 10.  Also, this is on Solaris 10u6.

 # zpool upgrade
 This system is currently running ZFS pool version 10.

 All pools are formatted using this version.

 Ben

 What's the output of 'zfs upgrade' and 'zpool upgrade'?  (I'm just
 curious - I had a similar situation which seems to be resolved now
 that I've gone to Solaris 10u6 or OpenSolaris 2008.11).



 On Wed, Jan 21, 2009 at 2:11 PM, Ben Miller
 mil...@eecis.udel.edu wrote:
  Bug ID is 6793967.
 
  This problem just happened again.
  % zpool status pool1
    pool: pool1
   state: DEGRADED
   scrub: resilver completed after 0h48m with 0 errors on Mon Jan  5 12:30:52 2009
  config:

          NAME           STATE     READ WRITE CKSUM
          pool1          DEGRADED     0     0     0
            raidz2       DEGRADED     0     0     0
              c4t8d0s0   ONLINE       0     0     0
              c4t9d0s0   ONLINE       0     0     0
              c4t10d0s0  ONLINE       0     0     0
              c4t11d0s0  ONLINE       0     0     0
              c4t12d0s0  REMOVED      0     0     0
              c4t13d0s0  ONLINE       0     0     0

  errors: No known data errors
 
  % zpool status -x
  all pools are healthy
  %
  # zpool online pool1 c4t12d0s0
  % zpool status -x
    pool: pool1
   state: ONLINE
  status: One or more devices is currently being resilvered.  The pool will
          continue to function, possibly in a degraded state.
  action: Wait for the resilver to complete.
   scrub: resilver in progress for 0h0m, 0.12% done, 2h38m to go
  config:

          NAME           STATE     READ WRITE CKSUM
          pool1          ONLINE       0     0     0
            raidz2       ONLINE       0     0     0
              c4t8d0s0   ONLINE       0     0     0
              c4t9d0s0   ONLINE       0     0     0
              c4t10d0s0  ONLINE       0     0     0
              c4t11d0s0  ONLINE       0     0     0
              c4t12d0s0  ONLINE       0     0     0
              c4t13d0s0  ONLINE       0     0     0

  errors: No known data errors
  %
 
  Ben
 
 --
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Raidz1 faulted with single bad disk. Requesting assistance.

2009-01-23 Thread Blake
I've seen reports of a recent Seagate firmware update bricking drives again.

What's the output of 'zpool import' from the LiveCD?  It sounds like
more than 1 drive is dropping off.



On Thu, Jan 22, 2009 at 10:52 PM, Brad Hill b...@thosehills.com wrote:
 I would get a new 1.5 TB and make sure it has the new firmware and replace
 c6t3d0 right away - even if someone here comes up with a magic solution, you
 don't want to wait for another drive to fail.

 The replacement disk showed up today but I'm unable to replace the one marked 
 UNAVAIL:

 r...@blitz:~# zpool replace tank c6t3d0
 cannot open 'tank': pool is unavailable

 I would in this case also immediately export the pool (to prevent any
 write attempts) and see about a firmware update for the failed drive
 (probably need windows for this).

 While I didn't export first, I did boot with a livecd and tried to force the 
 import with that:

 r...@opensolaris:~# zpool import -f tank
 internal error: Bad exchange descriptor
 Abort (core dumped)

 Hopefully someone on this list understands what situation I am in and how to 
 resolve it. Again, many thanks in advance for any suggestions you all have to 
 offer.
 --
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Changing from ZFS back to HFS+

2009-01-23 Thread Blake
This is primarily a list for OpenSolaris ZFS - OS X is a little different ;)

However, I think you need to do a 'sudo zpool destroy [poolname]' from
Terminal.app

Be warned, you can't go back once you have done this!



On Sun, Jan 18, 2009 at 4:42 PM, Jason Todd Slack-Moehrle
mailingli...@mailnewsrss.com wrote:
 Hi All,

 Since switching to ZFS I get a lot of "beach balls". I think for
 productivity's sake I should switch back to HFS+.  My home directory was on
 this ZFS partition.

 I backed up my data to another drive and tried using Disk Utility to select
 my ZFS partition, unmount it, and format just that partition as HFS+. This
 failed. I manually unmounted my ZFS partition using zfs unmount, and it
 unmounted, but Disk Utility still gives an error.

 Can I do this or am I looking at re-partitioning the whole drive and
 starting from scratch with Leopard, BootCamp partition and an HFS+ partition
 for my data?

 Or, is a new release coming out that might relieve me of some of these
 issues?

 Thanks,
 -Jason

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] replace same sized disk fails with too small error

2009-01-23 Thread Blake
+1

On Thu, Jan 22, 2009 at 11:12 PM, Paul Schlie sch...@comcast.net wrote:
 It also wouldn't be a bad idea for ZFS to verify that drives designated as
 hot spares in fact have sufficient capacity to be compatible replacements
 for a particular configuration, before they are critically required.  Drives
 that otherwise appear to have equivalent capacity may not, and that is not
 something you want to discover for the first time while attempting to replace
 a failed drive.


 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS encryption?? - [Fwd: [osol-announce] SXCE Build 105 available

2009-01-23 Thread Mario Goebbels
 Does anyone know specifically if b105 has ZFS encryption?

IIRC it has been pushed back to b109.

-mg



signature.asc
Description: OpenPGP digital signature
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] moving zfs root rpool between systems

2009-01-23 Thread Tim
On Tue, Jan 6, 2009 at 4:23 PM, John Arden jarden.ar...@oryx.cc wrote:

 I have two 280R systems.  System A has Solaris 10u6, and its (2) drives
 are configured as a ZFS rpool, and are mirrored.  I would like to pull
 these drives, and move them to my other 280, system B, which is
 currently hard drive-less.

 Although unsupported by Sun, I have done this before without issue using
 UFS.

 I have also successfully moved zpools from system to system, using
 zpool export and zpool import. This is unique, at least to me, as the root
 file system and the OS are in the rpool.

 Is this possible?

 John



Same basic concept as the UFS move.  I can't think of anything that would
prevent you from doing so.  More to the point, if you pull the drives from
the current 280, stick them into the new 280, and give it a shot, you aren't
going to hurt anything doing so.  If you're really paranoid, issue a
snapshot before you pull the drives and power down the system.
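
That snapshot is nothing more than something like this (the snapshot name is
invented):

  zfs snapshot -r rpool@pre-move   # recursive snapshot of everything in the root pool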

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss