[zfs-discuss] Re-write partition table cause the whole system crash

2007-04-10 Thread Xiangmin Ru
I'm using Open Solaris 10 to do some tests on ZFS and zpool. I've encountered a 
situation that caused the whole system to crash.
There are two SCSI disks connected to my computer: c1t0d0 is the bootable 
disk, and c1t1d0 is used to test ZFS and zpool.

1. Formatting c1t1d0 into four partitions, using fdisk from within format:
             Total disk size is 8924 cylinders
             Cylinder size is 16065 (512 byte) blocks

                                                 Cylinders
      Partition   Status    Type          Start   End    Length    %
      =========   ======    ============  =====   ====   ======   ===
          1                 Solaris2       4463   6693     2231    25
          2                 Solaris2       6694   7585      892    10
          3                 Solaris2       7586   8477      892    10
          4       Active    Solaris2          1   4462     4462    50

2. Creating a Zpool using the third partition:
# zpool create tank c1t1d0p3
# zpool list
NAME     SIZE    USED    AVAIL    CAP  HEALTH   ALTROOT
tank    6.81G   51.5K    6.81G     0%  ONLINE   -
# 

3. Deleting all the partitions (via fdisk again):
             Total disk size is 8924 cylinders
             Cylinder size is 16065 (512 byte) blocks

                                                 Cylinders
      Partition   Status    Type          Start   End    Length    %
      =========   ======    ============  =====   ====   ======   ===

WARNING: no partitions are defined!

4. Destroying the zpool:
# zpool destroy tank

After this command, the computer suddenly rebooted and stopped at the GRUB prompt.

My questions are:
Why aren't there any warnings when, or after, I modify the disk's partition table?
Why do modifications to the second disk affect the first disk?
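
For reference, a quick way to double-check which device a pool actually sits 
on before touching a partition table; just a sketch, reusing the device names 
from the example above:

# zpool status tank                  (the config section lists c1t1d0p3)
# fdisk -W - /dev/rdsk/c1t1d0p0      (dumps the current fdisk table to stdout)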

Thanks for any help!
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: other panic caused by ZFS

2007-04-10 Thread Gino
 Gino,
 
 Were you able to recover by setting zfs_recover?
 

Unfortunately no :(
Setting zfs_recover did not allow us to recover any of the 5 corrupted zpools 
we had.

Please note that we lost this pool after a panic caused by trying to import a 
corrupted zpool!

tnx,
gino
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Unbelievable. an other crashed zpool :(

2007-04-10 Thread Gino
 Gino,
 
 Can you send me the corefile from the zpool command?

This is the only case where we are unable to import a corrupted zpool but do 
not get a kernel panic:

SERVER144@/# zpool import zpool3
internal error: unexpected error 5 at line 773 of ../common/libzfs_pool.c
SERVER144@/#

 This looks like a 
 case where we can't open the device for some reason.
 Are you using a multi-pathing solution other than MPXIO?

No, we are using MPxIO. There are no path or other device failures.
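
For completeness, a hedged sketch of the checks one might run to confirm the 
devices really are reachable (assumes mpathadm is available on this release; 
the pool name is taken from the output above):

# zpool import          (with no arguments: lists importable pools and the
                         state of each of their devices)
# mpathadm list lu      (shows MPxIO logical units and path counts)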

gino
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS improvements

2007-04-10 Thread Gino
Hi All

I'd like to raise two points about ZFS that I think are a must before even 
trying to use it in production:


1) ZFS must stop forcing kernel panics!
As you know, ZFS panics the kernel when a corrupted zpool is found, when it's 
unable to reach a device, and so on...
We need it to just fail with an error message; please stop crashing the 
kernel.


2) We need a way to recover a corrupted ZFS pool by discarding the last 
incomplete transactions.
Please give us zfsck :)


Waiting for comments,
gino
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Contents of transaction group?

2007-04-10 Thread Darren J Moffat

Atul Vidwansa wrote:

Hi,
   I have a few questions about the way a transaction group is created.

1. Is it possible to group transactions related to multiple operations
in the same group? For example, an 'rmdir foo' followed by a 'mkdir bar':
can these end up in the same transaction group?

2. Is it possible for an operation (say, a write()) to occupy multiple
transaction groups?

3. Is it possible to know the thread id(s) for every committed txg_id?


What problem are you trying to solve here?

Why do you think it would be useful to know, after the fact, which threads 
did operations in a given transaction group?
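
(As an aside, txg sync activity itself can be observed with DTrace; a hedged 
sketch, assuming the spa_sync(spa_t *, uint64_t) kernel entry point, although 
it won't answer the per-thread question:)

# dtrace -qn 'fbt::spa_sync:entry {
      printf("%Y pool=%s txg=%d\n", walltimestamp,
             stringof(args[0]->spa_name), args[1]); }'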


--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS boot: a new heads-up

2007-04-10 Thread Robert Milkowski
Hello Lori,

 Any chance of getting 'how_to_netinstall_zfsboot' made public?
 

-- 
Best regards,
 Robert                          mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: ZFS improvements

2007-04-10 Thread William D. Hathaway
There was some discussion of the 'always panic for fatal pool failures' issue 
in April 2006, but I haven't seen whether an actual RFE was generated.
http://mail.opensolaris.org/pipermail/zfs-discuss/2006-April/017276.html
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS vs. rmvolmgr

2007-04-10 Thread Constantin Gonzalez
Hi,

while playing around with ZFS and USB memory sticks or USB harddisks,
rmvolmgr tends to get in the way, which results in a

  can't open /dev/rdsk/cNt0d0p0, device busy

error.

So far, I've just run 'svcadm disable -t rmvolmgr', done my thing, then
run 'svcadm enable rmvolmgr'.

Is there a more elegant approach that tells rmvolmgr to leave certain
devices alone on a per-disk basis?

For instance, I'm now running several USB disks with ZFS pools on them, and
even after restarting rmvolmgr or rebooting, ZFS, the disks and rmvolmgr
get along with each other just fine.

What and how does ZFS tell rmvolmgr that a particular set of disks belongs
to ZFS and should not be treated as removable?

Best regards,
   Constantin

-- 
Constantin Gonzalez                          Sun Microsystems GmbH, Germany
Platform Technology Group, Global Systems Engineering  http://www.sun.de/
Tel.: +49 89/4 60 08-25 91   http://blogs.sun.com/constantin/

Sitz d. Ges.: Sun Microsystems GmbH, Sonnenallee 1, 85551 Kirchheim-Heimstetten
Amtsgericht Muenchen: HRB 161028
Geschaeftsfuehrer: Marcel Schneider, Wolfgang Engels, Dr. Roland Boemer
Vorsitzender des Aufsichtsrates: Martin Haering
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Add mirror to an existing Zpool

2007-04-10 Thread Martin Girard
Hi,

I have a zpool with only one disk. No mirror.
I have some data in the file system.

Is it possible to make my zpool redundant by adding a new disk in the pool
and making it a mirror with the initial disk?
If yes, how?

Thanks

Martin
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Add mirror to an existing Zpool

2007-04-10 Thread Jeremy Teo

Read the man page for zpool. Specifically, zpool attach.

On 4/10/07, Martin Girard [EMAIL PROTECTED] wrote:

Hi,

I have a zpool with only one disk. No mirror.
I have some data in the file system.

Is it possible to make my zpool redundant by adding a new disk in the pool
and making it a mirror with the initial disk?
If yes, how?

Thanks

Martin


This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




--
Regards,
Jeremy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Add mirror to an existing Zpool

2007-04-10 Thread Cindy . Swearingen

Hi Martin,

Yes, you can do this with the zpool attach command.

See the output below.

An example in the ZFS Admin Guide is here:

http://docs.sun.com/app/docs/doc/817-2271/6mhupg6ft?a=view


Cindy

# zpool create mpool c1t20d0
# zpool status mpool
  pool: mpool
 state: ONLINE
 scrub: none requested
config:

        NAME       STATE     READ WRITE CKSUM
        mpool      ONLINE       0     0     0
          c1t20d0  ONLINE       0     0     0

errors: No known data errors
# zpool attach mpool c1t20d0 c1t21d0
# zpool status mpool
  pool: mpool
 state: ONLINE
 scrub: resilver completed with 0 errors on Tue Apr 10 08:52:30 2007
config:

        NAME         STATE     READ WRITE CKSUM
        mpool        ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c1t20d0  ONLINE       0     0     0
            c1t21d0  ONLINE       0     0     0

errors: No known data errors
#


Martin Girard wrote:

Hi,

I have a zpool with only one disk. No mirror.
I have some data in the file system.

Is it possible to make my zpool redundant by adding a new disk in the pool
and making it a mirror with the initial disk?
If yes, how?

Thanks

Martin
 
 
This message posted from opensolaris.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Add mirror to an existing Zpool

2007-04-10 Thread Mark J Musante
On Tue, 10 Apr 2007, Martin Girard wrote:

 Is it possible to make my zpool redundant by adding a new disk in the pool
 and making it a mirror with the initial disk?

Sure, by using zpool attach:

# mkfile 64m /tmp/foo /tmp/bar
# zpool create tank /tmp/foo
# zpool status
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          /tmp/foo  ONLINE       0     0     0

errors: No known data errors
# zpool attach tank /tmp/foo /tmp/bar
# zpool status
  pool: tank
 state: ONLINE
 scrub: resilver completed with 0 errors on Tue Apr 10 10:51:58 2007
config:

        NAME          STATE     READ WRITE CKSUM
        tank          ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            /tmp/foo  ONLINE       0     0     0
            /tmp/bar  ONLINE       0     0     0

errors: No known data errors
#



Regards,
markm
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS boot: a new heads-up

2007-04-10 Thread Lori Alt

Robert Milkowski wrote:

Hello Lori,

 Any chances to get 'how_to_netinstall_zfsboot' to public?
 

  

I'm really close to putting it out there.   I'm updating
the install procedure and tool to support two things
that it didn't support before:

*  setup of a dump slice, since zfs doesn't yet support
   dumping into a zvol.  (A zvol should be used for swap;
   a short sketch follows this list.)
*  splitting the Solaris name space into separate datasets
   for root, /usr, /var, /opt and /export.  This is not required
   at this time, but we are likely to require it, or at least
   strongly recommend it, in the released version, because
   it simplifies some Live Upgrade issues.
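
(For reference, the zvol-for-swap part mentioned in the first bullet is already 
straightforward; a hedged sketch, assuming a pool named tank and a 2 GB volume:)

# zfs create -V 2g tank/swapvol
# swap -a /dev/zvol/dsk/tank/swapvol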

Lori
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: CAD application not working with zfs

2007-04-10 Thread Bart Smaalders

Dirk Jakobsmeier wrote:

Hello Bart,

thanks for your answer. The filesystems for different projects are sized 
between 20 and 400 GB. Those filesystem sizes were no problem on the earlier 
installation (VxFS) and should not be a problem now. I can reproduce this 
error with the 20 GB filesystem.

Regards.
 
 


Are you using nfsv4 for the mount?  Or nfsv3?

Some idea of the failing app's system calls just prior to failure
may yield the answer as to what's causing the problem.  These
problems are usually mishandled error conditions...
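
A hedged sketch of capturing those system calls with truss, assuming you can 
either start the CAD tool under it or attach to its pid (both names below are 
placeholders):

# truss -f -o /tmp/cad.truss /path/to/cad_app ...
or, for an already running process:
# truss -f -o /tmp/cad.truss -p <pid>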

- Bart


--
Bart Smaalders  Solaris Kernel Performance
[EMAIL PROTECTED]   http://blogs.sun.com/barts
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Poor man's backup by attaching/detaching mirror drives on a _striped_ pool?

2007-04-10 Thread Mark J Musante
On Tue, 10 Apr 2007, Constantin Gonzalez wrote:

 Has anybody tried it yet with a striped mirror? What if the pool is
 composed out of two mirrors? Can I attach devices to both mirrors, let
 them resilver, then detach them and import the pool from those?

You'd want to export them, not detach them.  Detaching will overwrite the
vdev labels and make it un-importable.

Off the top of my head (i.e. untested):

 - zpool create tank mirror dev1 dev2 dev3
 - zpool export tank
 - {physically move dev3 to new box}
 - zpool import tank
 - zpool detach tank dev3

On the new box:
 - zpool import tank
 - zpool detach tank dev1
 - zpool detach tank dev2



Regards,
markm
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS vs. rmvolmgr

2007-04-10 Thread Justin Stringfellow



Is there a more elegant approach that tells rmvolmgr to leave certain
devices alone on a per disk basis?


I was expecting there to be something in rmmount.conf to allow a specific device 
or pattern to be excluded but there appears to be nothing. Maybe this is an RFE?



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS improvements

2007-04-10 Thread Eric Schrock
On Tue, Apr 10, 2007 at 12:48:49AM -0700, Gino wrote:
 Hi All
 
 I'd like to expose two points about ZFS that I think are a must before even 
 trying to use it in production:
 
 
 1) ZFS must stop to force kernel panics! 
 As you know ZFS takes to a kernel panic when a corrupted zpool is found or if 
 it's unable to reach
 a device and so on...
 We need to have it just fail with an error message but please stop crashing 
 the kernel.

This is:

6322646 ZFS should gracefully handle all devices failing (when writing)

Which is being worked on.  Using a redundant configuration prevents this
from happening.  See the sketch below.
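
For reference, the kind of redundant configuration meant here is simply a 
mirrored or raidz pool; a minimal sketch with placeholder device names:

# zpool create tank mirror c2t0d0 c2t1d0
or
# zpool create tank raidz c2t0d0 c2t1d0 c2t2d0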

 2) We need a way to recover a corrupted ZFS, trashing the last incompleted 
 transactions.
 Please give us zfsck :)

Please see the ZFS FAQ at:

http://www.opensolaris.org/os/community/zfs/faq/#whynofsck

Writing such a tool is effectively impossible.  For the one known
corruption bug we've encountered (and since fixed), we provided the
'zfs_recover' /etc/system switch, but it only works for that particular
bug.  Without understanding the underlying pathology it's impossible to
fix a ZFS pool.
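
For reference, a hedged sketch of how that switch is set: either as an 
/etc/system line (effective after a reboot), or patched live with mdb on a 
system that is still up:

  set zfs:zfs_recover = 1

# echo 'zfs_recover/W 1' | mdb -kw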

- Eric

--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Poor man's backup by attaching/detaching mirror

2007-04-10 Thread Darren Dunham
 one quick-and-dirty way of backing up a pool that is a mirror of two
 devices is to zpool attach a third one, wait for the resilvering to
 finish, then zpool detach it again.

 The third device then can be used as a poor man's simple backup.

How would you access the data on that device?

-- 
Darren Dunham   [EMAIL PROTECTED]
Senior Technical Consultant TAOShttp://www.taos.com/
Got some Dr Pepper?   San Francisco, CA bay area
  This line left intentionally blank to confuse you. 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Something like spare sectors...

2007-04-10 Thread Mark Maybee

Anton B. Rang wrote:

This sounds a lot like:

6417779 ZFS: I/O failure (write on ...) -- need to
reallocate writes

Which would allow us to retry write failures on
alternate vdevs.


Of course, if there's only one vdev, the write should be retried to a different 
block on the original vdev ... right?


Yes, although it depends on the nature of the write failure.  If the
write failed because the device is no longer available, ZFS will not
continue to try different blocks.

-Mark
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs snapshot issues.

2007-04-10 Thread Mark Maybee

Joseph Barbey wrote:

Matthew Ahrens wrote:

Joseph Barbey wrote:

Robert Milkowski wrote:

JB So, normally, when the script runs, all snapshots finish in maybe a minute
JB total.  However, on Sundays, it continues to take longer and longer.  On
JB 2/25 it took 30 minutes, and this last Sunday, it took 2:11.  The only
JB special thing about Sunday's snapshots is that they are the first ones
JB created since the full backup (using NetBackup) on Saturday.  All other
JB backups are incrementals.

Hmm, do you have the atime property set to off?
Maybe you spend most of the time destroying snapshots due to a much
larger delta caused by atime updates? You can possibly also gain some
performance by setting atime to off.


Yep, atime is set to off for all pools and filesystems.  I looked 
through the other possible properties, and nothing else looked like 
it would really affect things.


One additional weird thing.  My script hits each filesystem 
(email-pool/A..Z) individually, so I can run zfs list -t snapshot and 
find out how long each snapshot actually takes.  Everything runs fine 
until I get to around V or (normally) W.  Then it can take a couple 
of hours on the one FS.  After that, the rest go quickly.


So, what operation exactly is taking a couple of hours on the one 
FS?  The only one I can imagine taking more than a minute would be 
'zfs destroy', but even that should be very rare on a snapshot.  Is it 
always the same FS that takes longer than the rest?  Is the pool busy 
when you do the slow operation?


I've now determined that renaming the previous snapshot seems to be the 
problem in certain instances.


What we are currently doing through the script is to keep 2 weeks of 
daily snapshots of the various pool/filesystems.  These snapshots are 
named {fs}.$Day-1, {fs}.$Day-2, and {fs}.snap.  Specifically, for our 
'V' filesystem, which is created under the email-pool, I will have the 
following snapshots:


  email-pool/[EMAIL PROTECTED]
  email-pool/[EMAIL PROTECTED]
  email-pool/[EMAIL PROTECTED]
  email-pool/[EMAIL PROTECTED]
  email-pool/[EMAIL PROTECTED]
  email-pool/[EMAIL PROTECTED]
  email-pool/[EMAIL PROTECTED]
  email-pool/[EMAIL PROTECTED]
  email-pool/[EMAIL PROTECTED]
  email-pool/[EMAIL PROTECTED]
  email-pool/[EMAIL PROTECTED]
  email-pool/[EMAIL PROTECTED]
  email-pool/[EMAIL PROTECTED]
  email-pool/[EMAIL PROTECTED]

So, my script does the following for each FS (a rough sketch follows this list):
  Check for FS.$Day-2.  If it exists, destroy it.
  Check if there is an FS.$Day-1.  If so, rename it to FS.$Day-2.
  Check for FS.snap.  If so, rename it to FS.$Yesterday-1 (the day it was created).
  Create FS.snap.
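
A hedged sketch of that rotation for a single filesystem; the dataset name is 
taken from the example above, and $DAY1/$DAY2 stand in for whatever day 
suffixes the real script computes:

FS=email-pool/V       # one filesystem; the real script loops over A..Z
# destroy the two-day-old snapshot if it exists
zfs list "$FS@$DAY2" >/dev/null 2>&1 && zfs destroy "$FS@$DAY2"
# age yesterday's snapshot to the two-day-old name
zfs list "$FS@$DAY1" >/dev/null 2>&1 && zfs rename "$FS@$DAY1" "$FS@$DAY2"
# age the current .snap, then take a fresh one
zfs list "$FS@snap" >/dev/null 2>&1 && zfs rename "$FS@snap" "$FS@$DAY1"
zfs snapshot "$FS@snap"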

I added logging to a file, along with the action just run and the time 
that it completed:


  Destroy email-pool/[EMAIL PROTECTED]                                Sun Apr  8 00:01:04 CDT 2007
  Rename email-pool/[EMAIL PROTECTED] email-pool/[EMAIL PROTECTED]    Sun Apr  8 00:01:05 CDT 2007
  Rename email-pool/[EMAIL PROTECTED] email-pool/[EMAIL PROTECTED]    Sun Apr  8 00:54:52 CDT 2007

  Create email-pool/[EMAIL PROTECTED]                                 Sun Apr  8 00:54:53 CDT 2007

Looking at the above, the second rename took from 00:01:05 until 00:54:52, 
so almost 54 minutes.


So, any ideas on why a rename should take so long?  And again, why is 
this only happening on Sunday?  Any other information I can provide that 
might help diagnose this?



This could be an instance of:

6509628 unmount of a snapshot (from 'zfs destroy') is slow

The fact that this bug comes from a destroy op is not relevant; what is
relevant is the required unmount (also required in a rename op).  Has
there been recent activity in the Sunday-1 snapshot (like a backup or
'find' perhaps)?  This will cause the unmount to proceed very slowly.

-Mark
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Renaming a pool?

2007-04-10 Thread Rich Teer
Hi all,

I have a pool called tank/home/foo and I want to rename it to
tank/home/bar.  What's the best way to do this (the zfs and
zpool man pages don't have a rename option)?

One way I can think of is to create a clone of tank/home/foo
called tank/home/bar, and then destroy the former.  Is that
the best (or even only) way?

TIA,

-- 
Rich Teer, SCSA, SCNA, SCSECA, OGB member

CEO,
My Online Home Inventory

Voice: +1 (250) 979-1638
URLs: http://www.rite-group.com/rich
  http://www.myonlinehomeinventory.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Re: zfs destroy snapshot takes hours

2007-04-10 Thread xx
I am having a similar problem: the system hung on a 'zfs destroy' of a 
snapshot, at 50% CPU utilization, running for hours. How can I know if I have 
the same problem? Can you be specific about how to set the kernelbase?
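
One hedged way to see where the hung command is sitting in the kernel (assumes 
mdb -k works on the live system) is to grab its kernel stack:

# echo '::pgrep zfs | ::walk thread | ::findstack -v' | mdb -k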
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Renaming a pool?

2007-04-10 Thread Mark J Musante
On Tue, 10 Apr 2007, Rich Teer wrote:

 I have a pool called tank/home/foo and I want to rename it to
 tank/home/bar.  What's the best way to do this (the zfs and zpool man
 pages don't have a rename option)?

In fact, there is a rename option for zfs:

# zfs create tank/home
# zfs create tank/home/foo
# zfs list
NAME             USED  AVAIL  REFER  MOUNTPOINT
tank            2.73G  5.52G  2.73G  /tank
tank/home         36K  5.52G    18K  /tank/home
tank/home/foo     18K  5.52G    18K  /tank/home/foo
# zfs rename tank/home/foo tank/home/bar
# zfs list
NAME             USED  AVAIL  REFER  MOUNTPOINT
tank            2.73G  5.52G  2.73G  /tank
tank/home         38K  5.52G    20K  /tank/home
tank/home/bar     18K  5.52G    18K  /tank/home/bar
#


Regards,
markm
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Renaming a pool?

2007-04-10 Thread Rich Teer
On Tue, 10 Apr 2007, Mark J Musante wrote:

 On Tue, 10 Apr 2007, Rich Teer wrote:
 
  I have a pool called tank/home/foo and I want to rename it to
  tank/home/bar.  What's the best way to do this (the zfs and zpool man
  pages don't have a rename option)?
 
 In fact, there is a rename option for zfs:

Doh!  That's what I get for reading the man page too fast...  :-/

Many thanks,

-- 
Rich Teer, SCSA, SCNA, SCSECA, OGB member

CEO,
My Online Home Inventory

Voice: +1 (250) 979-1638
URLs: http://www.rite-group.com/rich
  http://www.myonlinehomeinventory.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] # devices in raidz.

2007-04-10 Thread Mike Seda
I noticed that there is still an open bug regarding removing devices 
from a zpool:

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=4852783
Does anyone know if or when this feature will be implemented?


Cindy Swearingen wrote:

Hi Mike,

Yes, outside of the hot-spares feature, you can detach, offline, and 
replace existing devices in a pool, but you can't remove devices, yet.


This feature work is being tracked under this RFE:

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=4852783

Cindy

Mike Seda wrote:

Hi All,
 From reading the docs, it seems that you can add devices 
(non-spares) to a zpool, but you cannot take them away, right?

Best,
Mike


Victor Latushkin wrote:


Maybe something like the slow parameter of VxVM?

   slow[=iodelay]
       Reduces the system performance impact of copy
       operations.  Such operations are usually performed
       on small regions of the volume (normally from 16
       kilobytes to 128 kilobytes).  This option inserts a
       delay between the recovery of each such region.  A
       specific delay can be specified with iodelay as a
       number of milliseconds; otherwise, a default is
       chosen (normally 250 milliseconds).



For modern machines, which *should* be the design point, the channel
bandwidth is underutilized, so why not use it?

NB. At 4 128kByte iops per second, it would take 11 days and 8 hours
to resilver a single 500 GByte drive -- feeling lucky?  In the bad old
days when disks were small, and the systems were slow, this made some
sense.  The better approach is for the file system to do what it needs
to do as efficiently as possible, which is the current state of ZFS.



Well, we are trying to balance the impact of resilvering on running 
applications against the speed of resilvering.

I think that having an option to tell the filesystem to postpone 
full-throttle resilvering until some quieter period of time may help. 
This may be combined with some throttling mechanism so that during a 
quiet period resilvering is done at full speed, and during a busy 
period it continues at reduced speed.  Such an arrangement may be 
useful for customers with e.g. well-defined SLAs.


Wbr,
Victor
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Re: zfs destroy snapshot takes hours

2007-04-10 Thread xx
The release notes:
http://docs.sun.com/app/docs/doc/817-0552/6mgbi4fgg?a=view
say an alternative to fixing the kernelbase is to upgrade to 64-bit, but I'm 
already running on 64-bit SPARC. Maybe I have a different problem: my drives 
have spun down to sleepy mode, yet zfs is still burning coal.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Renaming a pool?

2007-04-10 Thread Darren J Moffat

Rich Teer wrote:

Hi all,

I have a pool called tank/home/foo and I want to rename it to
tank/home/bar.  What's the best way to do this (the zfs and
zpool man pages don't have a rename option)?


Are you sure you have a pool with that name, and not a filesystem in a 
pool with that name?

See zfs(1M); it does have a rename subcommand.


One way I can think of is to create a clone of tank/home/foo
called tank/home/bar, and then destroy the former.  Is that
the best (or even only) way?


zfs rename tank/home/foo tank/home/bar

Would be the best and documented way.

--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Overview (rollup) of recent activity on zfs-discuss

2007-04-10 Thread Eric Boutilier

For background on what this is, see:

http://www.opensolaris.org/jive/message.jspa?messageID=24416#24416
http://www.opensolaris.org/jive/message.jspa?messageID=25200#25200

=
zfs-discuss 03/16 - 03/31
=

Size of all threads during period:

Thread size Topic
--- -
 37   ZFS Boot support for the x86 platform
 22   ZFS memory and swap usage
 19   File level snapshots in ZFS?
 16   ZFS over iSCSI question
 16   6410 expansion shelf
 15   asize is 300MB smaller than lsize - why?
 15   ZFS with raidz
 15   /tmp on ZFS?
 14   migration/acl4 problem
 14   ZFS performance with Oracle
 13   Proposal: ZFS hotplug support and autoconfiguration
 12   error-message from a nexsan-storage
 11   Data Management API
 11   Backup of ZFS Filesystem with ACL 4
 10   Live Upgrade with zfs root?
  9   today panic ...
  9   Zones on large ZFS filesystems
  9   ZFS layout for 10 disk?
  9   Is there any performance problem with hard links in ZFS?
  8   ZFS and Kstats
  8   ZFS and Firewire/USB enclosures
  7   gzip compression support
  7   ZFS overhead killed my ZVOL
  7   ZFS checksum error detection
  7   ZFS and UFS performance
  7   C'mon ARC, stay small...
  6   ditto blocks for use data integrated in b61
  6   How big a write to a regular file is atomic?
  5   status of user delegation
  5   ZFS filesystem online backup question
  5   Pathological ZFS performance
  5   Migrating a pool
  5   HELP!! I can't mount my zpool!!
  5   Gzip compression for ZFS
  4   zfs send speed
  4   update on zfs boot support
  4   symlinks and ditto blocks
  4   lost zfs mirror, need some help to recover
  4   ZFS machine to be reinstalled
  4   ZFS filesystem disappeared after reboot?
  3   zpool/zfs size discrepency
  3   missing features?Could/should zfs support a new ioctl, 
constrained if neede
  3   mirror question
  3   crash during snapshot operations
  3   ZFS ontop of SVM - CKSUM errors
  3   The value of validating your backups...
  3   Proposal: ZFS hotplug supportandautoconfiguration
  3   Large ZFS-bug...
  3   ISCSI + ZFS + NFS
  3   Heads up: 'zpool history' on-disk version change
  3   Fwd: ZFS and Firewire/USB enclosures
  3   Detecting failed drive under MPxIO + ZFS
  2   user id mapping of exported fs
  2   understanding zfs/thunoer bottlenecks?
  2   a 30mb ZFS OS install
  2   ZFS resilver/snap/scrub resetting status?
  2   ZFS mount fails at boot
  2   Recommended setup?
  2   Atomic setting of properties?
  2   (1) zfs list memory usage (2) zfs send/recv safety
  1   zfs destroy snapshot takes hours
  1   s10u3 (125101-03) isci zfs status
  1   mount/umount test and ufs/zfs comparison
  1   intel SSR212CC wasabi
  1   how to delete one mirror of zfs pool?
  1   baby milo bbc bape bathing ape clothing clothes billionaire boys 
club lrg
  1   ZFS performance problems - solved
  1   ZFS and file checksums
  1   ZFS Web administration interface
  1   Why replacing a drive generates writes to other disks?
  1   Samba and ZFS ACL Question
  1   REMINDER: FROSUG March Meeting Announcement (3/29/2007)
  1   Proposal: ZFS hotplug
  1   Pool problem
  1   Is there any performance problem with hard
  1   Assertion raised during zfs share?
  1   log structured vs fixed block fs


Posting activity by person for period:

# of posts  By
--   --
 48   rmilkowski at task.gda.pl (robert milkowski)
 20   richard.elling at sun.com (richard elling)
 17   matthew.ahrens at sun.com (matthew ahrens)
 16   malachid at gmail.com (malachi de Ælfweald)
 13   jeff.sutch at acm.org (js)
 11   mattbreedlove at yahoo.com (matt b)
 10   rheilke at dragonhearth.com (rainer heilke)
  9   roch.bourbonnais at sun.com (roch - pae)
  9   fcusack at fcusack.com (frank cusack)
  8   weeyeh at gmail.com (wee yeh tan)
  8   rang at acm.org (anton b. rang)
  8   mark.shellenbaum at sun.com (mark shellenbaum)
  8   ginoruopolo at hotmail.com (gino ruopolo)
  8   emptysands at gmail.com (nicholas lee)
  8   darren.moffat at sun.com (darren j moffat)
  7   thomas.nau at uni-ulm.de (thomas nau)
  7   

Re: [zfs-discuss] ZFS vs. rmvolmgr

2007-04-10 Thread Artem Kachitchkine



while playing around with ZFS and USB memory sticks or USB harddisks,
rmvolmgr tends to get in the way, which results in a

  can't open /dev/rdsk/cNt0d0p0, device busy


Do you remember exactly what command/operation resulted in this error? It is 
something that tries to open the device exclusively.



So far, I've just said svcadm disable -t rmvolmgr, did my thing, then
said svcadm enable rmvolmgr.


This can't possibly be true, because rmvolmgr does not open devices. You'd need 
to also disable the 'hal' service. Run fuser on your device and you'll see it's 
one of the hal addons that keeps it open:


# ptree | grep hal
114531 /usr/lib/hal/hald --daemon=yes
  114532 hald-runner
114537 /usr/lib/hal/hald-addon-storage
114540 /usr/lib/hal/hald-addon-storage
114558 /usr/lib/hal/hald-addon-storage
# fuser /dev/rdsk/c2t0d0
/dev/rdsk/c2t0d0:   114558o
# truss -p 114558
ioctl(4, 0x040D, 0x08047708)(sleeping...)
^C# grep 'DKIOC|13' /usr/include/sys/dkio.h
#define DKIOCSTATE  (DKIOC|13)  /* Inquire insert/eject state */

HAL needs to know when a disk is hot-removed (even while opened by other 
processes/filesystems) and DKIOCSTATE is the Solaris way of achieving that.



Is there a more elegant approach that tells rmvolmgr to leave certain
devices alone on a per disk basis?

For instance, I'm now running several USB disks with ZFS pools on them, and
even after restarting rmvolmgr or rebooting, ZFS, the disks and rmvolmgr
get along with each other just fine.


I'm confused here. In the beginning you said that something got in the way, but 
now you're saying they get along just fine. Could you clarify?



What and how does ZFS tell rmvolmgr that a particular set of disks belongs
to ZFS and should not be treated as removable?


One possible workaround would be to match against the USB disk's serial number 
and tell HAL to ignore it using an fdi(4) file. For instance, find your USB 
disk in the lshal(1M) output; it will look like this:


udi = '/org/freedesktop/Hal/devices/pci_0_0/pci1028_12c_1d_7/storage_5_0'
  usb_device.serial = 'DEF1061F7B62'  (string)
  usb_device.product_id = 26672  (0x6830)  (int)
  usb_device.vendor_id = 1204  (0x4b4)  (int)
  usb_device.vendor = 'Cypress Semiconductor'  (string)
  usb_device.product = 'USB2.0 Storage Device'  (string)
  info.bus = 'usb_device'  (string)
  info.solaris.driver = 'scsa2usb'  (string)
  solaris.devfs_path = '/[EMAIL PROTECTED],0/pci1028,[EMAIL PROTECTED],7/[EMAIL 
PROTECTED]'  (string)

You want to match an object with this usb_device.serial property and set the 
info.ignore property to true. The fdi(4) file would look like this:


# cat > /etc/hal/fdi/preprobe/30user/10-ignore-usb.fdi
<?xml version="1.0" encoding="UTF-8"?>
<deviceinfo version="0.2">
  <device>
    <match key="usb_device.serial" string="DEF1061F7B62">
      <merge key="info.ignore" type="bool">true</merge>
    </match>
  </device>
</deviceinfo>

Once the fdi is in place, 'svcadm restart hal' to enable it.

Eventually we'll need better interaction between HAL and ZFS.

-Artem.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Samba and ZFS ACL Question

2007-04-10 Thread Jiri Sasek
Samba 3.0.25rc1 was released 2 days ago, so the final version will be 
available soon. The vfs_zfsacl.c module should be tested soon, so I think it 
is a question of 2-3 weeks.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Poor man's backup by attaching/detaching mirrordrives on a _striped_ pool?

2007-04-10 Thread Anton B. Rang
 You'd want to export them, not detach them.

But you can't export just one branch of the mirror, can you?

 Off the top of my head (i.e. untested):
 
  - zpool create tank mirror dev1 dev2 dev3
 - zpool export tank

But this will unmount all the file systems, right?

-- Anton
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Poor man's backup by attaching/detaching mirror

2007-04-10 Thread Anton B. Rang
 How would you access the data on that device?

Presumably, zpool import.

This is basically what everyone does today with mirrors, isn't it? :-)

Anton
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: ZFS improvements

2007-04-10 Thread Anton B. Rang
 please stop crashing the kernel.
 
 This is:
 
 6322646 ZFS should gracefully handle all devices failing (when writing)

That's only one cause of panics.

At least two of gino's panics appear to be due to corrupted space maps, for instance. 
I think there may also still be a case where a failure to read metadata during 
a transaction commit leads to a panic, too. Maybe that one's been fixed, or 
maybe it will be handled by the above bug.

Maybe someone needs to file a bug/RFE to remove all panics from ZFS, at least 
in non-debug builds? The QFS approach is to panic when inconsistency is found 
on debug builds, but return an appropriate error code on release builds, which 
seems reasonable.

I/O errors, of course, should never lead to a panic. I think we [you] fixed all 
of those cases in UFS, and QFS, long ago.

Anton
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: ZFS improvements

2007-04-10 Thread Anton B. Rang
 Without understanding the underlying pathology it's impossible to fix a ZFS 
 pool.

Sorry, but I have to disagree with this.

The goal of fsck is not to bring a file system into the state it should be in 
had no errors occurred. The goal, rather, is to bring a file system to a 
self-consistent state. Ideally, data should be recoverable when it's believed 
to be good (ZFS has a big advantage here, since the checksums can be used to 
validate block pointers).

The ZFS on-disk data structure is basically a tree. zfsck could fairly easily 
walk the tree and ensure that, for instance, pools are at the top level; space 
maps match allocated blocks; block pointers from multiple files don't overlap; 
file lengths match their allocation; ACLs are not corrupted; compressed data is 
not damaged; directories are in the proper format; etc.

This might be impractical for a large file system, of course. It might be 
easier to have a 'zscavenge' that would recover data, where possible, from a 
corrupted file system. But there should be at least one of these. Losing a 
whole pool due to the corruption of a couple of blocks of metadata is a Bad 
Thing.
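
For what it's worth, part of such a traversal already exists in zdb(1M); a 
hedged sketch, read-only but potentially very slow on a large pool:

# zdb -b tank       (traverse the pool, checking for leaked or double-allocated blocks)
# zdb -c tank       (additionally verify metadata checksums)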
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS improvements

2007-04-10 Thread Eric Schrock
On Tue, Apr 10, 2007 at 09:43:39PM -0700, Anton B. Rang wrote:
 
 That's only one cause of panics.
 
 At least two of gino's panics appear due to corrupted space maps, for
 instance. I think there may also still be a case where a failure to
 read metadata during a transaction commit leads to a panic, too. Maybe
 that one's been fixed, or maybe it will be handled by the above bug.

The space map bugs should have been fixed as part of:

6458218 assertion failed: ss == NULL

Which went into Nevada build 60.  There are several different
pathologies that can result from this bug, and I don't know if the
panics are from before or after this fix.  I hope folks from the ZFS
team are investigating, but I can't speak for everyone.

 Maybe someone needs to file a bug/RFE to remove all panics from ZFS,
 at least in non-debug builds? The QFS approach is to panic when
 inconsistency is found on debug builds, but return an appropriate
 error code on release builds, which seems reasonable.

In order to do this we need to fix 6322646 first, which addresses the
issue of 'backing out' of a transaction once we're down in the ZIO layer
discovering these problems.  It doesn't matter whether it's due to an I/O
error or a space map inconsistency if we can't propagate the error.

- Eric

--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Re: CAD application not working with zfs

2007-04-10 Thread Dirk Jakobsmeier
Hello Bart,

we're using NFSv3 by default but also tried version 2. There is no difference 
between the two. Version 4 is not possible with the AIX 4.3.3 clients.

Regards
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss