Re: [zfs-discuss] APPLE: ZFS need bug corrections instead of new func! Or?

2009-06-20 Thread Dave



Haudy Kazemi wrote:



I think a better question would be: what kind of tests would be most
promising for turning some subclass of these lost pools reported on
the mailing list into an actionable bug?

my first bet would be writing tools that test for ignored sync cache
commands leading to lost writes, and apply them to the case when iSCSI
targets are rebooted but the initiator isn't.

I think in the process of writing the tool you'll immediately bump
into a defect, because you'll realize there is no equivalent of a
'hard' iSCSI mount like there is in NFS.  and there cannot be a strict
equivalent to 'hard' mounts in iSCSI, because we want zpool redundancy
to preserve availability when an iSCSI target goes away.  I think the
whole model is wrong somehow.
  
I'd surely hope that a ZFS pool with redundancy built on iSCSI targets 
could survive the loss of some targets whether due to actual failures or 
necessary upgrades to the iSCSI targets (think OS upgrades + reboots on 
the systems that are offering iSCSI devices to the network.)




I've had a mirrored zpool built from Solaris iSCSI target servers in 
production since April 2008. I've had disks die and reboots of the 
target servers - ZFS has handled them very well. My biggest wish is to 
be able to tune the iSCSI timeout value so ZFS can fail over reads/writes 
to the other half of the mirror more quickly than it does now (about 180 
seconds on my config). A minor gripe considering the features that ZFS 
provides.


I've also had the ZFS server (the initiator aggregating the mirrored 
disks) unintentionally power-cycled with the iSCSI zpool imported. The 
pool re-imported and scrubbed fine.


ZFS is definitely my FS of choice - by far.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mobo SATA migration to AOC-SAT2-MV8 SATA card

2009-06-20 Thread dick hoogendijk
On Fri, 19 Jun 2009 16:42:43 -0700
Jeff Bonwick jeff.bonw...@sun.com wrote:

 Yep, right again.

That is, if the boot drives are not one of those.. ;-)

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | nevada / OpenSolaris 2009.06 release
+ All that's really worth doing is what we do for others (Lewis Carrol)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mobo SATA migration to AOC-SAT2-MV8 SATA card

2009-06-20 Thread Louis-Frédéric Feuillette
A couple questions out of pure curiosity.

Working on the assumption that you are going to be adding more drives to
your server, why not just add the new drives to the Supermicro
controller and keep the existing pool (well, vdev) where it is?

Reading your blog, it seems that you need one (or two if you are
mirroring) SATA ports for your rpool.  Why not just migrate two drives
to the new controller and leave the others where they are?  OpenSolaris
won't care where the drives are physically connected, as long as you
export and re-import the pool.
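
For the data pool, the move itself is just an export/import pair; a rough
sketch (pool name as in your blog post, cabling done while powered off):

  # zpool export pool_name     # before shutting down and moving the cables
  ...shutdown, install the card, recable the drives...
  # zpool import pool_name     # after boot; ZFS finds the disks by their labels

ZFS identifies pool members by the labels on the disks, not by the
controller port they hang off, which is why the physical location doesn't
matter.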

-Jebnor

On Fri, 2009-06-19 at 16:21 -0700, Simon Breden wrote:
 Hi,
 
 I'm using 6 SATA ports from the motherboard but I've now run out of SATA 
 ports, and so I'm thinking of adding a Supermicro AOC-SAT2-MV8 8-port SATA 
 controller card.
 
 What is the procedure for migrating the drives to this card?
 Is it a simple case of (1) issuing a 'zpool export pool_name' command, (2) 
 shutdown, (3) insert card and move all SATA cables for drives from mobo to 
 card, (4) boot and issue a 'zpool import pool_name' command ?
 
 Thanks,
 Simon
 
 http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
-- 
Louis-Frédéric Feuillette jeb...@gmail.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Server Cloning With ZFS?

2009-06-20 Thread Fajar A. Nugraha
On Sat, Jun 20, 2009 at 9:18 AM, Dave Ringkorno-re...@opensolaris.org wrote:
 What would be wrong with this:
 1) Create a recursive snapshot of the root pool on homer.
 2) zfs send this snapshot to a file on some NFS server.
 3) Boot my 220R (same architecture as the E450) into single user mode from a 
 DVD.
 4) Create a zpool on the 220R's local disks.
 5) zfs receive the snapshot created in step 2 to the new pool.
 6) Set the bootfs property.
 7) Reboot the 220R.

 Now my 220R comes up as homer, with its IP address, users, root pool 
 filesystems, any software that was installed in the old homer's root pool, 
 etc.

No, your 220R will most likely be unbootable, because you haven't run
installboot. See the link I sent earlier. Other than that, the steps
should work out fine. I've only tested it on two servers of the same
type, though (both were T2000s).
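
For the archives, on SPARC the missing step looks roughly like this (a
sketch only; the slice name is a placeholder for whatever holds the new
root pool, and it is run from the DVD-booted environment):

  # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
        /dev/rdsk/c0t0d0s0

On x86 the equivalent would be installgrub with the stage1/stage2 files.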

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs on 32 bit?

2009-06-20 Thread Fajar A. Nugraha
On Sat, Jun 20, 2009 at 2:53 AM, Miles Nordincar...@ivy.net wrote:
 fan == Fajar A Nugraha fa...@fajar.net writes:
 et == Erik Trimble erik.trim...@sun.com writes:

   fan The N610N that I have (BCM3302, 300MHz, 64MB) isn't even
   fan powerful enough to saturate either the gigabit wired

 I can't find that device.  Did you misspell it or something?  BCM
 probably means Broadcom, and Broadcom is probably MIPS---it's TI
 (omap) and Marvell (orion) that are selling arm.

Correct, it's MIPS. My point is that the embedded devices and cheap netbooks
I've used aren't likely to be powerful enough for ZFS.

I have the impression that common ARM-based appliances today (like
DLink's DNS-323 NAS, 500 MHz Marvell 88F5181 proprietary Feroceon ARM)
would have similar performance characteristics, and I was wondering whether
they are truly feasible targets for OpenSolaris and ZFS.

 That said, ARM is interesting because the chips just recently got a
 lot faster at the same power/price point, like 1GHz.

using zfs on THAT might make more sense :D

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] two pools on boot disk?

2009-06-20 Thread Charles Hedrick
I have a small system that is going to be a file server. It has two disks. I'd 
like just one pool for data. Is it possible to create two pools on the boot 
disk, and then add the second disk to the second pool? The result would be a 
single small pool for root, and a second pool containing the rest of that disk 
plus the second disk? 

The installer seems to want to use the whole disk for the root pool. Is there a 
way to change that?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] how to do backup

2009-06-20 Thread Charles Hedrick
I have a USB disk to which I want to do a backup. I've used send | receive. It 
works fine until I try to reboot. At that point the system fails to come up, 
because the backup copy is set to be mounted at the original location, so the 
system tries to mount two different things in the same place. I guess I can 
have the script set mountpoint=none, but I'd think there would be a better approach.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] how to do backup

2009-06-20 Thread James Lever


On 20/06/2009, at 9:55 PM, Charles Hedrick wrote:

I have a USB disk, to which I want to do a backup. I've used send |  
receive. It works fine until I try to reboot. At that point the  
system fails to come up because the backup copy is set to be mounted  
at the original location so the system tries to mount two different  
things the same place. I guess I can have the script set  
mountpoint=none, but I'd think there would be a better approach.


Would a 'zpool export $backup_pool' do the trick?  (And consequently,
you'd import the USB zpool before you start your backups.)
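
Something like this, as a rough sketch (pool and snapshot names are made
up, and error handling is omitted):

  # zpool import backup                          # attach the USB pool
  # zfs snapshot -r rpool@backup-today
  # zfs send -R rpool@backup-today | zfs receive -d -F backup
  # zpool export backup                          # detach it again

Because the backup pool stays exported outside the backup window, its
inherited mountpoints never collide with the live filesystems at boot.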


cheers,
James

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] two pools on boot disk?

2009-06-20 Thread Michael Sullivan

Hi Charles,

Works fine.

I did just that with my home system.  I have two 0.5 TB disks which I 
didn't want to dedicate to rpool, and I wanted to create a second pool 
on those disks which could be expanded.  I set up the rpool to be 
100 GB and that left me with a 400 GB partition to make into an extended 
pool (xpool).  There are probably some downsides to doing this, but I 
have yet to come across them.


The reason I did this is to get around the limitation on rpool that 
restricts it to simple mirrors, which cannot be extended with additional 
striped vdevs.


After that was set up I attached 2x 1 TB disks to the extended pool in  
a mirrored configuration.


Check out my blog entry which explains exactly how to do this.  The  
system I used in the demo is inside VirtualBox, but I have real  
hardware running in the configuration I mention.  Using VirtualBox, I  
worked out the finer bits, before trying it out on my live machine.


http://www.kamiogi.net/Kamiogi/Frame_Dragging/Entries/2009/5/10_OpenSolaris_Disk_Partitioning_and_the_Free_Hog.html

One really interesting bit is how easy it is to make a disk in a 
pool bigger by doing a zpool replace on the device.  It couldn't 
be easier with ZFS.
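
For reference, the grow-by-replace sequence is roughly this (device names
are made up; in a mirror you have to replace every disk in the vdev before
the extra capacity appears, and depending on the build you may need an
export/import before the new size becomes visible):

  # zpool replace xpool c1t2d0 c1t3d0     # swap a smaller disk for a bigger one
  # zpool status xpool                    # wait until the resilver completes
  # zpool export xpool
  # zpool import xpool                    # pool now reflects the larger devices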


I've even done a fresh install on this configuration just recently, 
and other than being exposed for a bit while I broke the mirrors to 
install a fresh copy of the OS, everything worked out all right.  There 
were a few snags with namespace collisions when I re-imported the original 
rpool, but I'd already seen those before and wrote about them in another 
blog entry.


If you have any questions, feel free to let me know.

Cheers,

Mike

---
Michael Sullivan
michael.p.sulli...@me.com
Japan Mobile: +81-80-3202-2599
US Phone: +1-561-283-2034

On 20 Jun 2009, at 20:44 , Charles Hedrick wrote:

I have a small system that is going to be a file server. It has two  
disks. I'd like just one pool for data. Is it possible to create two  
pools on the boot disk, and then add the second disk to the second  
pool? The result would be a single small pool for root, and a second  
pool containing the rest of that disk plus the second disk?


The installer seems to want to use the whole disk for the root pool.  
Is there a way to change that?

--
This message posted from opensolaris.org


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Replacing a failed drive

2009-06-20 Thread Simon Breden
Great, thanks a lot Jeff.

Cheers,
Simon
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] APPLE: ZFS need bug corrections instead of new func! Or?

2009-06-20 Thread Bogdan M. Maryniuk
Hi, Miles!
Hope, weather is fine at your place. :-)

On Sat, Jun 20, 2009 at 5:09 AM, Miles Nordin wrote:
 I understood Bogdan's post was a trap: ``provide bug numbers.  Oh,
 they're fixed?  nothing to see here then.  no bugs?  nothing to see
 here then.''

It would be great if you did not put words in my mouth, please. All
I wanted to say (not to you, but to everyone, including myself) is that we
have to be constructive and use common sense (which is not so common,
unfortunately). Otherwise I am not sure we are welcome here.

 Does this mean ZFS was not broken before those bugs were filed?

Does this mean ZFS has no more bugs? Does this mean that we have to stop
using it? Were the flame-throwing dragons real? Is there life on
Mars?.. :) Just kidding, never mind. :-)

 Also, as I said elsewhere, there's a barrier controlled by Sun to
 getting bugs accepted.

Looks like you're new here. :-) For example, there is a list of very nasty
bugs in Sun Java that were filed in 2006 or earlier, and lots of people
(including me) are still suffering from them now, in 2009. But hey, it's not
our job to cry and spread FUD, I think.

How about this scenario: either we find a workaround (and attach it to the
same bug report), or, if the bug is really that critical (and Sun rejected it),
we make a nice PDF with exploit sources or step-by-step instructions for
crashing the system into Italian spaghetti and publish it on Slashdot :-) so
the good guys can work out the rest of how to kill Solaris in two seconds.
Then I am 100.0% sure Sun will patch it immediately. It's exaggerated, but
still, do you like it?

But instead of doing it this way, the Slashdot folks mostly just post vague
blah-blah-blah (often modded insightful: 5 or interesting: 5 when it is
just trolling or FUD) rather than doing something really useful. I am pretty
sure that if there were graphed comparisons with source code on Phoronix or a
similar site, like "FAT32 seriously beats ZFS in stability", "How to DoS
your ZFS from Google Android" or "Linux's ext2 is far faster than ZFS",
then that would add more adrenaline to Sun's folks fixing it.

However... there is only Slashdot talk, which is nothing more than
Slashdot talk. I understand that you and other Slashdot folks have had
some problems. But I haven't, and neither have lots of other people for
whom ZFS works just fine. Thus it is even/even. :-P

 HTH.

No, it does not. It's just yet another e-mail posting that does not really
help fix bugs. :-)

 I think a better question would be: what kind of tests would be most
 promising for turning some subclass of these lost pools reported on
 the mailing list into an actionable bug?

 my first bet would be writing tools that test for ignored sync cache
 commands leading to lost writes, and apply them to the case when iSCSI
 targets are rebooted but the initiator isn't.

 I think in the process of writing the tool you'll immediately bump
 into a defect, because you'll realize there is no equivalent of a
 'hard' iSCSI mount like there is in NFS.  and there cannot be a strict
 equivalent to 'hard' mounts in iSCSI, because we want zpool redundancy
 to preserve availability when an iSCSI target goes away.  I think the
 whole model is wrong somehow.

Now this DOES make sense! :-) Actually, iSCSI has lots of small issues
that grow into serious problems, so they need to be brought up and clearly
described, and I am sure suggestions are welcome.

If you want to help with stress tests, then I think I can help you with
this. For example, here is a very nice article on an iSCSI setup for Time
Machine. The article is also a very nice example for the Slashdot folks of
how to write docs, complaints and reports that make sense:

So go check it out, follow the steps and build the same setup. Then write
some scripts that can bring it down, find out why and where the problem is,
suggest a solution and publish it in Sun's bug database. If you do that,
you have my applause and respect.

How does that sound to you? :-)

--
Kind regards, BM

Things that are stupid at the beginning rarely end up wisely.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mobo SATA migration to AOC-SAT2-MV8 SATA card

2009-06-20 Thread Simon Breden
OK, thanks again Jeff.

Cheers,
Simon
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mobo SATA migration to AOC-SAT2-MV8 SATA card

2009-06-20 Thread Simon Breden
OK, that should work then, as my boot drive is currently an old IDE drive, 
which I'm hoping to replace with a SATA SSD.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] x4500 resilvering spare taking forever?

2009-06-20 Thread Richard Elling

Also, b57 is about two years old and misses the performance improvements made
since then, especially in scrub performance.
-- richard


Tomas Ögren wrote:

On 19 June, 2009 - Joe Kearney sent me these 3,8K bytes:

  

I've got a Thumper running snv_57 and a large ZFS pool.  I recently
noticed a drive throwing some read errors, so I did the right thing
and zfs replaced it with a spare.



Are you taking snapshots periodically? If so, you're using a build old
enough to restart resilver/scrub whenever a snapshot is taken.

There has also been some bug where 'zpool status' as root restarts
resilver/scrub as well. Try as non-root.

/Tomas
  

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mobo SATA migration to AOC-SAT2-MV8 SATA card

2009-06-20 Thread Simon Breden
 Working on the assumption that you are going to be adding more drives to
your server, why not just add the new drives to the Supermicro
controller and keep the existing pool (well vdev) where it is?

That's not a bad idea. I just thought that the AOC-SAT2-MV8 has 2 more SATA 
ports than my mobo (M2N-SLI Deluxe), so it gives me room for another 2 drives, 
and I would prefer to keep the *data* zpool running from a single SATA 
controller if possible, to help guarantee uniform and consistent behaviour.

Also, as the AOC-SAT2-MV8 has the same Marvell chipset as the Thumper (so I've 
read), it should be a rock-solid solution.

Having said that, the existing on-motherboard SATA controller is powered by the 
NVidia MCP55 chipset as far as I recall (mobo: NVidia 570 SLI chipset), and I 
chose it because some Sun workstations also used it, so it seemed to be 
supported.

Mirrored BOOT:
As an aside, my idea is to add two SSDs as boot drives so I have a mirrored 
ZFS boot environment (named rpool, right?) within OpenSolaris 2009.06, but I 
need to read up a bit, as this may not even be possible right now, and I might 
need to just use a single SSD as a boot drive for now...
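
For what it's worth, mirroring an existing rpool on OpenSolaris x86 is
normally just an attach plus a GRUB install on the new disk; a sketch with
made-up device names (the second disk needs an SMI label and a slice 0
covering it):

  # zpool attach rpool c0t0d0s0 c0t1d0s0
  # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0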

My idea would be to keep the boot SSD drive(s) on the existing motherboard 
SATA controller, and move the existing data pool onto the AOC-SAT2-MV8.

AOC-SAT2-MV8 or AOC-USAS-L8i?:
I have to see if the AOC-SAT2-MV8 will work on the M2N-SLI Deluxe motherboard. 
I know that the mobo doesn't have PCI-X slots, so I won't be able to run the 
card at full speed, but I seem to remember people saying that it will work in 
32-bit mode, a bit slower, from a standard PCI 2.2 slot... but I need to check this.

From the Supermicro site for the AOC-SAT2-MV8, I see that they don't list 
Solaris among the supported OSes, so I suppose the drivers for this 
card/chipset are made by Sun?

One last point: I've seen some people say that a possibly better card than 
the AOC-SAT2-MV8 is the AOC-USAS-L8i, which uses PCIe. I had to check 
compatibility with this mobo, though, as I wasn't sure it had PCIe. I just 
checked Asus's page for the M2N-SLI Deluxe mobo and it shows the following 
expansion slots:
2 x PCI Express x16 slot at x16, x8 speed 
Support NVIDIA®SLI™ technology (both at x8 mode)
2 x PCI Express x1 
3 x PCI 2.2

So it looks like running the AOC-USAS-L8i in one of the PCI Express x16 slots 
might be a possibility too.

Any tips/advice gratefully received :)
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool iostat and iostat discrepancy

2009-06-20 Thread tester
Hi,

Does anyone know the difference between zpool iostat and iostat?


dd if=/dev/zero of=/test/test1/trash count=1 bs=1024k;sync

zpool iostat only shows 236K of I/O and 13 write ops, whereas iostat correctly 
shows a meg of activity.
zpool iostat -v test 5

                                 capacity     operations    bandwidth
pool                            used  avail   read  write   read  write
------------------------------  -----  -----  -----  -----  -----  -----
test                           1.14M   100G      0     13      0   236K
  c8t60060E800475F50075F50525d0  182K  25.0G      0      4      0  36.8K
  c8t60060E800475F50075F50526d0  428K  25.0G      0      4      0  87.7K
  c8t60060E800475F50075F50540d0  558K  50.0G      0      4      0   111K
------------------------------  -----  -----  -----  -----  -----  -----

iostat -xnz [devices] 5

                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    2.4    6.0    6.8   88.2  0.0  0.0    0.0    1.0   0   0 c8t60060E800475F50075F50540d0
    2.4    5.4    6.8   37.0  0.0  0.0    0.0    0.9   0   0 c8t60060E800475F50075F50526d0
    2.4    5.0    6.8  112.0  0.0  0.0    0.0    0.9   0   0 c8t60060E800475F50075F50525d0

dtrace also concurs with iostat

  device                                                 bytes   IOPS
  =====================================================  ======  ====
  /devices/scsi_vhci/s...@g60060e800475f50075f50525:a    224416    35
  /devices/scsi_vhci/s...@g60060e800475f50075f50526:a    486560    37
  /devices/scsi_vhci/s...@g60060e800475f50075f50540:a    608416    33

Thanks
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recover data after zpool create

2009-06-20 Thread stephen bond
Thank you!
This is exactly what I was looking for. Although this is ZFS (not a Windows 
FAT), the fact that creating a new pool is instantaneous means all the data is 
still there and only something like the table of contents was erased. As Unix 
directories are files, I suspect even the old structure may still be available; 
the create just wrote new files for the new pool.
I will read the ZFS docs and report in this thread what I find out.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool iostat and iostat discrepancy

2009-06-20 Thread Neil Perrin

On 06/20/09 11:14, tester wrote:

Hi,

Does anyone know the difference between zpool iostat  and iostat?


dd if=/dev/zero of=/test/test1/trash count=1 bs=1024k;sync

pool only shows 236K IO and 13 write ops. whereas iostat shows a correctly meg 
of activity.


The zpool iostat numbers are per-second rates as well, averaged over the
5-second interval, so 236K * 5 = 1180K - roughly the megabyte you wrote.
zpool iostat -v test 1 would make this clearer.

The iostat output below also shows 237K (88+37+112) being written per second.
I'm not sure why any reads occurred though. When I did a quick
experiment there were no reads.

Enabling compression gives much better numbers when writing zeros!

Neil.



zpool iostat -v test 5

                                 capacity     operations    bandwidth
pool                            used  avail   read  write   read  write
------------------------------  -----  -----  -----  -----  -----  -----
test                           1.14M   100G      0     13      0   236K
  c8t60060E800475F50075F50525d0  182K  25.0G      0      4      0  36.8K
  c8t60060E800475F50075F50526d0  428K  25.0G      0      4      0  87.7K
  c8t60060E800475F50075F50540d0  558K  50.0G      0      4      0   111K
------------------------------  -----  -----  -----  -----  -----  -----

iostat -xnz [devices] 5

                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    2.4    6.0    6.8   88.2  0.0  0.0    0.0    1.0   0   0 c8t60060E800475F50075F50540d0
    2.4    5.4    6.8   37.0  0.0  0.0    0.0    0.9   0   0 c8t60060E800475F50075F50526d0
    2.4    5.0    6.8  112.0  0.0  0.0    0.0    0.9   0   0 c8t60060E800475F50075F50525d0

dtrace also concurs with iostat

  device                                                 bytes   IOPS
  =====================================================  ======  ====
  /devices/scsi_vhci/s...@g60060e800475f50075f50525:a    224416    35
  /devices/scsi_vhci/s...@g60060e800475f50075f50526:a    486560    37
  /devices/scsi_vhci/s...@g60060e800475f50075f50540:a    608416    33

Thanks

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool iostat and iostat discrepancy

2009-06-20 Thread tester
Neil,

Thanks.

That makes sense. Maybe the zpool man page could say that it is a rate, as the 
iostat man page does. I think the reads come from the zpool iostat command 
itself; zpool iostat doesn't capture that.

Thanks
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ACL not being inherited correctly

2009-06-20 Thread Andrew Watkins

[I did post this in NFS, but I think it should be here]

I am playing with ACLs on a snv_114 (and Storage 7110) system, and I have 
noticed that strange things are happening to the ACLs - or am I doing something 
wrong?

When you create a new sub-directory or file, the ACLs seem to be incorrect.

# zfs create  rpool/export/home/andrew
# zfs set aclinherit=passthrough   rpool/export/home/andrew
# zfs set aclmode=passthrough   rpool/export/home/andrew

# chown andrew:staff  /export/home/andrew
# chmod A+user:oxygen:rwxpdDaARWcCos:fd-:allow /export/home/andrew

# ls -ldV /export/home/andrew
drwxr-xr-x+  3 andrew   staff  3 Jun 19 17:09 /export/home/andrew
user:oxygen:rwxpdDaARWcCos:fd-:allow
 owner@:--:---:deny
 owner@:rwxp---A-W-Co-:---:allow
 group@:-w-p--:---:deny
 group@:r-x---:---:allow
  everyone@:-w-p---A-W-Co-:---:deny
  everyone@:r-x---a-R-c--s:---:allow

# mkdir /export/home/andrew/foo

# ls -ldV /export/home/andrew/foo
drwxr-xr-x+  2 andrew   staff  2 Jun 19 17:09 
/export/home/andrew/foo

user:oxygen:rwxpdDaARWcCos:fdi---I:allow   <-- Altered
user:oxygen:rwxpdDaARWcCos:--I:allow   <-- NEW
 owner@:--:---:deny
 owner@:rwxp---A-W-Co-:---:allow
 group@:-w-p--:---:deny
 group@:r-x---:---:allow
  everyone@:-w-p---A-W-Co-:---:deny
  everyone@:r-x---a-R-c--s:---:allow


Andrew
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] how to destroy a pool by id?

2009-06-20 Thread Kent Watsen





Over the course of multiple OpenSolaris installs, I first created a
pool called "tank" and then, later, reusing some of the same
drives, I created another pool called tank. I can `zpool export tank`,
but when I `zpool import tank`, I get:

bash-3.2# zpool import tank
  cannot import 'tank': more than one matching pool
  import by numeric ID instead


Then, using just `zpool import` I see the IDs:

bash-3.2# zpool import
    pool: tank
      id: 15608629750614119537
   state: ONLINE
  action: The pool can be imported using its name or numeric identifier.
  config:

        tank        ONLINE
          raidz2    ONLINE
            c3t0d0  ONLINE
            c3t4d0  ONLINE
            c4t0d0  ONLINE
            c4t4d0  ONLINE
            c5t0d0  ONLINE
            c5t4d0  ONLINE
          raidz2    ONLINE
            c3t1d0  ONLINE
            c3t5d0  ONLINE
            c4t1d0  ONLINE
            c4t5d0  ONLINE
            c5t1d0  ONLINE
            c5t5d0  ONLINE

    pool: tank
      id: 3280066346390919920
   state: ONLINE
  status: The pool was last accessed by another system.
  action: The pool can be imported using its name or numeric identifier and
          the '-f' flag.
     see: http://www.sun.com/msg/ZFS-8000-EY
  config:

        tank          ONLINE
          raidz2      ONLINE
            c4t1d0p0  ONLINE
            c3t1d0p0  ONLINE
            c4t4d0p0  ONLINE
            c3t4d0p0  ONLINE
            c3t5d0p0  ONLINE
            c3t0d0p0  ONLINE

How can I destroy the pool 3280066346390919920 so I don't have to
specify the ID to import tank in the future?

Thanks,
kent




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] how to destroy a pool by id?

2009-06-20 Thread Cindy Swearingen
Hi Kent,

This is what I do in similar situations:

1. Import the pool to be destroyed by using the ID. In your case,
like this:

# zpool import 3280066346390919920

If tank already exists you can also rename it:

# zpool import 3280066346390919920 tank2

Then destroy it:

# zpool destroy tank2

I wish we had a zpool destroy option like this:

# zpool destroy -really_dead tank2

Cindy

- Original Message -
From: Kent Watsen k...@watsen.net
Date: Saturday, June 20, 2009 3:02 pm
Subject: [zfs-discuss] how to destroy a pool by id?
To: zfs-discuss@opensolaris.org

 Over the course of multiple OpenSolaris installs, I first created a
 pool called tank and then, later, reusing some of the same
 drives, I created another pool called tank.  I can `zpool export tank`,
 but when I `zpool import tank`, I get:

 bash-3.2# zpool import tank
   cannot import 'tank': more than one matching pool
   import by numeric ID instead

 Then, using just `zpool import` I see the IDs:

 bash-3.2# zpool import
     pool: tank
       id: 15608629750614119537
    state: ONLINE
   action: The pool can be imported using its name or numeric identifier.
   config:

         tank        ONLINE
           raidz2    ONLINE
             c3t0d0  ONLINE
             c3t4d0  ONLINE
             c4t0d0  ONLINE
             c4t4d0  ONLINE
             c5t0d0  ONLINE
             c5t4d0  ONLINE
           raidz2    ONLINE
             c3t1d0  ONLINE
             c3t5d0  ONLINE
             c4t1d0  ONLINE
             c4t5d0  ONLINE
             c5t1d0  ONLINE
             c5t5d0  ONLINE

     pool: tank
       id: 3280066346390919920
    state: ONLINE
   status: The pool was last accessed by another system.
   action: The pool can be imported using its name or numeric identifier and
           the '-f' flag.
      see: http://www.sun.com/msg/ZFS-8000-EY
   config:

         tank          ONLINE
           raidz2      ONLINE
             c4t1d0p0  ONLINE
             c3t1d0p0  ONLINE
             c4t4d0p0  ONLINE
             c3t4d0p0  ONLINE
             c3t5d0p0  ONLINE
             c3t0d0p0  ONLINE

 How can I destroy the pool 3280066346390919920 so I don't have to
 specify the ID to import tank in the future?

 Thanks,
 kent

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ACL not being inherited correctly

2009-06-20 Thread Mark Shellenbaum

Andrew Watkins wrote:

[I did post this in NFS, but I think it should be here]

I am playing with ACL on snv_114 (and Storage 7110) system and I have 
noticed that strange things are happing to ACL's or am I doing something 
wrong.


When you create a new sub-directory or file the ACL's seem to be incorrect.



It's actually doing exactly what it's supposed to do.  See below for an 
explanation.



# zfs create  rpool/export/home/andrew
# zfs set aclinherit=passthrough   rpool/export/home/andrew
# zfs set aclmode=passthrough   rpool/export/home/andrew

# chown andrew:staff  /export/home/andrew
# chmod A+user:oxygen:rwxpdDaARWcCos:fd-:allow /export/home/andrew

# ls -ldV /export/home/andrew
drwxr-xr-x+  3 andrew   staff  3 Jun 19 17:09 /export/home/andrew
user:oxygen:rwxpdDaARWcCos:fd-:allow
 owner@:--:---:deny
 owner@:rwxp---A-W-Co-:---:allow
 group@:-w-p--:---:deny
 group@:r-x---:---:allow
  everyone@:-w-p---A-W-Co-:---:deny
  everyone@:r-x---a-R-c--s:---:allow

# mkdir /export/home/andrew/foo

# ls -ldV /export/home/andrew/foo
drwxr-xr-x+  2 andrew   staff  2 Jun 19 17:09 
/export/home/andrew/foo

user:oxygen:rwxpdDaARWcCos:fdi---I:allow  Altered


The entry with the inheritance flags of fdi is an inherit-only ACE 
which does NOT affect access control and is used for future propagation 
to children of the new directory.


This is done since chmod(2) *may*, under some circumstances, alter or reduce 
the permissions of ACEs that affect access control.  A chmod(2) operation 
never alters inherit-only ACEs.  This then allows future 
directories/files to always inherit the same ACL as their parent, or their 
parent's parent, and so on.




user:oxygen:rwxpdDaARWcCos:--I:allow  NEW


The I indicates the ACE was inherited.  This is the ACE that will be used 
during access control.
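
A quick way to watch this (reusing the directories above): chmod the child
and list it again. What happens to the access ACEs depends on the dataset's
aclmode, but the inherit-only ACE should come through untouched:

  # chmod 700 /export/home/andrew/foo     # tighten the mode on the child
  # ls -dV /export/home/andrew/foo        # the fdi...I (inherit-only) entry is still there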


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] how to destroy a pool by id?

2009-06-20 Thread Andre van Eyssen

On Sat, 20 Jun 2009, Cindy Swearingen wrote:


I wish we had a zpool destroy option like this:

# zpool destroy -really_dead tank2


Cindy,

The moment we implemented such a thing, there would be a rash of requests 
saying:


a) I just destroyed my pool with -really_dead - how can I get my data 
back??!
b) I was able to recover my data from -really_dead - can we have 
-ultra-nuke please?


--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss