Re: [zfs-discuss] doubt on solaris 10

2006-12-11 Thread Darren J Moffat

dudekula mastan wrote:
 
  Hi ALL,
   
  Is it possible to install Solaris 10 on an HP VISUALIZE XL-Class server?


The ZFS discussion alias is probably not the best place to ask this.

In general the way to find out about Solaris support on a particular
hardware platform is to look at the hardware compatibility list:

http://www.sun.com/bigadmin/hcl/


--
Darren J Moffat


Re: [zfs-discuss] Re: ZFS Usage in Warehousing (lengthy intro)

2006-12-11 Thread Roch - PAE

Anton B. Rang writes:
  If your database performance is dominated by sequential reads, ZFS may
  not be the best solution from a performance perspective. Because ZFS
  uses a write-anywhere layout, any database table which is being
  updated will quickly become scattered on the disk, so that sequential
  read patterns become random reads. 

While for OLTP our best practice is to set the ZFS
recordsize to match the DB blocksize, for DSS we would
advise running without such tuning.

True, the sequential reads become random reads, but they are
reads of 128K records, and that should still let you draw
close to 20-25 MB/s per [modern] disk.

So to reach your goal of 500 MB/s++ you would need 20++ disks.
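
For the OLTP case, the recordsize tuning mentioned above is a one-line
property change, best made before the data files are created, since it only
affects newly written files.  A minimal sketch, assuming an 8K database
block size and a placeholder dataset name:

# zfs set recordsize=8k pool/dbfiles
# zfs get recordsize pool/dbfiles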

-r


  
  Of course, ZFS has other benefits, such as ease of use and protection
  from many sources of data corruption; if you want to use ZFS in this
  application, though, I'd expect that you will need substantially more
  raw I/O bandwidth than UFS or QFS (which update in place) would
  require. 
  
  (If you have predictable access patterns to the tables, a QFS setup
  which ties certain tables to particular LUNs using stripe groups might
  work well, as you can guarantee that accesses to one table will not
  interfere with accesses to another.) 
  
  As always, your application is the real test.  ;-)
   
   


[zfs-discuss] Re: disappearing mount - expected behavior?

2006-12-11 Thread Wes Williams
 What happens is that /home/thomas/zfs gets mounted
 and then the
 automounter starts.  (Or /home/thomas is found
 missing and then
 the zfs mount is not completed)
 
 Probably requires legacy mount point.
 
 
 Casper

I'm experiencing this same behavior (ZFS NFS mounts don't show up after reboot) 
with b50, and was hoping someone could explain it a little more clearly for those 
of us not as intimately familiar with Solaris innards.  

I can simply issue one 'zfs share zfsdata/[user1]' command and all my other 
ZFS-issued NFS mount points reappear.

My /etc/auto_home is:
[user1] 127.0.0.1:/export/home/[user1]
[user2] 127.0.0.1:/export/home/[user2]
   etc...

All SMF services are running normally - nothing in maintenance or failed to run.

I do NOT have any entries in my /etc/vfstab pertaining to the /export/home 
directories, all of which are on a ZFS pool.

Should I simply comment out the /etc/auto_home entries?  Or, should I create an 
entry in /etc/vfstab?  If so, is one to /export/home sufficient or must one be 
done for each user [each user has his/her own ZFS pool]?

Sorry, I'm just not sure what a legacy mount point is.

Many thanks in advance!
 
 


[zfs-discuss] Can't destroy corrupted pool

2006-12-11 Thread Jim Hranicky
Ok, so I'm planning on wiping my test pool that seems to have problems 
with non-spare disks being marked as spares, but I can't destroy it:

# zpool destroy -f zmir
cannot iterate filesystems: I/O error

Anyone know how I can nuke this for good?

Jim
 
 


[zfs-discuss] weird problem

2006-12-11 Thread James Dickens

on a blade 1500...

bash-3.00# zfs set sharenfs=rw pool
cannot set sharenfs for 'pool': out of space
bash-3.00# zpool iostat pool
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
pool        49.7G   807M      0      0    169     39


bash-3.00# zfs get all pool
NAME  PROPERTY       VALUE                  SOURCE
pool  type           filesystem             -
pool  creation       Sun Sep 10  1:29 2006  -
pool  used           33.1G                  -
pool  available      0                      -
pool  referenced     32.4G                  -
pool  compressratio  1.01x                  -
pool  mounted        yes                    -
pool  quota          none                   default
pool  reservation    none                   default
pool  recordsize     128K                   default
pool  mountpoint     /pool                  default
pool  sharenfs       root=192.168.1.151,rw  local
pool  checksum       on                     default
pool  compression    on                     local
pool  atime          on                     default
pool  devices        on                     default
pool  exec           on                     default
pool  setuid         on                     default
pool  readonly       off                    default
pool  zoned          off                    default
pool  snapdir        hidden                 default
pool  aclmode        groupmask              default
pool  aclinherit     secure                 default

Okay, I guess the question is why the 'zpool iostat pool' output is
different from the 'zfs get all' info.

James Dickens
uadmin.blogspot.com


Re: [zfs-discuss] hardware planning for storage server

2006-12-11 Thread Bart Smaalders

Jakob Praher wrote:

hi all,

I'd like to build a solid storage server using zfs and opensolaris. The
server more or less should have a NAS role, thus using nfsv4 to export
the data to other nodes.

...
what would be your reasonable advice?



First of all, figure out what you need in terms of capacity and
IOPS.  This will determine the number of spindles, CPUs,
network adaptors, etc.

Keep in mind, for large sequential reads and large writes you can
get a significant fraction of the max throughput of the drives;
my 4 x 500 GB RAIDZ configuration does 150 MB/sec pretty consistently.
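
For reference, a layout like that is created with a single command; the
pool and device names below are only placeholders:

# zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0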

If you're doing small random reads or writes, you'll be much more
limited by the number of spindles and the way you configure them.

- Bart


--
Bart Smaalders  Solaris Kernel Performance
[EMAIL PROTECTED]   http://blogs.sun.com/barts


[zfs-discuss] Re: Can't destroy corrupted pool

2006-12-11 Thread Jim Hranicky
BTW, I'm also unable to export the pool -- same error.

Jim
 
 


Re[2]: [zfs-discuss] ZFS problems

2006-12-11 Thread Robert Milkowski
Hello James,

Saturday, November 18, 2006, 11:34:52 AM, you wrote:
JM as far as I can see, your setup does not meet the minimum
JM redundancy requirements for a Raid-Z, which is 3 devices.
JM Since you only have 2 devices you are out on a limb.


Actually, a raid-z with only two disks is fine and you do get redundancy.
However, it would make more sense to use a mirror with just two disks:
performance would be better and the available space would be the same.
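
For comparison (device names are placeholders), the two layouts would be
created like this:

# zpool create tank raidz  c1t0d0 c1t1d0
# zpool create tank mirror c1t0d0 c1t1d0

Both give roughly one disk's worth of usable space, but the mirror can read
from either side and avoids raid-z parity work on writes.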




-- 
Best regards,
 Robertmailto:[EMAIL PROTECTED]
   http://milek.blogspot.com



[zfs-discuss] Re: Can't destroy corrupted pool

2006-12-11 Thread Jim Hranicky
Nevermind:

# zfs destroy [EMAIL PROTECTED]:28
cannot open '[EMAIL PROTECTED]:28': I/O error

Jim
 
 


Re: [zfs-discuss] Can't destroy corrupted pool

2006-12-11 Thread Eric Schrock
You are likely hitting:

6397052 unmounting datasets should process /etc/mnttab instead of traverse DSL

Which was fixed in build 46 of Nevada.  In the meantime, you can remove
/etc/zfs/zpool.cache manually and reboot, which will remove all your
pools (which you can then re-import on an individual basis).
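
A sketch of that sequence (the second pool name below is just a placeholder
for your other, healthy pools):

# rm /etc/zfs/zpool.cache
# reboot
...after the reboot...
# zpool import          (lists pools that are available for import)
# zpool import tank     (re-import each of the pools you want back, by name)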

- Eric

On Mon, Dec 11, 2006 at 06:58:22AM -0800, Jim Hranicky wrote:
 Ok, so I'm planning on wiping my test pool that seems to have problems 
 with non-spare disks being marked as spares, but I can't destroy it:
 
 # zpool destroy -f zmir
 cannot iterate filesystems: I/O error
 
 Anyone know how I can nuke this for good?
 
 Jim
  
  

--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock


[zfs-discuss] Re: Can't destroy corrupted pool

2006-12-11 Thread Jim Hranicky
 You are likely hitting:
 
 6397052 unmounting datasets should process
 /etc/mnttab instead of traverse DSL
 
 Which was fixed in build 46 of Nevada.  In the
 meantime, you can remove
 /etc/zfs/zpool.cache manually and reboot, which will
 remove all your
 pools (which you can then re-import on an individual
 basis).

I'm running b51, but I'll try deleting the cache.

Jim
 
 


Re: [nfs-discuss] Re: [zfs-discuss] A Plea for Help: Thumper/ZFS/NFS/B43

2006-12-11 Thread Ben Rockwood

Robert Milkowski wrote:

Hello eric,

Saturday, December 9, 2006, 7:07:49 PM, you wrote:

ek Jim Mauro wrote:
  
Could be NFS synchronous semantics on file create (followed by 
repeated flushing of the write cache).  What kind of storage are you 
using (feel free to send privately if you need to) - is it a thumper? 

It's not clear why NFS-enforced synchronous semantics would induce 
different behavior than the same load to a local ZFS.
  


ek Actually i forgot he had 'zil_disable' turned on, so it won't matter in
ek this case.


Ben, are you sure zil_disable was set to 1 BEFORE pool was imported?
  


Yes, absolutely.  Set the var in /etc/system, reboot, system comes up.  That 
happened almost 2 months ago, long before this lock insanity problem 
popped up.


To be clear, the ZIL issue was a problem for the creation of a handful of 
files of any size.  Untar'ing a file was a massive performance drain.  
This issue, on the other hand, deals with thousands of little files 
being created all the time (IMAP locks).  These are separate issues from 
my point of view.  With the ZIL slowness, NFS performance was just slow but 
we didn't see massive CPU usage; with this issue, on the other hand, we 
were seeing waves in 10-second-ish cycles where the run queue would go 
sky high with 0% idle.  Please see the earlier mails for examples of the 
symptoms.


benr.



[zfs-discuss] Re: Can't destroy corrupted pool

2006-12-11 Thread Jim Hranicky
This worked. 

I've restarted my testing, but I've been fdisking each drive before I 
add it to the pool, and so far the system is behaving as expected
when I spin a drive down, i.e., the hot spare gets used automatically. 
This makes me wonder if it's possible to ensure that the forced
addition of a drive to a pool wipes the drive of any previous data, 
especially any ZFS metadata.
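
For what it's worth, ZFS keeps four copies of its label, two at the front
and two at the end of each device, so zeroing only the start of a disk can
leave stale metadata behind.  A rough way to clear the front by hand (the
device path is a placeholder):

# dd if=/dev/zero of=/dev/rdsk/c1t1d0s0 bs=1024k count=2

and similarly for the last couple of MB of the device.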

I'll keep the list posted as I continue my tests.

Jim
 
 


[zfs-discuss] zfs exported a live filesystem

2006-12-11 Thread Jim Hranicky
By mistake, I just exported my test filesystem while it was up
and being served via NFS, causing my tar over NFS to start
throwing stale file handle errors. 

Should I file this as a bug, or should I just not do that? :-)

Ko,
 
 


Re: [zfs-discuss] zfs exported a live filesystem

2006-12-11 Thread Eric Kustarz

Jim Hranicky wrote:

By mistake, I just exported my test filesystem while it was up
and being served via NFS, causing my tar over NFS to start
throwing stale file handle errors. 


So you had a pool and were sharing filesystems over NFS, NFS clients had 
active mounts, you removed /etc/zfs/zpool.cache, rebooted the machine, 
created a new pool with the same name (and same filesystem names)?


If so, that's not a bug; the NFS clients have no knowledge that you've 
re-created the pool.  Even though your namespace is the same, the 
filehandles will be different since your filesystems are different (and 
have different FSIDs).


eric



Should I file this as a bug, or should I just not do that? :-)

Ko,
 
 


Re: [zfs-discuss] zfs exported a live filesystem

2006-12-11 Thread Richard Elling

Jim Hranicky wrote:

By mistake, I just exported my test filesystem while it was up
and being served via NFS, causing my tar over NFS to start
throwing stale file handle errors. 


Should I file this as a bug, or should I just not do that? :-)


Don't do that.  The same should happen if you umount a shared UFS
file system (or any other file system type).
 -- richard


Re: [zfs-discuss] zpool mirror

2006-12-11 Thread Matthew Ahrens

Gino Ruopolo wrote:

Hi All,

we have some ZFS pools in production with more than 100 filesystems and more
than 1000 snapshots on them.  Right now we do backups with zfs send/receive
and some scripting, but I'm looking for a way to mirror each zpool
to another one for backup purposes (so including all snapshots!).  Is
that possible?


Not right now (without a bunch of shell-scripting).  I'm working on 
being able to send a whole tree of filesystems & their snapshots. 
Would that do what you want?
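
In the meantime, a rough sketch of the shell-scripting approach for a single
filesystem (dataset names are placeholders, there is no error handling, and
it assumes the source has no child filesystems and the destination doesn't
exist yet):

#!/bin/sh
# Send every snapshot of tank/fs to backup/fs: a full stream for the
# first snapshot, incrementals for the rest, in creation order.
SRC=tank/fs
DST=backup/fs
prev=""
for snap in `zfs list -H -o name -t snapshot -s creation -r $SRC`; do
        if [ -z "$prev" ]; then
                zfs send "$snap" | zfs receive "$DST"
        else
                zfs send -i "$prev" "$snap" | zfs receive "$DST"
        fi
        prev="$snap"
done

Repeating that per filesystem is exactly the kind of bookkeeping the built-in
support would remove.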


--matt


[zfs-discuss] Uber block corruption?

2006-12-11 Thread Ross Hosman
A while back we had a Sun engineer come to our office and talk about the 
benefits of ZFS.  I asked him the question "Can the uber block become corrupt, 
and what happens if it does?", to which he did not have the answer, but he swore 
to me that he would get it to me.  I still haven't gotten that answer and was 
wondering if someone here could enlighten me?
 
 


Re: [zfs-discuss] Uber block corruption?

2006-12-11 Thread Jeff Victor

IANA ZFS guru, but I have read explanations like this:

When ZFS reads in the uberblock, it computes the uberblock's checksum and compares 
it against the stored checksum for that block.  If they don't match, it uses 
another copy of the uberblock.


Ross Hosman wrote:

A while back we had a Sun engineer come to our office and talk about the
benefits of ZFS. I asked him the question Can the uber block become corrupt
and what happens if it does?, to which he did not have the answer but swore
to me that he would get it to me. I still haven't gotten that answer and was
wondering if someone here could enlighten me?




--
--
Jeff VICTOR  Sun Microsystemsjeff.victor @ sun.com
OS AmbassadorSr. Technical Specialist
Solaris 10 Zones FAQ:http://www.opensolaris.org/os/community/zones/faq
--


Re: [zfs-discuss] Uber block corruption?

2006-12-11 Thread Darren Dunham
 A while back we had a Sun engineer come to our office and talk about
 the benefits of ZFS. I asked him the question Can the uber block
 become corrupt and what happens if it does?, to which he did not
 have the answer but swore to me that he would get it to me. I still
 haven't gotten that answer and was wondering if someone here could
 enlighten me?

Any data can become corrupt through a variety of processes.

To reduce the chance of it affecting the integrity of the filesystem,
there are multiple copies of the UB written, each with a checksum and a
generation number.  When starting up a pool, the oldest generation copy
that checks properly will be used.  If the import can't find any valid
UB, then it's not going to have access to any data.  Think of a UFS
filesystem where all copies of the superblock are corrupt.

So 'a' UB can become corrupt, but it is unlikely that 'all' UBs will
become corrupt through something that doesn't also make all the data
corrupt or inaccessible.
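
If you're curious, zdb can print the uberblock a pool is currently using
(the pool name below is a placeholder, and the exact output varies by build):

# zdb -u tank

which shows the active uberblock's txg, guid_sum, and timestamp.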

-- 
Darren Dunham   [EMAIL PROTECTED]
Senior Technical Consultant TAOShttp://www.taos.com/
Got some Dr Pepper?   San Francisco, CA bay area
  This line left intentionally blank to confuse you. 


Re[2]: [zfs-discuss] ZFS related kernel panic

2006-12-11 Thread Robert Milkowski
Hello Richard,

Tuesday, December 5, 2006, 7:01:17 AM, you wrote:

RE Dale Ghent wrote:

 Similar to UFS's onerror mount option, I take it?

RE Actually, it would be interesting to see how many customers change the
RE onerror setting.  We have some data, just need more days in the hour.

Sometimes we do.

-- 
Best regards,
 Robertmailto:[EMAIL PROTECTED]
   http://milek.blogspot.com



Re[2]: [nfs-discuss] Re: [zfs-discuss] A Plea for Help: Thumper/ZFS/NFS/B43

2006-12-11 Thread Robert Milkowski
Hello Ben,

Monday, December 11, 2006, 9:34:18 PM, you wrote:

BR Robert Milkowski wrote:
 Hello eric,

 Saturday, December 9, 2006, 7:07:49 PM, you wrote:

 ek Jim Mauro wrote:
   
 Could be NFS synchronous semantics on file create (followed by 
 repeated flushing of the write cache).  What kind of storage are you 
 using (feel free to send privately if you need to) - is it a thumper? 
 
 It's not clear why NFS-enforced synchronous semantics would induce 
 different behavior than the same
 load to a local ZFS.
   

 ek Actually i forgot he had 'zil_disable' turned on, so it won't matter in
 ek this case.


 Ben, are you sure zil_disable was set to 1 BEFORE pool was imported?
   

BR Yes, absolutely.  Set var in /etc/system, reboot, system come up. That
BR happened almost 2 months ago, long before this lock insanity problem 
BR popped up.

BR To be clear, the ZIL issue was a problem for creation of a handful of 
BR files of any size.  Untar'ing a file was a massive performance drain.
BR This issue, on the other hand, deals with thousands of little files
BR being created all the time (IMAP Locks).  These are separate issues from
BR my point of view.  With ZIL slowness NFS performance was just slow but
BR we didn't see massive CPU usage, with this issue on the other hand we 
BR were seeing waves in 10 second-ish cycles where the run queue would go
BR sky high with 0 idle.  Please see the earlier mails for examples of the
BR symptoms.

Ok.

And just another question: is the NFS file system mounted with the noac
option on the IMAP server, and is the application doing chdir() into the NFS
directories?  I'm not sure if it's fixed; if not, your NFS client will
generate a lot of small traffic to the NFS server on every chdir(),
starving it of CPU.
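
A quick way to check the client-side mount options (noac, actimeo, protocol
version, and so on) is nfsstat on the IMAP server; the path below is a
placeholder:

# nfsstat -m /var/spool/imap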


-- 
Best regards,
 Robertmailto:[EMAIL PROTECTED]
   http://milek.blogspot.com



Re[2]: [zfs-discuss] Uber block corruption?

2006-12-11 Thread Robert Milkowski
Hello Darren,

Tuesday, December 12, 2006, 2:10:30 AM, you wrote:

 A while back we had a Sun engineer come to our office and talk about
 the benefits of ZFS. I asked him the question Can the uber block
  become corrupt and what happens if it does?, to which he did not
 have the answer but swore to me that he would get it to me. I still
 haven't gotten that answer and was wondering if someone here could
 enlighten me?

DD Any data can become corrupt through a variety of processes.

DD To reduce the chance of it affecting the integrity of the filesystem,
DD there are multiple copies of the UB written, each with a checksum and a
DD generation number.  When starting up a pool, the oldest generation copy
DD that checks properly will be used.  If the import can't find any valid
DD UB, then it's not going to have access to any data.  Think of a UFS
DD filesystem where all copies of the superblock are corrupt.

Actually the latest UB, not the oldest.

-- 
Best regards,
 Robertmailto:[EMAIL PROTECTED]
   http://milek.blogspot.com



Re: Re[2]: [zfs-discuss] Uber block corruption?

2006-12-11 Thread Darren Dunham
 DD To reduce the chance of it affecting the integrity of the filesystem,
 DD there are multiple copies of the UB written, each with a checksum and a
 DD generation number.  When starting up a pool, the oldest generation copy
 DD that checks properly will be used.  If the import can't find any valid
 DD UB, then it's not going to have access to any data.  Think of a UFS
 DD filesystem where all copies of the superblock are corrupt.
 
 Actually the latest UB, not the oldest.

My *other* oldest...  yeah.

-- 
Darren Dunham   [EMAIL PROTECTED]
Senior Technical Consultant TAOShttp://www.taos.com/
Got some Dr Pepper?   San Francisco, CA bay area
  This line left intentionally blank to confuse you. 


Re: [nfs-discuss] Re: [zfs-discuss] A Plea for Help: Thumper/ZFS/NFS/B43

2006-12-11 Thread Richard Elling

BR Yes, absolutely.  Set var in /etc/system, reboot, system come up. That
BR happened almost 2 months ago, long before this lock insanity problem
BR popped up.

For the archives, a high level of lock activity can always be a problem.
The worst cases I've experienced were with record locking over NFS.
The worst offenders were programs written with a local file system in
mind, especially PC-based applications.  This has become one of the
first things I check in such instances, and the network traffic will
be consistently full of lock activity.  Just one of those things you
need to watch out for in a networked world :-)
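
One quick way to confirm it (assuming the stock Solaris lockd port of 4045;
the interface name is a placeholder) is to watch for NLM traffic with snoop:

# snoop -d bge0 port 4045

If that scrolls continuously, lock traffic is a good part of the load.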
 -- richard


[zfs-discuss] Doubt on solaris 10 installation ..

2006-12-11 Thread dudekula mastan
Hi Everybody,
   
  I have some problems with the Solaris 10 installation.
   
  After installing the first CD, I removed the CD from the CD-ROM drive; after that the 
machine keeps rebooting again and again.  It is not asking for the second CD to 
install.
   
  If you have any idea, please tell me.
   
  Thanks & Regards
  Masthan



Re: [zfs-discuss] ZFS related kernel panic

2006-12-11 Thread Richard Elling

Robert Milkowski wrote:

Hello Richard,

Tuesday, December 5, 2006, 7:01:17 AM, you wrote:

RE Dale Ghent wrote:

Similar to UFS's onerror mount option, I take it?


RE Actually, it would be interesting to see how many customers change the
RE onerror setting.  We have some data, just need more days in the hour.

Sometimes we do.


A preliminary look at a sample of the data shows that 1.6% do change this
to something other than the default (panic).  Though this is a statistically
significant sample, it is skewed towards the high-end systems.  A more
detailed study would look at the instances where we had a problem, and the
system did not panic.
 -- richard



Re: [zfs-discuss] zfs exported a live filesystem

2006-12-11 Thread Boyd Adamson


On 12/12/2006, at 8:48 AM, Richard Elling wrote:


Jim Hranicky wrote:

By mistake, I just exported my test filesystem while it was up
and being served via NFS, causing my tar over NFS to start
throwing stale file handle errors.  Should I file this as a bug, or  
should I just not do that? :-)


Don't do that.  The same should happen if you umount a shared UFS
file system (or any other file system types).
 -- richard


Except that it doesn't:

# mount /dev/dsk/c1t1d0s0 /mnt
# share /mnt
# umount /mnt
umount: /mnt busy
# unshare /mnt
# umount /mnt
#