Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-18 Thread Carsten Aulbert
Hi all, on Thursday 18 March 2010 13:54:52 Joerg Schilling wrote: > If you have no technical issues to discuss, please stop insulting > people/products. > > We are on OpenSolaris and we don't like this kind of discussion on the > mailing lists. Please act collaboratively. > May I suggest this to

Re: [zfs-discuss] ZFS on a 11TB HW RAID-5 controller

2010-03-24 Thread Carsten Aulbert
Hi, on Wednesday 24 March 2010 17:01:31 Dusan Radovanovic wrote: > connected to a P212 controller in RAID-5. Could someone direct me or suggest > what I am doing wrong. Any help is greatly appreciated. > I don't know offhand, but I would work around it like this: my suggestion would be to configure th

[zfs-discuss] ZFS-8000-8A: Able to go back to normal without destroying whole pool?

2010-04-11 Thread Carsten Aulbert
Hi all, on Friday night two disks in one raidz2 vdev decided to die within a couple of minutes. Swapping drives and resilvering one at a time worked quite OK; however, now I'm faced with a nasty problem: s07:~# zpool status -v pool: atlashome state: ONLINE status: One or more d
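For the archives, the usual path back to a healthy pool after a ZFS-8000-8A condition, without destroying anything, is roughly the following sketch (pool name taken from the thread; the affected files must be restored from backup or deleted first):

  # list the files hit by permanent errors
  zpool status -v atlashome
  # after restoring or deleting the affected files, reset the error counters
  zpool clear atlashome
  # and verify the pool comes up clean
  zpool scrub atlashome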

[zfs-discuss] Find out which of many FS from a zpool is busy?

2010-04-22 Thread Carsten Aulbert
Hi all, sorry if this is in a FAQ somewhere - if so, I've clearly missed it. Is there an easy or at least straightforward way to determine which of n ZFS filesystems is currently under heavy NFS load? Once upon a time, when one had old-style file systems and exported these as a whole, iostat -x came in handy, howev

Re: [zfs-discuss] Find out which of many FS from a zpool is busy?

2010-04-22 Thread Carsten Aulbert
Hi On Thursday 22 April 2010 16:33:51 Peter Tribble wrote: > fsstat? > > Typically along the lines of > > fsstat /tank/* 1 > Sh**, I knew about fsstat but never ever even tried to run it on many file systems at once. D'oh. *sigh* well, at least a good one for the archives... Thanks a lot!
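For anyone else landing here via the archives, a minimal sketch of the fsstat invocation discussed above (the /atlashome mount point is an assumption carried over from other threads):

  # one-second activity samples for every filesystem mounted under the pool
  fsstat /atlashome/* 1

The per-mount columns make it easy to spot which dataset is soaking up the NFS load.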

[zfs-discuss] Help needed to find out where the problem is

2009-11-26 Thread Carsten Aulbert
Hi all, on an x4500 with a relatively well patched Sol10u8 # uname -a SunOS s13 5.10 Generic_141445-09 i86pc i386 i86pc I've started a scrub after about 2 weeks of operation and have a lot of checksum errors: s13:~# zpool status pool: atlashome

Re: [zfs-discuss] Help needed to find out where the problem is

2009-11-27 Thread Carsten Aulbert
Hi all, On Thursday 26 November 2009 17:38:42 Cindy Swearingen wrote: > Did anything about this configuration change before the checksum errors > occurred? > No, this machine has been running in this configuration for a couple of weeks now > The errors on c1t6d0 are severe enough that your spare kick

Re: [zfs-discuss] Help needed to find out where the problem is

2009-11-27 Thread Carsten Aulbert
Hi Bob, On Friday 27 November 2009 17:19:22 Bob Friesenhahn wrote: > > It is interesting that in addition to being in the same vdev, the > disks encountering serious problems are all target 6. Besides > something at the zfs level, there could be some issue at the > device driver, or underlyi

Re: [zfs-discuss] Help needed to find out where the problem is

2009-11-27 Thread Carsten Aulbert
On Friday 27 November 2009 18:45:36 Carsten Aulbert wrote: I was too fast; now it looks completely different: scrub: resilver completed after 4h3m with 0 errors on Fri Nov 27 18:46:33 2009 [...] s13:~# zpool status pool: atlashome state: DEGRADED status: One

Re: [zfs-discuss] Help needed to find out where the problem is

2009-11-27 Thread Carsten Aulbert
Hi Ross, On Friday 27 November 2009 21:31:52 Ross Walker wrote: > I would plan downtime to physically inspect the cabling. There is not much cabling as the disks are directly connected to a large backplane (Sun Fire X4500) Cheers Carsten

Re: [zfs-discuss] Help needed to find out where the problem is

2009-11-30 Thread Carsten Aulbert
Hi all, after the disk was exchanged, I ran 'zpool clear' and another 'zpool scrub' afterwards... and guess what, now another vdev shows similar problems: s13:~# zpool status pool: atlashome state: DEGRADED

Re: [zfs-discuss] x4500...need input and clarity on striped/mirrored configuration

2010-01-21 Thread Carsten Aulbert
On Thursday 21 January 2010 10:29:16 Edward Ned Harvey wrote: > > zpool create -f testpool mirror c0t0d0 c1t0d0 mirror c4t0d0 c6t0d0 > > mirror c0t1d0 c1t1d0 mirror c4t1d0 c5t1d0 mirror c6t1d0 c7t1d0 > > mirror c0t2d0 c1t2d0 > > mirror c4t2d0 c5t2d0 mirror c6t2d0 c7t2d0 mirror c0t
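A cleaned-up sketch of the layout the quoted command is driving at - mirrored pairs that always span two controllers, so a single controller failure never takes out both halves of a mirror (device names are hypothetical):

  zpool create testpool \
    mirror c0t0d0 c1t0d0 \
    mirror c4t0d0 c5t0d0 \
    mirror c6t0d0 c7t0d0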

Re: [zfs-discuss] x4500...need input and clarity on striped/mirrored configuration

2010-01-21 Thread Carsten Aulbert
Hi On Friday 22 January 2010 07:04:06 Brad wrote: > Did you buy the SSDs directly from Sun? I've heard there could possibly be > firmware that's vendor specific for the X25-E. No. So far I've heard that they are not readily available as certification procedures are still underway (apart from

Re: [zfs-discuss] How to grow ZFS on growing pool?

2010-02-02 Thread Carsten Aulbert
Hi Jörg, On Tuesday 02 February 2010 16:40:50 Joerg Schilling wrote: > After that, the zpool did notice that there is more space: > > zpool list > NAME SIZE USED AVAIL CAP HEALTH ALTROOT > test 476M 1,28M 475M 0% ONLINE - > That's the size already after the initial creation

[zfs-discuss] zfs/sol10u8 less stable than in sol10u5?

2010-02-04 Thread Carsten Aulbert
Hi all, it might not be a ZFS issue (and thus on the wrong list), but maybe there's someone here who might be able to give us a good hint: We are operating 13 x4500s and have started to play with non-Sun-blessed SSDs in there. As we were running Solaris 10u5 before and wanted to use them as log devi
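For context, attaching such an SSD as a separate intent log is a one-liner on a pool version that supports slogs (a sketch; pool and device names are assumptions):

  # add the SSD as a dedicated log (slog) device to the existing pool
  zpool add atlashome log c5t4d0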

Re: [zfs-discuss] ZFS development moving behind closed doors

2010-08-16 Thread Carsten Aulbert
On Sunday 15 August 2010 11:56:22 Joerg Moellenkamp wrote: > And by the way: Wasn't there a > comment from Linus Torvalds recently that people should move their > low-quality code into the codebase??? ;) Yeah, that code should be put into the "staging" part of the codebase, so that (more) peo

Re: [zfs-discuss] Suggested RaidZ configuration...

2010-09-06 Thread Carsten Aulbert
Hi, on Monday 06 September 2010 17:53:44 hatish wrote: > I'm setting up a server with 20x1TB disks. Initially I had thought to set up > the disks using 2 RaidZ2 groups of 10 disks. However, I have just read the > Best Practices guide, and it says your group shouldn't have > 9 disks. So > I'm thinking a
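The configuration under discussion, spelled out (device names hypothetical; each raidz2 vdev survives two simultaneous disk failures):

  zpool create tank \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
           c1t5d0 c1t6d0 c1t7d0 c2t0d0 c2t1d0 \
    raidz2 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 \
           c2t7d0 c3t0d0 c3t1d0 c3t2d0 c3t3d0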

Re: [zfs-discuss] resilver that never finishes

2010-09-17 Thread Carsten Aulbert
Hi all, one of our systems just developed something remotely similar: s06:~# zpool status pool: atlashome state: DEGRADED status: One or more devices is currently being resilvered. The pool will continue to function, possibly in a degraded state. action: Wait for the resilver to comple

Re: [zfs-discuss] resilver that never finishes

2010-09-18 Thread Carsten Aulbert
Hi, on Saturday 18 September 2010 10:02:42 Ian Collins wrote: > > I see this all the time on a troublesome Thumper. I believe this > happens because the data in the pool is continuously changing. Ah OK, that may be it; there is one particularly active user on this box right now. Interesting, I've nev

Re: [zfs-discuss] Resilver/scrub times?

2010-12-20 Thread Carsten Aulbert
Hi, on Sunday 19 December 2010 11:12:32 Tobias Lauridsen wrote: > sorry to bring the old one up, but I think it is better than making a new one?? > Is there anyone who has resilver times from a raidz1/2 pool > with 5TB+ data on it? if you just looked into the discussion over the past day

[zfs-discuss] file system under heavy load, how to find out what the cause is?

2011-09-15 Thread Carsten Aulbert
Demand Metadata: 1% 16042 Prefetch Metadata: 1% 15290 - Has anyone any idea what's going on here? Cheers Carsten -- Dr. Carsten Aulbert - Max Planck Institute for Gravitational Physics Callinstras
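The figures quoted above come from an ARC summary; the raw counters behind them can be pulled straight from kstat on Solaris (a minimal sketch):

  # dump the raw ARC statistics and pick out hit/miss/size counters
  kstat -p zfs:0:arcstats | egrep 'hits|misses|size'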

[zfs-discuss] resilvering becoming slower the more recent the OS is?

2011-09-15 Thread Carsten Aulbert
slow. Is there some way to speed it up? Cheers Carsten -- Dr. Carsten Aulbert - Max Planck Institute for Gravitational Physics Callinstrasse 38, 30167 Hannover, Germany Phone/Fax: +49 511 762-17185 / -17193 http://www.top500.org/system/9234 | http://www.top500.org/connfam/6 CaCert Assurer | Get

[zfs-discuss] zpool replace is stuck

2008-10-10 Thread Carsten Aulbert
brief hint! Cheers Carsten -- Dr. Carsten Aulbert - Max Planck Institute for Gravitational Physics Callinstrasse 38, 30167 Hannover, Germany Phone/Fax: +49 511 762-17185 / -17193 http://www.top500.org/system/9234 | http://www.top500.org/connfam/6/list/31

[zfs-discuss] Improving zfs send performance

2008-10-13 Thread Carsten Aulbert
know why * zfs send is so slow and * how I can improve the speed? Thanks a lot for any hint Cheers Carsten [*] we have run quite a few tests with more zpools but were not able to improve the speeds substantially. For this particular bad file system I still need to histogram the file sizes.

Re: [zfs-discuss] Improving zfs send performance

2008-10-13 Thread Carsten Aulbert
Hi, Darren J Moffat wrote: > > What are you using to transfer the data over the network ? > Initially just plain ssh, which was way too slow; now we use mbuffer on both ends and transfer the data over a socket via socat - I know that mbuffer already allows this, but in a few tests socat seemed to b
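A sketch of the pipeline described above, with mbuffer smoothing the stream on both ends and socat carrying it over TCP (dataset names, port, and buffer sizes are assumptions):

  # receiving host
  socat TCP-LISTEN:9090 - | mbuffer -s 128k -m 1G | zfs receive -F atlashome/recv
  # sending host
  zfs send atlashome/data@snap | mbuffer -s 128k -m 1G | socat - TCP:receiver:9090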

Re: [zfs-discuss] Improving zfs send performance

2008-10-13 Thread Carsten Aulbert
Hi Thomas, Thomas Maier-Komor wrote: > > Carsten, > > the summary looks like you are using mbuffer. Can you elaborate on what > options you are passing to mbuffer? Maybe changing the blocksize to be > consistent with the recordsize of the zpool could improve performance. > Is the buffer running

Re: [zfs-discuss] Improving zfs send performance

2008-10-14 Thread Carsten Aulbert
Hi again, Thomas Maier-Komor wrote: > Carsten Aulbert wrote: >> Hi Thomas, > I don't know socat or what benefit it gives you, but have you tried > using mbuffer to send and receive directly (options -I and -O)? I thought we tried that in the past and with socat it seeme

Re: [zfs-discuss] Improving zfs send performance

2008-10-15 Thread Carsten Aulbert
Hi all, Carsten Aulbert wrote: > More later. OK, I'm completely puzzled right now (and sorry for this lengthy email). My first (and currently only) idea was that the size of the files is related to this effect, but that does not seem to be the case: (1) A 185 GB zfs file system was tra

Re: [zfs-discuss] Improving zfs send performance

2008-10-15 Thread Carsten Aulbert
Hi Ross, Ross Smith wrote: > Thanks, that got it working. I'm still only getting 10MB/s, so it hasn't > solved my problem - I've still got a bottleneck somewhere, but mbuffer is a > huge improvement over standard zfs send / receive. It makes such a > difference when you can actually see what's

Re: [zfs-discuss] Improving zfs send performance

2008-10-15 Thread Carsten Aulbert
Hi Richard, Richard Elling wrote: > Since you are reading, it depends on where the data was written. > Remember, ZFS dynamic striping != RAID-0. > I would expect something like this if the pool was expanded at some > point in time. No, the RAID was set up in one go right after jumpstarting the bo

Re: [zfs-discuss] Improving zfs send performance

2008-10-15 Thread Carsten Aulbert
Hi again, Brent Jones wrote: > > Scott, > > Can you tell us the configuration that you're using that is working for you? > Were you using RaidZ or RaidZ2? I'm wondering what the "sweet spot" is > to get a good compromise in vdevs and usable space/performance > Some time ago I made some tests to

Re: [zfs-discuss] Improving zfs send performance

2008-10-16 Thread Carsten Aulbert
Hi Scott, Scott Williamson wrote: > You seem to be using dd for write testing. In my testing I noted that > there was a large difference in write speed between using dd to write > from /dev/zero and using other files. Writing from /dev/zero always > seemed to be fast, reaching the maximum of ~200M

Re: [zfs-discuss] Improving zfs send performance

2008-10-16 Thread Carsten Aulbert
Hi Ross, Ross wrote: > Now though I don't think it's network at all. The end result from that > thread is that we can't see any errors in the network setup, and using > nicstat and NFS I can show that the server is capable of 50-60MB/s over the > gigabit link. Nicstat also shows clearly that b

Re: [zfs-discuss] Improving zfs send performance

2008-10-18 Thread Carsten Aulbert
Hi Miles Nordin wrote: >> "r" == Ross <[EMAIL PROTECTED]> writes: > > r> figures so close to 10MB/s. All three servers are running > r> full duplex gigabit though > > there is one tricky way 100Mbit/s could still bite you, but it's > probably not happening to you. It mostly affe

[zfs-discuss] zfs snapshot stalled?

2008-10-19 Thread Carsten Aulbert
g? I.e. is there a special signal I could send it? Thanks for any hint Carsten -- Dr. Carsten Aulbert - Max Planck Institute for Gravitational Physics Callinstrasse 38, 30167 Hannover, Germany Phone/Fax: +49 511 762-17185 / -17193 http://www.top500.org/system/9234 | http://www.top500.org/connfam

Re: [zfs-discuss] zfs snapshot stalled?

2008-10-20 Thread Carsten Aulbert
Hi again, brief update: the process ended successfully (at least a snapshot was created) after close to 2 hrs. Since the load is still the same as before taking the snapshot I blame other users' processes reading from that array for the long snapshot duration. Carsten Aulbert wrote:

[zfs-discuss] Asymmetric zpool load

2008-12-02 Thread Carsten Aulbert
Hi all, We are running pretty large vdevs since the initial testing showed that our setup was not too far off the optimum. However, under real-world load we see some rather odd behaviour: The system itself is an X4500 with 500 GB drives and right now the system seems to be under heavy load, e

Re: [zfs-discuss] Asymmetric zpool load

2008-12-02 Thread Carsten Aulbert
Hi Miles, Miles Nordin wrote: >>>>>> "ca" == Carsten Aulbert <[EMAIL PROTECTED]> writes: > > ca> (a) Why the first vdev does not get an equal share > ca> of the load > > I don't know, but if you don't add all the vdev

Re: [zfs-discuss] Asymmetric zpool load

2008-12-02 Thread Carsten Aulbert
Bob Friesenhahn wrote: > You may have one or more "slow" disk drives which slow down the whole > vdev due to long wait times. If you can identify those slow disk drives > and replace them, then overall performance is likely to improve. > > The problem is that under severe load, the vdev with the

Re: [zfs-discuss] Asymmetric zpool load

2008-12-03 Thread Carsten Aulbert
Ross wrote: > Aha, found it! It was this thread, also started by Carsten :) > http://www.opensolaris.org/jive/thread.jspa?threadID=78921&tstart=45 Did I? Darn, I need to get a brain upgrade. But yes, there it was mainly focused on zfs send/receive being slow - but maybe these are also linked. W

Re: [zfs-discuss] Asymmetric zpool load

2008-12-03 Thread Carsten Aulbert
Carsten Aulbert wrote: > Put some stress on the system with bonnie and other tools and try to > find slow disks and see if this could be the main problem, but also look > into more vdevs and then possibly move to raidz to somehow compensate > for lost disk space. Since we have 4 cold s

Re: [zfs-discuss] SMART data

2008-12-06 Thread Carsten Aulbert
Hi Joe, Joe S wrote: > How do I get SMART data from my drives? > > I'm running snv_101 on AMD64. > > I have 6x SATA disks. I guess that highly depends on how these are connected. If they are not 'hidden' behind a RAID controller, you might have success with http://smartmontools.sourceforge.net/do
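A minimal sketch of a smartmontools query on Solaris, assuming the disk hangs off a plain SATA/SAS HBA and the SAT pass-through works on that platform (device path hypothetical; as noted later in the thread, SATA support under Solaris was much more limited than under Linux at the time):

  # full SMART report for one directly attached disk
  smartctl -a -d sat /dev/rdsk/c1t0d0s0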

Re: [zfs-discuss] SMART data

2008-12-08 Thread Carsten Aulbert
Hi all, Miles Nordin wrote: >> "rl" == Rob Logan <[EMAIL PROTECTED]> writes: > > rl> the sata framework uses the sd driver so its: > > yes but this is a really tiny and basically useless amount of output > compared to what smartctl gives on Linux with SATA disks, where SATA > disks also

Re: [zfs-discuss] SMART data

2008-12-21 Thread Carsten Aulbert
Mam Ruoc wrote: >> Carsten wrote: >> I will ask my boss about this (since he is the one >> mentioned in the >> copyright line of smartctl ;)), please stay tuned. > > How is this going? I'm very interested too... Not much happening right now, December meetings, holiday season, ... But thanks f

Re: [zfs-discuss] ZFS vs HardWare raid - data integrity?

2008-12-28 Thread Carsten Aulbert
Hi all, Bob Friesenhahn wrote: > My understanding is that ordinary HW raid does not check data > correctness. If the hardware reports failure to successfully read a > block, then a simple algorithm is used to (hopefully) re-create the > lost data based on data from other disks. The difference

Re: [zfs-discuss] ZFS vs HardWare raid - data integrity?

2008-12-28 Thread Carsten Aulbert
Hi Bob, Bob Friesenhahn wrote: >> AFAIK this is not done during normal operation (unless a disk asked >> for a sector cannot deliver that sector). > > ZFS checksum validates all returned data. Are you saying that this fact > is incorrect? > No, sorry - too long in front of a computer today, I gu

Re: [zfs-discuss] ZFS vs HardWare raid - data integrity?

2008-12-30 Thread Carsten Aulbert
Hi Marc, Marc Bevand wrote: > Carsten Aulbert aei.mpg.de> writes: >> In RAID6 you have redundant parity, thus the controller can find out >> if the parity was correct or not. At least I believe that to be true >> for Areca controllers :) > > Are you sure about that ?

Re: [zfs-discuss] ZFS vs HardWare raid - data integrity?

2009-01-01 Thread Carsten Aulbert
Hi Marc (and all the others), Marc Bevand wrote: > So Carsten: Mattias is right, you did not simulate a silent data corruption > error. hdparm --make-bad-sector just introduces a regular media error that > *any* RAID level can detect and fix. OK, I'll need to go back to our tests performed mon

Re: [zfs-discuss] ZFS send fails incremental snapshot

2009-01-04 Thread Carsten Aulbert
Hi Brent, Brent Jones wrote: > I am using 2008.11 with the Timeslider automatic snapshots, and using > it to automatically send snapshots to a remote host every 15 minutes. > Both sides are X4540's, with the remote filesystem mounted read-only > as I read earlier that would cause problems. > The s

Re: [zfs-discuss] 'zfs recv' is very slow

2009-01-06 Thread Carsten Aulbert
Hi, Brent Jones wrote: > > Using mbuffer can speed it up dramatically, but this seems like a hack > without addressing a real problem with zfs send/recv. > Trying to send any meaningful sized snapshots from say an X4540 takes > up to 24 hours, for as little as a 300GB change rate. I have not found

Re: [zfs-discuss] hung when import zpool

2009-01-08 Thread Carsten Aulbert
Hi, Qin Ming Hua wrote: > bash-3.00# zpool import mypool > ^C^C > > it hung when I try to re-import the zpool; has anyone seen this before? > How long did you wait? Once a zpool import took 1-2 hours to complete (it was seemingly stuck at a ~30 GB filesystem which it needed to do some work on).

[zfs-discuss] Benchmarking ZFS via NFS

2009-01-08 Thread Carsten Aulbert
Hi all, among many other things I recently restarted benchmarking ZFS over NFS3 performance between an X4500 (host) and Linux clients. I last ran iozone quite a while ago and am still a bit at a loss understanding the results. The automatic mode is pretty OK (and generates nice 3D plots for the people

Re: [zfs-discuss] Benchmarking ZFS via NFS

2009-01-08 Thread Carsten Aulbert
Hi Bob, Bob Friesenhahn wrote: >> Here is the current example - can anyone with deeper knowledge tell me >> if these are reasonable values to start with? > > Everything depends on what you are planning to do with your NFS access. For > example, the default blocksize for zfs is 128K. My example test
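For a throughput-oriented NFS run rather than iozone's full automatic matrix, something along these lines works (path and sizes are assumptions; the 128k record size matches the default zfs recordsize mentioned above):

  # sequential write/rewrite (-i 0) and read/reread (-i 1) only,
  # with a file large enough to defeat client-side caching
  iozone -i 0 -i 1 -r 128k -s 8g -f /mnt/nfs/iozone.tmp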

Re: [zfs-discuss] checksum errors on Sun Fire X4500

2009-01-22 Thread Carsten Aulbert
Hi Jay, Jay Anderson wrote: > I have b105 running on a Sun Fire X4500, and I am constantly seeing checksum > errors reported by zpool status. The errors are showing up over time on every > disk in the pool. In normal operation there might be errors on two or three > disks each day, and someti

[zfs-discuss] Expert hint for replacing 3.5" SATA drive in X4500 with SSD for ZIL

2009-02-02 Thread Carsten Aulbert
Hi all, We would like to replace one of the 3.5 inch SATA drives in our Thumpers with an SSD device (and put the ZIL on this device). We are currently looking into this in a bit more detail and would like to ask for input if people already have experience with single- vs. multi-cell SSDs, read-

Re: [zfs-discuss] Bad sectors arises -> discs differ in size -> trouble?

2009-02-02 Thread Carsten Aulbert
Hi, Eric D. Mudama wrote: > Short of SMART, I am not sure. If SMART isn't supported, someone > should port support for it. > I'm not sure if Sun's hd tool works everywhere or if it is specific to certain machines: hd -e c5t6 Revision: 16 Offline status 130 Selftest status 0 Seconds to colle

Re: [zfs-discuss] Expert hint for replacing 3.5" SATA drive in X4500 with SSD for ZIL

2009-02-02 Thread Carsten Aulbert
Just a brief addendum: Something like this (or a fully DRAM-based device, if available in a 3.5 inch form factor) might also be interesting to test: http://www.platinumhdd.com/ - any thoughts? Cheers Carsten

Re: [zfs-discuss] Introducing zilstat

2009-02-04 Thread Carsten Aulbert
Hi Richard, Richard Elling wrote: > > Yes. I've got a few more columns in mind, too. Does anyone still use > a VT100? :-) Only when using ILOM ;) (apologies to anyone using a 72-char/line MUA; the following lines are longer): Thanks for the great tool, it showed something very interesting

Re: [zfs-discuss] ZFS: unreliable for professional usage?

2009-02-10 Thread Carsten Aulbert
Hi, I've followed this thread a bit and I think there are correct points on all sides of the discussion, but here I see a misconception (at least I think it is one): D. Eckert wrote: > (..) > Dave made a mistake pulling out the drives without exporting them first. > For sure also UFS/XFS/EXT4/

[zfs-discuss] seeking in ZFS when data is compressed

2009-03-16 Thread Carsten Aulbert
Hi all, I was just reading http://blogs.sun.com/dap/entry/zfs_compression and would like to know what people's experience is with enabling compression in ZFS. In principle I don't think it's a bad thing, especially not when the CPUs are fast enough to improve the performance as the hard dr

Re: [zfs-discuss] seeking in ZFS when data is compressed

2009-03-16 Thread Carsten Aulbert
Hi Richard, Richard Elling wrote: > > Files are not compressed in ZFS. Blocks are compressed. Sorry, yes, I was not specific enough. > > If the compression of the blocks cannot gain more than 12.5% space savings, > then the block will not be compressed. If your file contains > compressible p
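For completeness, enabling compression and checking what it actually buys is just (dataset name hypothetical):

  zfs set compression=on tank/data
  # the ratio only reflects data written after compression was enabled
  zfs get compressratio tank/data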

Re: [zfs-discuss] seeking in ZFS when data is compressed

2009-03-16 Thread Carsten Aulbert
Darren, Richard, thanks a lot for the very good answers. Regarding the seeking I was probably misled by the belief that the block size was like an impenetrable block into which as much data as possible is squeezed (like .Z files would be if you first compressed and then cut the data into b

Re: [zfs-discuss] How do I "mirror" zfs rpool, x4500?

2009-03-18 Thread Carsten Aulbert
Hi Tim, Tim wrote: > > How does any of that affect an x4500 with onboard controllers that can't > ever be moved? Well, consider one box being installed from CD (external USB-CD) and another one which is jumpstarted via the network. The results usually are two different boot device names :( Q: I

[zfs-discuss] zpool import: Cannot mount,

2009-06-29 Thread Carsten Aulbert
Hi, I've browsed the archives but there does not seem to be a nice solution to this one (happening on a Solaris 10u5 production machine): zpool export atlashome zpool import atlashome cannot mount '/atlashome/BACKUP': directory is not empty (from old emails I gathered that the output of zfs list

Re: [zfs-discuss] zpool import: Cannot mount,

2009-06-29 Thread Carsten Aulbert
Hi, a small addendum. It seems that all sub-filesystems below /atlashome/BACKUP are already mounted by the time /atlashome/BACKUP itself is being mounted: # zfs get all atlashome/BACKUP|head -15 NAME PROPERTY VALUE SOURCE atlashome/BACKUP type filesystem

Re: [zfs-discuss] zpool import: Cannot mount,

2009-06-29 Thread Carsten Aulbert
Hi Mark J Musante wrote: > > Do a zpool export first, and then check to see what's in /atlashome. My > bet is that the BACKUP directory is still there. If so, do an rmdir on > /atlashome/BACKUP and then try the import again. Sorry, I meant to copy this earlier: s11 console login: root Passwor

Re: [zfs-discuss] zpool import: Cannot mount,

2009-06-29 Thread Carsten Aulbert
Hi Mark, Mark J Musante wrote: > > OK, looks like you're running into CR 6827199. > > There's a workaround for that as well. After the zpool import, manually > zfs umount all the datasets under /atlashome/BACKUP. Once you've done > that, the BACKUP directory will still be there. Manually moun
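The workaround, spelled out as commands (the child dataset name is hypothetical; repeat the unmount for every dataset below BACKUP):

  zpool import atlashome
  zfs unmount atlashome/BACKUP/somehost
  # with the BACKUP directory empty again, mount it first, then the rest
  zfs mount atlashome/BACKUP
  zfs mount -a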

Re: [zfs-discuss] how to discover disks?

2009-07-05 Thread Carsten Aulbert
Hi, Hua-Ying Ling wrote: > How do I discover the disk name to use for zfs commands such as: > c3d0s0? I tried using format command but it only gave me the first 4 > letters: c3d1. Also why do some commands accept only 4-letter disk > names and others require 6 letters? Usually I find cfgadm -a
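Two quick ways to enumerate the disk names without walking through interactive menus (standard Solaris tools):

  cfgadm -al      # list attachment points, including disk targets
  echo | format   # print the numbered disk list and exit immediately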