[zfs-discuss] panic on `zfs export` with UNAVAIL disks

2008-06-04 Thread Tobias Hintze
hi list,

initial situation:

SunOS alusol 5.11 snv_86 i86pc i386 i86pc
SunOS Release 5.11 Version snv_86 64-bit

3 USB HDDs on 1 USB hub:

zpool status:

 state: ONLINE
NAME   STATE READ WRITE CKSUM
usbpoolONLINE   0 0 0
  mirror   ONLINE   0 0 0
c7t0d0p0   ONLINE   0 0 0
c8t0d0p0   ONLINE   0 0 0
  c9t0d0p0 ONLINE   0 0 0

i shut the machine down and replugged the USB HDDs to not use the USB
hub but to connect directly to the machine.

after booting the machine i have the following zpool status:

 state: DEGRADED
NAME  STATE READ WRITE CKSUM
usbpool   DEGRADED 0 0 0
  mirror  UNAVAIL  0 0 0  insufficient replicas
c8t0d0p0  UNAVAIL  0 0 0  cannot open
c7t0d0p0  UNAVAIL  0 0 0  cannot open
  c10t0d0p0   ONLINE   0 0 0

i should probably have done a zfs export before the replugging, but i
expected that zfs would somehow be able to find the disks.


to fix the problem i tried to `zfs export` from this status.
this resulted in the kernel panic:
==
Jun  4 11:31:47 alusol unix: [ID 836849 kern.notice]
Jun  4 11:31:47 alusol ^Mpanic[cpu0]/thread=ff0004b1ac80:
Jun  4 11:31:47 alusol genunix: [ID 603766 kern.notice] assertion failed: 
vdev_config_sync(rvd->vdev_child, rvd->vdev_children, txg) == 0 (0x5 == 0x0), 
file: ../../common/fs/zfs/spa.c, line: 4095
Jun  4 11:31:47 alusol unix: [ID 10 kern.notice]
Jun  4 11:31:47 alusol genunix: [ID 655072 kern.notice] ff0004b1ab30 
genunix:assfail3+b9 ()
Jun  4 11:31:47 alusol genunix: [ID 655072 kern.notice] ff0004b1abd0 
zfs:spa_sync+5d2 ()
Jun  4 11:31:47 alusol genunix: [ID 655072 kern.notice] ff0004b1ac60 
zfs:txg_sync_thread+19a ()
Jun  4 11:31:47 alusol genunix: [ID 655072 kern.notice] ff0004b1ac70 
unix:thread_start+8 ()
==

after being back up i could do `zfs import` and have the pool back:

 state: ONLINE
NAME   STATE READ WRITE CKSUM
usbpoolONLINE   0 0 0
  mirror   ONLINE   0 0 0
c12t0d0p0  ONLINE   0 0 0
c13t0d0p0  ONLINE   0 0 0
  c10t0d0p0ONLINE   0 0 0
errors: 4 data errors, use '-v' for a list

with the 4 errors:

errors: Permanent errors have been detected in the following files:
metadata:0x3
metadata:0xa
metadata:0x13
metadata:0x14



so the questions:

1) was there something else i could have done instead of the
   `zfs export` + panic + `zfs import` from that situation
   (after replugging and booting)?

2) how do i get rid of the metadata errors?


thanks,
-- 
Tobias Hintze // threllis GmbH
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2008-06-04 Thread stefano audenino
 Hi, I know there is no single-command way to shrink a
 zpool (say evacuate the data from a disk and then
 remove the disk from a pool), but is there a logical
 way? I.e. mirror the pool to a smaller pool and then
 split the mirror?  In this case I'm not talking about
 disk size (moving from 4 X 72GB disks to 4 X 36GB
 disks) but rather moving from 4 X 73GB disks to 3 X
 73GB disks assuming the pool would fit on 3X. I
 haven't found a way but thought I might be missing
 something. Thanks.

Hi.
You might try the following:

1) Create the new pool ( your 3 X 73GB disks pool ).

2) Create a snapshot of the main filesystem on your 4 disks pool:
zfs snapshot -r [EMAIL PROTECTED]

3) Copy the snapshot to the new 3 disks pool:
zfs send [EMAIL PROTECTED] | zfs recv newpool/tmp
cd newpool/tmp
mv * ../

4) Replace the old pool with the new one:
zpool destroy poolname ( do a backup before this )
zpool import newpool poolname

This worked for me, but test it before using it on production pools.
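Put together as one hedged sketch (pool names, disk names and the snapshot
name below are placeholders, and the destroy step is destructive, so try it
on scratch pools first):

zpool create newpool c3t0d0 c4t0d0 c5t0d0   # the 3 x 73GB target pool
zfs snapshot -r oldpool@migrate
zfs send oldpool@migrate | zfs recv newpool/tmp
cd /newpool/tmp && mv * ..                  # hoist the data out of the staging filesystem
zpool destroy oldpool                       # only after a verified backup
zpool export newpool
zpool import newpool oldpool                # re-import under the old name

Note that a plain send of the top-level snapshot does not carry descendant
filesystems; if your build has zfs send -R you can send the whole hierarchy
in one step and skip the mv.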
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] new install - when is zfs root offered? (snv_90)

2008-06-04 Thread Richard L. Hamilton
 On Tue, Jun 03, 2008 at 05:56:44PM -0700, Richard L.
 Hamilton wrote:
  How about SPARC - can it do zfs install+root yet,
 or if not, when?
  Just got a couple of nice 1TB SAS drives, and I
 think I'd prefer to
  have a mirrored pool where zfs owns the entire
 drives, if possible.
  (I'd also eventually like to have multiple bootable
 zfs filesystems in
  that pool, corresponding to multiple versions.)
 
 Are they just under 1TB?  I don't believe there's any
 boot support in
 Solaris for EFI labels, which would be required for
 1TB+.

Don't know about Solaris or the on-disk bootloader (I would think they
ought to have that eventually if not already), but since it's been awhile
since I've seen a new firmware update for the SB2K, I doubt the firmware
could handle EFI labels.

But format is perfectly happy putting either Sun or EFI labels
on these drives, so that shouldn't be a problem.   SCSI read capacity
shows 1953525168 (512-byte) sectors, which multiplied out is
1,000,204,886,016 bytes; more than 10^12 (1TB), but less than 2^40 (1TiB).
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] new install - when is zfs root offered? (snv_90)

2008-06-04 Thread Richard L. Hamilton
P.S. the ST31000640SS drives, together with the LSI SAS 3800x
controller (in a 64-bit 66MHz slot) gave me, using dd with
a block size of either 1024k or 16384k (1MB or 16MB) and a count
of 1024, a sustained read rate that worked out to a shade over 119MB/s,
even better than the nominal sustained transfer rate of 116MB/s documented
for the drives.  Even at a miserly 7200 RPM, that was better than 2 1/2 times
faster than the internal 10,000 RPM 73GB FC/AL (2Gb/s) drives, which impressed
the heck out of me.
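Roughly what that measurement looks like (the device name is a placeholder;
reading the raw device keeps the filesystem cache out of the picture):

ptime dd if=/dev/rdsk/c1t0d0s2 of=/dev/null bs=16384k count=1024
# 1024 x 16MB = 16GB read; divide by the reported real time for MB/s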
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Hardware Check, OS X Compatibility, NEWBIE!!

2008-06-04 Thread Darryl
Is there a consensus that seagate barracuda drives are worthwhile, stable, 
etc...?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] panic on `zfs export` with UNAVAIL disks

2008-06-04 Thread Tobias Hintze
On Wed, Jun 04, 2008 at 02:12:10PM +0200, Tobias Hintze wrote:
[...]
 
 2) how do i get rid of the metadata errors?
 

scrubbing fixed the metadata errors.
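i.e., roughly (pool name taken from the status output above):

zpool scrub usbpool        # re-reads every block and repairs what it can
zpool status -v usbpool    # re-check the error list once the scrub completes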

th
--

 
 thanks,
 -- 
 Tobias Hintze // threllis GmbH

-- 
Tobias Hintze // threllis GmbH
http://threllis.de/impressum/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Hardware Check, OS X Compatibility, NEWBIE!!

2008-06-04 Thread Luke Scharf
Darryl wrote:
 Is there a consensus that seagate barracuda drives are worthwhile, stable, 
 etc...?
   

All drives suck.  Use RAID.  :-)

-Luke

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Hardware Check, OS X Compatibility, NEWBIE!!

2008-06-04 Thread Darryl
good thing i asked about the seagates...  i always thought they were highly 
regarded.  I know that everyone has their own opinions, but this is still quite 
informative.  I've always been partial to seagate and WD, for no other reason 
than that's all i've had :)
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Can't rm file when No space left on device...

2008-06-04 Thread Brad Diggs
Hello,

A customer recently brought to my attention that ZFS can get
into a situation where the filesystem is full but no files 
can be removed.  The workaround is to remove a snapshot and
then you should have enough free space to remove a file.  
Here is a sample series of commands to reproduce the 
problem.

# mkfile 1g /tmp/disk.raw
# zpool create -f zFullPool /tmp/disk.raw
# sz=`df -k /zFullPool | awk '{ print $2 }' | tail -1`
# mkfile $((${sz}-1024))k /zFullPool/f1
# zfs snapshot [EMAIL PROTECTED]
# sz=`df -k /zFullPool | awk '{ print $2 }' | tail -1`
# mkfile ${sz}k /zFullPool/f2
/zFullPool/f2: initialized 401408 of 1031798784 bytes: No space left on
device
# df -k /zFullPool
Filesystemkbytesused   avail capacity  Mounted on
zFullPool1007659 1007403   0   100%/zFullPool
# rm -f /zFullPool/f1
# ls -al /zFullPool
total 2014797
drwxr-xr-x   2 root sys4 Jun  4 12:15 .
drwxr-xr-x  31 root root   18432 Jun  4 12:14 ..
-rw--T   1 root root 1030750208 Jun  4 12:15 f1
-rw---   1 root root 1031798784 Jun  4 12:15 f2
# rm -f /zFullPool/f2
# ls -al /zFullPool
total 2014797
drwxr-xr-x   2 root sys4 Jun  4 12:15 .
drwxr-xr-x  31 root root   18432 Jun  4 12:14 ..
-rw--T   1 root root 1030750208 Jun  4 12:15 f1
-rw---   1 root root 1031798784 Jun  4 12:15 f2

At this point, the only way in which I can free up sufficient
space to remove either file is to first remove the snapshot.

# zfs destroy [EMAIL PROTECTED]
# rm -f /zFullPool/f1
# ls -al /zFullPool
total 1332
drwxr-xr-x   2 root sys3 Jun  4 12:17 .
drwxr-xr-x  31 root root   18432 Jun  4 12:14 ..
-rw---   1 root root 1031798784 Jun  4 12:15 f2

Is there an existing bug on this that is going to address
enabling the removal of a file without the pre-requisite 
removal of a snapshot?

Thanks in advance,
Brad
-- 
-
  _/_/_/  _/_/  _/ _/   Brad Diggs
 _/  _/_/  _/_/   _/Communications Area Market
_/_/_/  _/_/  _/  _/ _/ Senior Directory Architect
   _/  _/_/  _/   _/_/
  _/_/_/   _/_/_/   _/ _/   Office:  972-992-0002
E-Mail:  [EMAIL PROTECTED]
 M  I  C  R  O  S  Y  S  T  E  M  S

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] disk names?

2008-06-04 Thread Benjamin Ellison
I dropped logicsupply a note asking if they had any specifics on the backplane 
used in my case.  My guess is that it just isn't compatible with opensolaris -- 
I have disks in the bays, but they show up empty in the OS.  If I connect one 
of the drives directly to the mobo it shows up fine.

Chances are fairly good I'll just attempt to remove the backplane, get some 
molex-to-sata power adapters, and directly cable the drives to the motherboard 
until such a time as the backplane is supported.   I will also need to figure 
out a good non-ugly and non-permanent way to keep the nifty hot-swap bays from 
being removed accidentally.

--Ben
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Hardware Check, OS X Compatibility, NEWBIE!!

2008-06-04 Thread Keith Bierman

On Jun 4, 2008, at 10:47 AM, Bill Sommerfeld wrote:


 On Wed, 2008-06-04 at 11:52 -0400, Bill McGonigle wrote:
 but we got one server in
 where 4 of the 8 drives failed in the first two months, at which
 point we called Seagate and they were happy to swap out all 8 drives
 for us.   I suspect a bad lot, and even found some other complaints
 about the lot on Google.

 Problems like that seem to pop up with disturbing regularity, and have
 done so for decades.  (Anyone else remember the DEC RA81 glue  
 problem in
 around 1985-1986?)

 I've thought for some time that a good way to defend against the bad
 lot problem (if you can manage it) is to buy half of your disks from
 each of two manufacturers and then set up mirror pairs containing one
 disk of each model...




I think it's a bit more common to arrange to have same vendor parts,  
but different lots. There are still lots of potential correlations:  
shared chassis, shared power, etc. mean shared vibration, shared  
environment, and shared human error sources ;



-- 
Keith H. Bierman   [EMAIL PROTECTED]  | AIM kbiermank
5430 Nassau Circle East  |
Cherry Hills Village, CO 80113   | 303-997-2749
speaking for myself* Copyright 2008




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] any 64-bit mini-itx successes

2008-06-04 Thread Benjamin Ellison
Update:
The BCME drivers  solution from this thread worked, I think:
http://opensolaris.org/jive/thread.jspa?messageID=195224
At least, it says the broadcom is up -- I just don't have it in a location 
where I can get an IP and outside access to test it, yet.

I had some intermittent video issues, which Logicsupply.com tech support 
suggested were caused by RAM.  I ran a memory test from a bootable disc ( 
http://www.memtest.org/ ) which showed the chips as good, so I relocated the 
RAM chip to a different slot.  Problem fixed :)

The round cables from firefold.com came in, and work *much* better in the case, 
especially once I trimmed off the rubber boots.  Of course it was only after I 
went through this headache that I found logicsupply's review of the case and 
how they installed the mobo.  

My current problem is attempting to get opensolaris to recognize my storage 
drives, some of which can be followed in another thread: 
http://www.opensolaris.org/jive/thread.jspa?threadID=62493&tstart=0

--Ben
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Util to remove (s)log from remaining vdevs?

2008-06-04 Thread Jeb Campbell
After having to reset my i-ram card, I can no longer import my raidz pool on 
2008.05.

Also trying to import the pool using the zpool.cache causes a kernel panic on 
2008.05 and B89 (I'm waiting to try B90 when released).

So I have 2 options:
* Wait for a release that can import after log failure... (no time frame ATM)
* Use a util that removes the log vdev info from the remaining vdevs.

Would such a util be the easiest/fastest way to get going?  If so, I'd be 
willing to pay up to $500 for a solution in the next 2 weeks.  I know that's 
not much, but I'm just a home user.

Any other recommendations on getting this going would be appreciated.

(Yes the super critical stuff was backed up elsewhere, but this was the 
backup/storage for everything else...)
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [caiman-discuss] disk names?

2008-06-04 Thread Richard Elling
Bob Friesenhahn wrote:
 On Tue, 3 Jun 2008, Dave Miner wrote:
   
 Putting into the zpool command would feel odd to me, but I agree that
 there may be a useful utility here.
 

 There is value to putting this functionality in zpool for the same 
 reason that it was useful to put 'iostat' and other duplicate 
 functionality in zpool.  For example, zpool can skip disks which are 
 already currently in use, or it can recommend whole disks (rather than 
 partitions) if none of the logical disk partitions are currently in 
 use.
   

Nit: zpool iostat provides a different point of view on I/O than
iostat.  iostat cannot do what zpool iostat does, so it is not really
a case of duplicate functionality.  VxVM has a similar tool, vxstat.

Today, zpool uses libdiskmgt to determine if the devices are in use
or have existing file systems.  If found, it will fail, unless the -f
(force) flag is used.

I have done some work on making intelligent decisions about
using devices.  It is a non-trivial task for the general case. 
Consider that an X4500 has 281,474,976,710,655 (2^48 - 1) possible
permutations for RAID-0 using only a single slice per disk;
it quickly becomes an exercise in compromise.  At present,
I'm adding this to RAIDoptimizer, but there I have the knowledge
of the physical layout of the system(s), which is difficult to
ascertain (guess) for the generic hardware case.  It may be
that in the long term we could add some sort of smarts to
zpool, but I'm not (currently) optimistic.

 The zfs commands are currently at least an order of magnitude easier 
 to comprehend and use than the legacy commands related to storage 
 devices.  It would be nice if the zfs commands will continue to 
 simplify what is now quite obtuse.
   

No doubt.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Hardware Check, OS X Compatibility, NEWBIE!!

2008-06-04 Thread ddooniejunk
You may all have 'shared human errors', but i dont have that issue  
whatsoever :)  I find it quite interesting the issues that you guys  
bring up with these drives.  All manufactured goods suffer the same  
pitfalls of production.  Would you say that WD and Seagate are the  
front runners or just the best marketed?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [caiman-discuss] disk names?

2008-06-04 Thread Jeff Bonwick
I agree with that.  format(1M) and cfgadm(1M) are, ah, not the most
user-friendly tools.  It would be really nice to have 'zpool disks'
go out and taste all the drives to see which ones are available.

We already have most of the code to do it.  'zpool import' already
contains the taste-all-disks-and-slices logic, and 'zpool add'
already contains the logic to determine whether a device is in use.
Looks like all we're really missing is a call to printf()...

Is there an RFE for this?  If not, I'll file one.  I like the idea.

Jeff

On Wed, Jun 04, 2008 at 10:55:18AM -0500, Bob Friesenhahn wrote:
 On Tue, 3 Jun 2008, Dave Miner wrote:
 
  Putting into the zpool command would feel odd to me, but I agree that
  there may be a useful utility here.
 
 There is value to putting this functionality in zpool for the same 
 reason that it was useful to put 'iostat' and other duplicate 
 functionality in zpool.  For example, zpool can skip disks which are 
 already currently in use, or it can recommend whole disks (rather than 
 partitions) if none of the logical disk partitions are currently in 
 use.
 
 The zfs commands are currently at least an order of magnitude easier 
 to comprehend and use than the legacy commands related to storage 
 devices.  It would be nice if the zfs commands will continue to 
 simplify what is now quite obtuse.
 
 Bob
 ==
 Bob Friesenhahn
 [EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [caiman-discuss] disk names?

2008-06-04 Thread Dave Miner
Jeff Bonwick wrote:
 I agree with that.  format(1M) and cfgadm(1M) are, ah, not the most
 user-friendly tools.  It would be really nice to have 'zpool disks'
 go out and taste all the drives to see which ones are available.
 
 We already have most of the code to do it.  'zpool import' already
 contains the taste-all-disks-and-slices logic, and 'zpool add'
 already contains the logic to determine whether a device is in use.
 Looks like all we're really missing is a call to printf()...
 
 Is there an RFE for this?  If not, I'll file one.  I like the idea.
 

No argument that it's useful, but I also believe that making format or
other tools friendlier is useful as well, since the problem here applies
to file systems that are not ZFS, too.  If I'm not using ZFS, it seems
unlike that my conceptual model is to go use zpool to figure out what's
on my disks.  So, sure, add something to zpool for that realm, but I 
think we should do something to format or another tool to address the 
non-ZFS case.

Dave

 Jeff
 
 On Wed, Jun 04, 2008 at 10:55:18AM -0500, Bob Friesenhahn wrote:
 On Tue, 3 Jun 2008, Dave Miner wrote:
 Putting into the zpool command would feel odd to me, but I agree that
 there may be a useful utility here.
 There is value to putting this functionality in zpool for the same 
 reason that it was useful to put 'iostat' and other duplicate 
 functionality in zpool.  For example, zpool can skip disks which are 
 already currently in use, or it can recommend whole disks (rather than 
 partitions) if none of the logical disk partitions are currently in 
 use.

 The zfs commands are currently at least an order of magnitude easier 
 to comprehend and use than the legacy commands related to storage 
 devices.  It would be nice if the zfs commands will continue to 
 simplify what is now quite obtuse.

 Bob
 ==
 Bob Friesenhahn
 [EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Util to remove (s)log from remaining vdevs?

2008-06-04 Thread Jeb Campbell
This link worked for me:
http://www.opensolaris.org/jive/thread.jspa?messageID=190763

Thanks for the info, and that did have some good tips!

I think I might try hacking the label on the log device first.  I'll post if I 
get something working.

If that works, it might be best to make a bit for bit copy of the log device 
and gzip it.  It should be small assuming you are using a ram based device (and 
have zeroed it before making the label).
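A minimal sketch of that backup step (the device name is a placeholder for
wherever the i-RAM shows up):

dd if=/dev/rdsk/c5t0d0p0 bs=1024k | gzip -c > slog-backup.img.gz
# and to put it back later:
# gzip -dc slog-backup.img.gz | dd of=/dev/rdsk/c5t0d0p0 bs=1024k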

Now I just need some time to start digging into the on disk format...
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [caiman-discuss] disk names?

2008-06-04 Thread Bob Friesenhahn
On Wed, 4 Jun 2008, Jeff Bonwick wrote:

 I agree with that.  format(1M) and cfgadm(1M) are, ah, not the most
 user-friendly tools.  It would be really nice to have 'zpool disks'
 go out and taste all the drives to see which ones are available.

 We already have most of the code to do it.  'zpool import' already
 contains the taste-all-disks-and-slices logic, and 'zpool add'
 already contains the logic to determine whether a device is in use.
 Looks like all we're really missing is a call to printf()...

Make sure that the zpool devices command recommends the configurations 
preferred for zfs first and requires that the user add an extra option 
(e.g. -v) in order to discover less ideal device configurations.  For 
example, ZFS prefers whole disks so if the disk contains partitions 
but none of them are used, then it can simply suggest the whole disk, 
but if some partitions are used it can list the unused partitions 
along with a warning that using partitions is a non-ideal 
configuration.  If the devices are accessed via different 
paths/controllers, then that could be useful info for the user when 
selecting devices.

I appreciate that Richard Elling would like to suggest the best 
combination out of 281,474,976,710,655 permutations, but at some point 
the choice is best left to the educated user. :-)

The first challenge is to find the devices at all.  Optimization is 
for later.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Hardware Check, OS X Compatibility, NEWBIE!!

2008-06-04 Thread MC
 You may all have 'shared human errors', but i dont
 have that issue  
 whatsoever :)  I find it quite interesting the issues
 that you guys  
 bring up with these drives.  All manufactured goods
 suffer the same  
 pitfalls of production.  Would you say that WD and
 Seagate are the  
 front runners or just the best marketed?

It depends on the model more than the make, so to say.  You're better off 
asking about this on a storage forum... but the short answer is it doesn't 
matter which disk you choose unless you have some specific goal in mind, in 
which case you might choose the disk with the best performance in this 
benchmark, or the lowest idle power usage, or... etc.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Hardware Check, OS X Compatibility, NEWBIE!!

2008-06-04 Thread ddooniejunk
Very true, didnt mean to derail the topic, sorry.  Just trying to  
learn as i go!

Thanks again!


On 4-Jun-08, at 3:05 PM, MC wrote:

 You may all have 'shared human errors', but i dont
 have that issue
 whatsoever :)  I find it quite interesting the issues
 that you guys
 bring up with these drives.  All manufactured goods
 suffer the same
 pitfalls of production.  Would you say that WD and
 Seagate are the
 front runners or just the best marketed?

 It depends on the model more than the make, so to say.  You're  
 better off asking about this on a storage forum... but the short  
 answer is it doesn't matter which disk you choose unless you have  
 some specific goal in mind, in which case you might choose the disk  
 with the best performance in this benchmark, or the lowest idle  
 power usage, or... etc.


 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [caiman-discuss] disk names?

2008-06-04 Thread Luke Scharf
MC wrote:
 Putting into the zpool command would feel odd to me, but I agree that
 there may be a useful utility here.
 

 There MAY be a useful utility here?  I know this isn't your fight Dave, but 
 this tipped me and I have to say something :)

 Can we agree that the format command lists the disks it can use because the 
 format command is part of the user-facing disk storage software stack and 
 listing what objects it can act upon is one of the most fundamental features 
 of the user-facing tools of the stack?!  It is for this reason alone that ZFS 
 needs a similar feature.  Because it makes sense.
   

I'd like to suggest a name: lsdisk and lspart to list the disks and also 
the disks/partitions that are available.  (Or maybe lsdisk should just 
list the disks and partitions in an indented list?  Listing the 
partitions is important.  Listing the controllers might not hurt 
anything, either.)

Linux has lspci[0], lsscsi, lsusb, lsof, and a number of other 
ls-tab-tab utilities out-of-the-box[1].  These utilities would be quite 
intuitive for folks who've learned Linux first, and would help people 
transition to Solaris quickly.

When I first learned Solaris (some years ago now), it took me a 
surprisingly long time to get the device naming scheme and the partition 
numbering.  The naming/numbering is quite intuitive (except for that 
part about c0t0d0s2 being the entire device[2]), but I would have felt 
that I understood it quicker if I'd seen a nice listing that matches the 
concept, and also had quick way to find out the name of that disk that I 
just plugged in.  My friends who are new to Solaris seem to have the 
same problem out of the gate.

-Luke

[0] Including lspci and lsusb with Solaris would be a great idea -- 
prtconf and prtdiag are very useful, but lspci is very quick, clear, and 
concise.  IIRC, lspci is included in Nexenta.  I haven't checked for 
these utilities on the new OpenSolaris yet, though, so maybe they're 
there already.

[1] Since Solaris 10 still uses /bin/sh as the root shell, I feel that I 
must explain that this is tab completion.  In bash/zsh/tcsh, hitting tab 
twice searches the $PATH for ls* and displays the results.  I know 
that most-everyone on the list already knows this, but I can't help 
myself!  [ducks!]

[2]  If I'm giving someone a tour of Solaris administration, /dev/sda 
isn't particularly different from /dev/dsk/c0t0d0.  But if I open 
/dev/dsk/c0t0d0s2 with a partitioning tool, repartition, then 
build/mount a filesystem without Something Bad happening, then my 
spectators' heads usually explode.  After that, they don't believe me 
when I tell them that they mostly understand what's going on.  Yes, ZFS 
and the EFI disklabels fix this when you have a system with a ZFS root 
and no UFS disks -- but UFS is still necessary in a lot of 
configuration, so this kind of system-quirk should be made obvious to 
Unix-literate people coming from non-Solaris backgrounds.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] disk names?

2008-06-04 Thread Volker A. Brandt
 I'd like to suggest a name: lsdisk and lspart to list the disks and also
 the disks/partitions that are available.  (Or maybe lsdisk should just
 list the disks and partitions in an indented list?  Listing the
 partitions is important.  Listing the controllers might not hurt
 anything, either.)

Hmmm... the current scheme seems to be subject verb object.
E.g.

   disk list

or

   disk info c1t9d0

   disk info -p c1*


With partitions, I assume you mean slices.  Partitions are outside
of Solaris and managed by fdisk.

 Linux has lspci[0], lsscsi, lsusb, lsof, and a number of other
 ls-tab-tab utilities out-of-the-box[1].  These utilities would be quite
 intuitive for folks who've learned Linux first, and would help people
 transition to Solaris quickly.

Do we really need to bend and warp everything to suit Linux switchers?
(only half :-).

 When I first learned Solaris (some years ago now), it took me a
 surprisingly long time to get the device naming scheme and the partition
 numbering.  The naming/numbering is quite intuitive (except for that
 part about c0t0d0s2 being the entire device[1]), but I would have felt
 that I understood it quicker if I'd seen a nice listing that matches the
 concept, and also had quick way to find out the name of that disk that I
 just plugged in.  My friends who are new to Solaris seem to have the
 same problem out of the gate.

I did not have this experience.  I came from BSD where there were
things like /dev/sd0d (which also exist on Solaris), but the Sun
way was not too strange...

 [0] Including lspci and lsusb with Solaris would be a great idea --

Well, there is scanpci.

 [1] Since Solaris 10 still uses /bin/sh as the root shell, I feel that I
 must explain that this is tab completion.  In bash/zsh/tcsh, hitting tab
 twice searches the $PATH for ls* and displays the results  I know
 that most-everyone on the list already knows this, but I can't help my
 self!  [ducks!]

At the risk of outing myself as a hardcore nit picker, in tcsh
it is Control-D, and only once, not twice. :-)


 [2]  If I'm giving someone a tour of Solaris administration, /dev/sda
 isn't particularly different from /dev/dsk/c0t0d0.  But if I open
 /dev/dsk/c0t0d0s2 with a partitioning tool, repartition, then
 build/mount a filesystem without Something Bad happening, then my
 spectators heads usually explode.  After that, they don't believe me
 when I tell them that they mostly understand what's going on.  Yes, ZFS
 and the EFI disklabels fix this when you have a system with a ZFS root
 and no UFS disks -- but UFS is still necessary in a lot of
 configuration, so this kind of system-quirk should be made obvious to
 Unix-literate people coming from non-Solaris backgrounds.

Maybe it's because I have a Solaris background, but I fail to see
any quirk here...


Best regards -- Volker
-- 

Volker A. Brandt  Consulting and Support for Sun Solaris
Brandt  Brandt Computer GmbH   WWW: http://www.bb-c.de/
Am Wiesenpfad 6, 53340 Meckenheim Email: [EMAIL PROTECTED]
Handelsregister: Amtsgericht Bonn, HRB 10513  Schuhgröße: 45
Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Get your SXCE on ZFS here!

2008-06-04 Thread andrew
With the release of the Nevada build 90 binaries, it is now possible to install 
SXCE directly onto a ZFS root filesystem, and also put ZFS swap onto a ZFS 
filesystem without worrying about having it deadlock. ZFS now also supports 
crash dumps!
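For anyone wiring this up by hand rather than through the installer, the
swap and dump pieces look roughly like this (the rpool/swap and rpool/dump
names and the 2g sizes are assumptions, not necessarily what the installer
creates):

zfs create -V 2g rpool/swap
swap -a /dev/zvol/dsk/rpool/swap       # swap on a ZFS volume
zfs create -V 2g rpool/dump
dumpadm -d /dev/zvol/dsk/rpool/dump    # crash dumps on a ZFS volume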

To install SXCE to a ZFS root, simply use the text-based installer, after 
choosing Solaris Express from the boot menu on the DVD.

DVD download link:

http://www.opensolaris.org/os/downloads/sol_ex_dvd_1/
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS root finally here in SNV90

2008-06-04 Thread Rich Teer
On Wed, 4 Jun 2008, Henrik Johansson wrote:

 Anyone knows what the deal with /export/home is? I though /home was  
 the default home directory in Solaris?

Nope, /export/home has always been the *physical* location for
users' home directories.  They're usually automounted under /home,
though.

-- 
Rich Teer, SCSA, SCNA, SCSECA

CEO,
My Online Home Inventory

URLs: http://www.rite-group.com/rich
  http://www.linkedin.com/in/richteer
  http://www.myonlinehomeinventory.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS root finally here in SNV90

2008-06-04 Thread Henrik Johansson
On Jun 5, 2008, at 12:05 AM, Rich Teer wrote:

 On Wed, 4 Jun 2008, Henrik Johansson wrote:

 Anyone knows what the deal with /export/home is? I though /home was
 the default home directory in Solaris?

 Nope, /export/home has always been the *physical* location for
 users' home directories.  They're usually automounted under /home,
 though.

You are right; it was my own old habit of creating my own physical  
directory under /home for stand-alone machines that got me confused.  
But filesystem(5) says: "/home  Default root of a subtree for user  
directories.", and useradd's base_dir defaults to /home, where it tries  
to create a directory if the -m flag is used.

I know this doesn't work when the automounter is available, but it  
can be disabled or reconfigured. Come to think of it, /export/home  
was created in earlier releases also, with UFS.

It's fun how old things can get one confused in a new context ;)

Regards
Henrik
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Get your SXCE on ZFS here!

2008-06-04 Thread Tim
On Wed, Jun 4, 2008 at 5:01 PM, Kyle McDonald [EMAIL PROTECTED] wrote:

 andrew wrote:
  With the release of the Nevada build 90 binaries, it is now possible to
 install SXCE directly onto a ZFS root filesystem, and also put ZFS swap onto
 a ZFS filesystem without worrying about having it deadlock. ZFS now also
 supports crash dumps!
 
  To install SXCE to a ZFS root, simply use the text-based installer, after
 choosing Solaris Express from the boot menu on the DVD.
 
  DVD download link:
 
  http://www.opensolaris.org/os/downloads/sol_ex_dvd_1/
 
 
 This release also (I believe) supports installing on ZFS through JumpStart.

 Does anyone have a pointer for Docs on what the syntax is for a
 JumpStart profile to configure ZFS root?

  -Kyle

 
  This message posted from opensolaris.org
  ___
  zfs-discuss mailing list
  zfs-discuss@opensolaris.org
  http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Does this mean zfs boot/root on sparc is working as well?  If so... FINALLY
:)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] disk names?

2008-06-04 Thread Luke Scharf
Volker A. Brandt wrote:
 Hmmm... the current scheme seems to be subject verb object.
 E.g.

disk list

That would work fine for me!

It would also be easy enough to put on a rosetta stone type reference 
card.

 [0] Including lspci and lsusb with Solaris would be a great idea --
 

 Well, there is scanpci.
   

I'm embarrassed to admit that scanpci hadn't lodged in my long-term 
memory until now...  But why is it in /usr/X11/bin/, and not in 
/usr/sbin/ like the other utilities that inventory the hardware?

 When I first learned Solaris (some years ago now), it took me a
 surprisingly long time to get the device naming scheme and the partition
 numbering.  The naming/numbering is quite intuitive (except for that
 part about c0t0d0s2 being the entire device[1]), but I would have felt
 that I understood it quicker if I'd seen a nice listing that matches the
 concept, and also had quick way to find out the name of that disk that I
 just plugged in.  My friends who are new to Solaris seem to have the
 same problem out of the gate.
 

 I did not have this experience.  I came from BSD where there were
 things like /dev/sd0d (which also exist on Solaris), but the Sun
 way was not too strange...
   

/dev/c0t0d0 isn't strange.  Merely unfamiliar to a Linux switcher.  
There just wasn't a way to confirm that one was reading this list 
correctly, at the time.  And running format seemed like a dangerous 
kludge, since I didn't want to /modify/ the disks at all.

 [2]  If I'm giving someone a tour of Solaris administration, /dev/sda
 isn't particularly different from /dev/dsk/c0t0d0.  But if I open
 /dev/dsk/c0t0d0s2 with a partitioning tool, repartition, then
 build/mount a filesystem without Something Bad happening, then my
 spectators heads usually explode.  After that, they don't believe me
 when I tell them that they mostly understand what's going on.  Yes, ZFS
 and the EFI disklabels fix this when you have a system with a ZFS root
 and no UFS disks -- but UFS is still necessary in a lot of
 configuration, so this kind of system-quirk should be made obvious to
 Unix-literate people coming from non-Solaris backgrounds.
 

 Maybe it's because I have a Solaris background, but I fail to see
 any quirk here...
   

I can explain why a partition with a Solaris disklabel inside an MSDOS 
partition makes sense (my apologies for sloppy use of the terminology 
earlier), and I can explain why BSD uses a similar technique when 
running on a system where the boot-firmware only understands MSDOS-style 
disklabels.  But the s2 thing is f-ed up, conceptually speaking:

   1. Calling s2 a slice makes it a subset of the device, not the whole
  device.  But it's the whole device.
   2. The number s2 is arbitrary.  If it were s0, then there would at
  least be the beginning of the list.  If it were s3, it would be at
  the end of a 2-bit list, which could be explained historically. 
  If it were s7, it would be at the end of the list.  But, no, it's
  s2.  Why?
   3. None of the grey-haired Solaris gurus that I've talked to have
  ever been able to explain why.
   4.  Can you use s2 when the disk-label is corrupted?  If it's the
  whole disk, maybe you can.  If it's an entry in the disklabel,
  maybe you can't.  I seem to remember that I can write to s2 in
  order to fix a borked label, but that makes no sense, and I don't
   believe it.  Maybe I'm losing my mind.  But, then again, my point
   is that the s2 thing makes me lose my mind.  :-)

Using c0t0d0 to refer to the whole device (which works in some cases, 
such as zpool create) makes a lot of sense.  If it's a slice, it really 
should be a slice. 

I just accept that the s2 thing is the way it is, and Solaris disk 
management isn't difficult or mysterious...  But these kinds of quirks 
really did derail my learning process, and I've seen it derail the 
learning process of others, too.  I realize that changing a fundamental 
part of the system for the sake of aesthetics would cause a lot of chaos 
for little benefit...  So I'm just suggesting that the normal disk tools 
be changed to make this quirk obvious at a glance -- so that the 
uninitiated may learn the system more easily.

-Luke

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] disk names?

2008-06-04 Thread Luke Scharf
Luke Scharf wrote:
3. None of the grey-haired Solaris gurus that I've talked to have
   ever been able to explain why.
   

I do realize that older architectures needed some way to record the disk 
geometry.  But why do it that way?

-Luke
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] disk names?

2008-06-04 Thread Carson Gaspar
Luke Scharf wrote:

1. Calling s2 a slice makes it a subset of the device, not the whole
   device.  But it's the whole device.
2. The number s2 is arbitrary.  If it were s0, then there would at
   least be the beginning of the list.  If it were s3, it would be at
   the end of a 2-bit list, which could be explained historically. 
   If it were s7, it would be at the end of the list.  But, no, it's
   s2.  Why?
3. None of the grey-haired Solaris gurus that I've talked to have
   ever been able to explain why.
4.  Can you use s2 when the disk-label is corrupted?  If it's the
   whole disk, maybe you can.  If it's an entry in the disklabel,
   maybe you can't.  I seem to remember that I can write to s2 in
   order to fix a borked label, but that makes no sense, and I don't
   believe it.  Maybe I'm loosing my mind.  But, then again, my point
   is that the s2 thing makes me loose my mind.  :-)

Slice 2 is historically the whole disk slice (since long before 
Solaris). The extra encapsulation layer on x86 hardware caused by 
ancient crappy BIOSes is unfortunate.

And s2 isn't special in any way except convention. _Any_ slice that 
starts at sector 0 (and ends far enough away) will behave the same. If 
your disk label is broken/missing/whatever, you won't _have_ an s2.

On x86en, you should probably use p0 for whole-disk operations...
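A quick way to see the convention for yourself (the device names below are
examples, not ones from this thread):

prtvtoc /dev/rdsk/c0t0d0s2        # with an SMI label, slice 2 ("backup") spans all cylinders
# fdisk -W - /dev/rdsk/c0t0d0p0   # x86 only: dump the fdisk table for the whole physical disk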

-- 
Carson
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Get your SXCE on ZFS here!

2008-06-04 Thread Cindy Swearingen
Tim,

Start at the zfs boot page, here:

http://www.opensolaris.org/os/community/zfs/boot/

Review the information and follow the links to the docs.

Cindy

- Original Message -
From: Tim [EMAIL PROTECTED]
Date: Wednesday, June 4, 2008 4:29 pm
Subject: Re: [zfs-discuss] Get your SXCE on ZFS here!
To: Kyle McDonald [EMAIL PROTECTED]
Cc: zfs-discuss@opensolaris.org, andrew [EMAIL PROTECTED]

 On Wed, Jun 4, 2008 at 5:01 PM, Kyle McDonald [EMAIL PROTECTED] 
 wrote:
 
  andrew wrote:
   With the release of the Nevada build 90 binaries, it is now 
 possible to
  install SXCE directly onto a ZFS root filesystem, and also put ZFS 
 swap onto
  a ZFS filesystem without worrying about having it deadlock. ZFS now 
 also
  supports crash dumps!
  
   To install SXCE to a ZFS root, simply use the text-based 
 installer, after
  choosing Solaris Express from the boot menu on the DVD.
  
   DVD download link:
  
   http://www.opensolaris.org/os/downloads/sol_ex_dvd_1/
  
  
  This release also (I beleive) supports installing on ZFS through JumpStart.
 
  Does anyone have a pointer for Docs on what the syntax is for a
  JumpStart profile to configure ZFS root?
 
   -Kyle
 
  
   This message posted from opensolaris.org
   ___
   zfs-discuss mailing list
   zfs-discuss@opensolaris.org
   http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  
 
  ___
  zfs-discuss mailing list
  zfs-discuss@opensolaris.org
  http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 
 
 
 Does this mean zfs boot/root on sparc is working as well?  If so... FINALLY
 :)
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS problems with USB Storage devices

2008-06-04 Thread Ricardo M. Correia
On Ter, 2008-06-03 at 23:33 +0100, Paulo Soeiro wrote:
 6)Remove and attached the usb sticks:
 
 zpool status
 pool: myPool
 state: UNAVAIL
 status: One or more devices could not be used because the label is
 missing 
 or invalid. There are insufficient replicas for the pool to continue
 functioning.
 action: Destroy and re-create the pool from a backup source.
 see: http://www.sun.com/msg/ZFS-8000-5E
 scrub: none requested
 config:
 NAME STATE READ WRITE CKSUM
 myPool UNAVAIL 0 0 0 insufficient replicas
 mirror UNAVAIL 0 0 0 insufficient replicas
 c6t0d0p0 FAULTED 0 0 0 corrupted data
 c7t0d0p0 FAULTED 0 0 0 corrupted data


This could be a problem of USB devices getting renumbered (or something
to that effect).
Try doing zpool export myPool and zpool import myPool at this point,
it should work fine and you should be able to get your data back.
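That is:

zpool export myPool    # release the pool and forget the stale device paths
zpool import myPool    # re-scan the devices and pick up their new names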

Cheers,
Ricardo

--

Ricardo Manuel Correia
Lustre Engineering

Sun Microsystems, Inc.
Portugal
Phone +351.214134023 / x58723
Mobile +351.912590825
Email [EMAIL PROTECTED]
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] disk names?

2008-06-04 Thread A Darren Dunham
On Wed, Jun 04, 2008 at 06:28:58PM -0400, Luke Scharf wrote:
2. The number s2 is arbitrary.  If it were s0, then there would at
   least be the beginning of the list.  If it were s3, it would be at
   the end of a 2-bit list, which could be explained historically. 
   If it were s7, it would be at the end of the list.  But, no, it's
   s2.  Why?
3. None of the grey-haired Solaris gurus that I've talked to have
   ever been able to explain why.

This appears to be convention on (some?) BSDs from before Solaris
existed.

Best story I've heard is that it dates from the time when
modifiable (or at least *easily* modifiable) slices didn't exist.  No
hopping into 'format' or using 'fmthard'.  Instead, your disk came with
an entry in 'format.dat' with several fixed slices.

To give some flexibility, certain slices were arranged around common use
cases.

a and b consumed an early part of the disk, in a pattern often usable
as root and swap.
c consumed everything, so you could use that for the whole disk instead.
d,e,f,g consumed the later part of the disk for /var,/usr,etc.
h consumed the exact area that f and g did.

So you could use the entire disk with any of:
a,b,d,e,f,g
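As a hedged sketch of that layout (controller/target numbers are placeholders;
imagine the c1* drives from one vendor and the c2* drives from the other):

zpool create tank mirror c1t0d0 c2t0d0 \
                  mirror c1t1d0 c2t1d0 \
                  mirror c1t2d0 c2t2d0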
a,b,d,e,h
c

without having to change the label.

I speculate that utilities were then written that used c (slice 2) for
information about the entire disk, and people thought keeping the
convention going was a good idea.

4.  Can you use s2 when the disk-label is corrupted?

You can't use any slices when the disk label is corrupted.  The 'sd'
driver won't allow you access to them unless it's happy with the label.
Trying with 'dd' will give you an I/O error.

   If it's the
   whole disk, maybe you can.  If it's an entry in the disklabel,
   maybe you can't.  I seem to remember that I can write to s2 in
   order to fix a borked label, but that makes no sense, and I don't
   believe it.  Maybe I'm loosing my mind.  But, then again, my point
   is that the s2 thing makes me loose my mind.  :-)

s2 is special only by convention, not by code.  The sd driver can create
a label without a label existing (and therefore without any slices
present).

You can later use access to block 0 (via any slice) to corrupt (...er
*modify*) that label, but that's not a feature of s2.  s0 would do it as
well with the way most disks are labeled (because it also contains
cylinder 0/block 0.)

 Using c0t0d0 to refer to the whole device (which works in some cases, 
 such as zpool create) makes a lot of sense.  If it's a slice, it really 
 should be a slice. 

Historically that wasn't done, and it only works today if the
application is handling it specifically or if you have an EFI label.  

-- 
Darren
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Get your SXCE on ZFS here!

2008-06-04 Thread Ellis, Mike
The FAQ document (
http://opensolaris.org/os/community/zfs/boot/zfsbootFAQ/ ) has a
jumpstart profile example:

install_type initial_install
pool newpool auto auto auto mirror c0t0d0 c0t1d0
bootenv installbe bename sxce_xx 

The B90 jumpstart check program (SPARC) flags that the disks should 
be specified as: c0t0d0s0 c0t1d0s0 (slices)

Can someone confirm the FAQ is indeed incorrect, and perhaps make the
adjustment to the FAQ if so warranted?

Thanks,

 -- MikeE



-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Cindy
Swearingen
Sent: Wednesday, June 04, 2008 6:50 PM
To: Tim
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Get your SXCE on ZFS here!

Tim,

Start at the zfs boot page, here:

http://www.opensolaris.org/os/community/zfs/boot/

Review the information and follow the links to the docs.

Cindy

- Original Message -
From: Tim [EMAIL PROTECTED]
Date: Wednesday, June 4, 2008 4:29 pm
Subject: Re: [zfs-discuss] Get your SXCE on ZFS here!
To: Kyle McDonald [EMAIL PROTECTED]
Cc: zfs-discuss@opensolaris.org, andrew [EMAIL PROTECTED]

 On Wed, Jun 4, 2008 at 5:01 PM, Kyle McDonald [EMAIL PROTECTED] 
 wrote:
 
  andrew wrote:
   With the release of the Nevada build 90 binaries, it is now 
 possible to
  install SXCE directly onto a ZFS root filesystem, and also put ZFS 
 swap onto
  a ZFS filesystem without worrying about having it deadlock. ZFS now 
 also
  supports crash dumps!
  
   To install SXCE to a ZFS root, simply use the text-based 
 installer, after
  choosing Solaris Express from the boot menu on the DVD.
  
   DVD download link:
  
   http://www.opensolaris.org/os/downloads/sol_ex_dvd_1/
  
  
  This release also (I beleive) supports installing on ZFS through
JumpStart.
 
  Does anyone have a pointer for Docs on what the syntax is for a
  JumpStart profile to configure ZFS root?
 
   -Kyle
 
  
   This message posted from opensolaris.org
   ___
   zfs-discuss mailing list
   zfs-discuss@opensolaris.org
   http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  
 
  ___
  zfs-discuss mailing list
  zfs-discuss@opensolaris.org
  http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 
 
 
 Does this mean zfs boot/root on sparc is working as well?  If so...
FINALLY
 :)
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] new install - when is zfs root offered? (snv_90)

2008-06-04 Thread Boyd Adamson
A Darren Dunham [EMAIL PROTECTED] writes:

 On Tue, Jun 03, 2008 at 05:56:44PM -0700, Richard L. Hamilton wrote:
 How about SPARC - can it do zfs install+root yet, or if not, when?
 Just got a couple of nice 1TB SAS drives, and I think I'd prefer to
 have a mirrored pool where zfs owns the entire drives, if possible.
 (I'd also eventually like to have multiple bootable zfs filesystems in
 that pool, corresponding to multiple versions.)

  Are they just under 1TB?  I don't believe there's any boot support in
 Solaris for EFI labels, which would be required for 1TB+.

ISTR that I saw an ARC case go past about a week ago about extended SMI
labels to allow >1TB disks, for exactly this reason.

Boyd
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Get your SXCE on ZFS here!

2008-06-04 Thread Brian Hechinger
On Wed, Jun 04, 2008 at 09:17:05PM -0400, Ellis, Mike wrote:
 The FAQ document (
 http://opensolaris.org/os/community/zfs/boot/zfsbootFAQ/ ) has a
 jumpstart profile example:

Speaking of the FAQ and mentioning the need to use slices, how does that
affect the ability of Solaris/ZFS to automatically enable the disk's
cache?  Does it need to be manually over-ridden (unlike giving ZFS the
whole disk where it automatically turns the disk cache on)?

Also, how can you check if the disk's cache has been enabled or not?

Thanks,

-brian
-- 
Coding in C is like sending a 3 year old to do groceries. You gotta
tell them exactly what you want or you'll end up with a cupboard full of
pop tarts and pancake mix. -- IRC User (http://www.bash.org/?841435)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Get your SXCE on ZFS here!

2008-06-04 Thread Nathan Kroenert
format -e is your window to cache settings.
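A sketch of the menu path (disk selection and exact menu names may vary a
little by build and drive type, so treat this as illustrative):

format -e
#   (select the disk when prompted)
# format> cache
# cache> write_cache
# write_cache> display      -- shows whether the write cache is enabled
# (read_cache has the same display/enable/disable sub-commands)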

As for the auto-enabling, I'm not sure, as IIRC, we do different things 
based on disk technology.

eg: IDE + SATA - Always enabled
 SCSI - Disabled by default, unless you give ZFS the whole disk.

I think.

On a couple of my systems, this seems to ring true.

Not at all sure about SAS.

If I'm wrong here, hopefully someone else will provide the complete set 
of logic for determining cache enabling semantics.

:)

Nathan.

Brian Hechinger wrote:
 On Wed, Jun 04, 2008 at 09:17:05PM -0400, Ellis, Mike wrote:
 The FAQ document (
 http://opensolaris.org/os/community/zfs/boot/zfsbootFAQ/ ) has a
 jumpstart profile example:
 
 Speaking of the FAQ and mentioning the need to use slices, how does that
 affect the ability of Solaris/ZFS to automatically enable the disk's
 cache?  Does it need to be manually over-ridden (unlike giving ZFS the
 whole disk where it automatically turns the disk cache on)?
 
 Also, how can you check if the disk's cache has been enabled or not?
 
 Thanks,
 
 -brian

-- 
//
// Nathan Kroenert  [EMAIL PROTECTED] //
// Technical Support Engineer   Phone:  +61 3 9869-6255 //
// Sun Services Fax:+61 3 9869-6288 //
// Level 3, 476 St. Kilda Road  //
// Melbourne 3004   VictoriaAustralia   //
//
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Get your SXCE on ZFS here!

2008-06-04 Thread Brian Hechinger
On Thu, Jun 05, 2008 at 12:14:46PM +1000, Nathan Kroenert wrote:
 format -e is your window to cache settings.

Ah ha!

 As for the auto-enabling, I'm not sure, as IIRC, we do different things 
 based on disk technology.
 
 eg: IDE + SATA - Always enabled
 SCSI - Disabled by default, unless you give ZFS the whole disk.
 
 I think.
 
 On a couple of my systems, this seems to ring true.

Well, I just checked the snv81 box that has ZFS root via Tim's script, and
I know I didn't enable the cache, so that definitely seems to be true for
SATA as the read and write cache of both drives are enabled.

Thanks for the pointer to format -e!

-brian
-- 
Coding in C is like sending a 3 year old to do groceries. You gotta
tell them exactly what you want or you'll end up with a cupboard full of
pop tarts and pancake mix. -- IRC User (http://www.bash.org/?841435)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Get your SXCE on ZFS here!

2008-06-04 Thread Uwe Dippel
Can someone in the know please provide a recipe to upgrade a nv81 (e.g.) to 
ZFS-root, if possible?
That would be, just listing the commands step by step for the uninitiated; for 
me.

Uwe
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS root finally here in SNV90

2008-06-04 Thread Rich Teer
On Wed, 4 Jun 2008, Bob Friesenhahn wrote:

 Did you actually choose to keep / and /var combined?  Is there any 

That's what I'd do...

 reason to do that with a ZFS root since both are sharing the same pool 
 and so there is no longer any disk space advantage?  If / and /var are 
 not combined can they have different assigned quotas without one 
 inheriting limits from the other?

Why would one do that?  Just keep an eye on the root pool and all is good.

-- 
Rich Teer, SCSA, SCNA, SCSECA

CEO,
My Online Home Inventory

URLs: http://www.rite-group.com/rich
  http://www.linkedin.com/in/richteer
  http://www.myonlinehomeinventory.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS root finally here in SNV90

2008-06-04 Thread Erik Trimble
Rich Teer wrote:
 On Wed, 4 Jun 2008, Bob Friesenhahn wrote
 Did you actually choose to keep / and /var combined?  Is there any 
 

 THat's what I'd do...
   
 reason to do that with a ZFS root since both are sharing the same pool 
 and so there is no longer any disk space advantage?  If / and /var are 
 not combined can they have different assigned quotas without one 
 inheriting limits from the other?
 

 Why would one do that?  Just keep an eye on the root pool and all is good.
   

It's the old debate around possibly filling the / partition, should you 
get some sort of explosion in /var  (crash dumps are a big culprit here). 

IMHO, even though with ZFS making / and /var separate filesystems in the 
same zpool is trivial, I don't see the benefit - it's just something 
more to manage and pay attention to.  If you have something very large 
going on in /var  (like mail spools, or whatever), you'll be creating a 
new zpool for that purpose.  Otherwise, filling /var can be _bad_ (even 
if on a different ZFS filesystem), so I don't see much benefit.

But, with ZFS, the counter (it's so simple, why not?) is also valid.  
It's just personal whim, now, really.

-- 
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS root finally here in SNV90

2008-06-04 Thread Nathan Kroenert
I'd expect it's the old standard.

if /var/tmp is filled, and that's part of /, then bad things happen.

there are often other places in /var that are writable by more than 
root, and always the possibility that something barfs heavily into syslog.

Since the advent of reasonably sized disks, I know many don't consider 
this an issue these days, but I'd still be inclined to keep /var (and 
especially /var/tmp) separated from /

In ZFS, this is, of course, just two filesystems in the same pool, with 
differing quotas...

:)

Nathan.

Rich Teer wrote:
 On Wed, 4 Jun 2008, Bob Friesenhahn wrote:
 
 Did you actually choose to keep / and /var combined?  Is there any 
 
 THat's what I'd do...
 
 reason to do that with a ZFS root since both are sharing the same pool 
 and so there is no longer any disk space advantage?  If / and /var are 
 not combined can they have different assigned quotas without one 
 inheriting limits from the other?
 
 Why would one do that?  Just keep an eye on the root pool and all is good.
 

-- 
//
// Nathan Kroenert  [EMAIL PROTECTED] //
// Technical Support Engineer   Phone:  +61 3 9869-6255 //
// Sun Services Fax:+61 3 9869-6288 //
// Level 3, 476 St. Kilda Road  //
// Melbourne 3004   VictoriaAustralia   //
//
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS root finally here in SNV90

2008-06-04 Thread Ellis, Mike
In addition to the standard "containing the carnage" arguments used to
justify splitting /var/tmp, /var/mail, /var/adm (process accounting
etc.), is there an interesting use-case where one would split out /var
for compression reasons? As in, turn on compression for /var so that
process accounting, network flow, and other such fun logs can be kept on
a compressed filesystem, while keeping / (and thereby /usr etc.)
uncompressed.
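A minimal sketch of that use-case, done by hand after install (the dataset
names, the quota and the use of the copies property for ditto data are my
assumptions, not anything the installer or FAQ prescribes):

zfs create -o compression=on -o quota=8g rpool/ROOT/sxce_90/var
# "ditto data blocks" maps to the copies property, e.g. for the whole BE:
# zfs set copies=2 rpool/ROOT/sxce_90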

The ZFSBOOT-FAQ document doesn't really show how to break out multiple
filesystems with jumpstart profiles... An example there might be
helpful...  (as it's clear this is a frequently asked question :-)

Also, a "compression on", "ditto-data-bits on", or perhaps a generic
place to insert zpool/zfs parameters as part of the jumpstart profiles
could also be useful...

If SSD is coming fast and furious, being able to use compression, shared
free-space (quotas etc) to keep the boot-images small enough so they'll
fit and accommodate live-upgrade patching, will become increasingly
important.

http://www.networkworld.com/news/2008/060308-sun-flash-storage.html?page
=1

Rock on guys,

 -- MikeE




-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Rich Teer
Sent: Thursday, June 05, 2008 12:19 AM
To: Bob Friesenhahn
Cc: ZFS discuss
Subject: Re: [zfs-discuss] ZFS root finally here in SNV90

On Wed, 4 Jun 2008, Bob Friesenhahn wrote:

 Did you actually choose to keep / and /var combined?  Is there any 

THat's what I'd do...

 reason to do that with a ZFS root since both are sharing the same pool

 and so there is no longer any disk space advantage?  If / and /var are

 not combined can they have different assigned quotas without one 
 inheriting limits from the other?

Why would one do that?  Just keep an eye on the root pool and all is
good.

-- 
Rich Teer, SCSA, SCNA, SCSECA

CEO,
My Online Home Inventory

URLs: http://www.rite-group.com/rich
  http://www.linkedin.com/in/richteer
  http://www.myonlinehomeinventory.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss