[zfs-discuss] Difference between add and attach a device?

2007-06-14 Thread Rick Mann
Hi. I've been reading the ZFS admin guide, and I don't understand the
distinction between adding a device and attaching a device to a pool.

TIA
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS and Partitioning Questions

2007-06-14 Thread Matthew Ahrens

George Plymale wrote:

Couple of questions regarding ZFS:

First, can slices and vdevs be removed from a pool?  It appears to
only want to remove a hot spare from a pool, which makes sense; however,
is there some workaround that will migrate data off of a vdev and
thus allow you to remove it?  (In this case it was simply a small
slice that I wanted to yank out.)


No; we're working on it.

--matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Difference between add and attach a device?

2007-06-14 Thread Neil . Perrin

Rick Mann wrote:

Hi. I've been reading the ZFS admin guide, and I don't understand the distinction between 
adding a device and attaching a device to a pool?


attach is used to create a mirror or add a side to an existing mirror.
add is used to add a new top-level vdev, which can be a raidz, a mirror,
or a single device. Writes are spread across top-level vdevs.
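
For example (a quick sketch with made-up pool and device names, not from the
admin guide):

# zpool attach tank c1t0d0 c1t1d0
    (attaches c1t1d0 to the vdev containing c1t0d0, turning it into a
     two-way mirror, or adding another side to an existing mirror)

# zpool add tank mirror c2t0d0 c2t1d0
    (adds a brand new top-level mirror vdev; writes are then spread
     across the original vdev and the new one)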

Hope that helps. Perhaps the zpool man page is clearer.

 Neil.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Root raidz without boot

2007-06-14 Thread Matthew Ahrens

Ross Newell wrote:

What are the issues preventing the root directory being stored on raidz?
I'm talking specifically about root, and not boot which I can see would be
difficult.

Would it be something an amateur programmer could address in a weekend, or
is it more involved?


I believe this used to work with the old zfs mountroot setup with 
boot-from-UFS then root-on-ZFS.  So, you could probably hack it back in, or 
just use the old bits.  Lin or Lori could probably provide more details, or 
correct me...


--matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS wastesd diskspace?

2007-06-14 Thread Matthew Ahrens

Samuel Borgman wrote:

I just started to use zfs after longing to try it out for a long while. The problem
is that I've lost 240GB out of 700GB.

I have a single 700GB pool on a 3510 HW RAID mounted on /nm4/data. Running


# du -sk /nm4/data
411025338   /nm4/data

While a

# df -hk
Filesystem            size   used  avail capacity  Mounted on
nm4/data              669G   648G    21G    97%    /nm4/data

I have under 50 files here, so it's not that; the majority of them are between
2 and 54GB in size.


Please provide the output of 'zfs list' and explain how that differs from 
what you expect to see and why.  (For the 'why', try running 'du -hs 
mountpoint'.)


--matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Moving files over with ufsrestore not that simple

2007-06-14 Thread Matthew Ahrens

[EMAIL PROTECTED] wrote:

After one aborted ufsrestore followed by some cleanup I tried
to restore again, but this time ufsrestore faltered with:

bad filesystem block size 2560

The reason was this return value for the stat of . of the
filesystem:

8339:   stat(., 0xFFBFF818)   = 0
8339:   d=0x03F50005 i=3 m=0040755 l=3  u=0 g=3 sz=3
8339:   at = Jun  7 23:11:54 CEST 2007  [ 1181250714 ]
8339:   mt = Jun  7 23:02:08 CEST 2007  [ 1181250128 ]
8339:   ct = Jun  7 23:02:08 CEST 2007  [ 1181250128 ]
8339:   bsz=2560  blks=3 fs=zfs


I could not reproduce this problem with some cursory testing 
(ufsdump|ufsrestore of a root filesystem).  But feel free to file a bug 
against ufsrestore -- it shouldn't care what the st_blksize of a directory
is.  Perhaps it should be looking at struct statvfs's f_bsize?


--matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: zfs reports small st_size for directories?

2007-06-14 Thread Casper . Dik

My question:  What apps are these?  I heard mention of some SunOS 4.x 
library.  I don't think that's anywhere near important enough to warrant 
changing the current ZFS behavior.

Not apps; NFS clients such as *BSD.

On Solaris the issue is next to non-existent (SunOS 4.x binaries using
scandir(); and we can patch that)

But we don't control all the NFS clients and their runtimes; it is
unclear whether those are even fixable in some cases.

Casper
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: zfs reports small st_size for directories?

2007-06-14 Thread Casper . Dik

I went hunting for more apps in the hundreds of ports installed at my
shop to see what our exposure was to the scandir() problem - much to
my surprise, out of 700 or so ports, only a dozen or so used the libc
scandir().  A handful of mail programs had a vulnerable local
implementation of scandir() - looks like they copied UW's imap code which
was based on the 4.2 BSD code.  It's not clear to me that those get used if
there's an OS implementation of scandir, but I'll write to them too.

We only recently added scandir to Solaris libc.

Casper
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Mac OS X.5 Leopard: zfs works; sudo kextload /System/Library/zfs.kext

2007-06-14 Thread G.W.
I know it's a pain, but you have to spend money to download Apple's betas, that 
is, pay their developer fee.  If, however, this might inspire you to do this, 
you should know that zfs will run (read and write) on the latest build of 
Leopard, as Apple has (somewhat cryptically) said.  Apple also has a 
non-disclosure clause on their developer memberships, but they appear to have 
already made a number of public statements about zfs in Leopard.  So, here's a 
generic (and clumsy) way to enable kernel extensions on a BSD system, of which 
Leopard is a variant (actually, it runs over a version of Darwin).  And zfs is 
a kernel extension, and can be loaded like any other.  The zpool and zfs 
commands below you already know if you follow this thread.

Try this in terminal:

% cd /System/Library/Extensions
% ls -alF | less # This will show you all the kernel extensions, *.kext, in a 
pager
[hit the space bar to page forward; on the last page you should see:
...
drwxr-xr-x   3 root wheel  102 ... ntfs.kext/
drwxr-xr-x   3 root wheel  102 ... smbfs.kext/
drwxr-xr-x   3 root wheel  102 ... udf.kext/
drwxr-xr-x   3 root wheel  102 ... webdav_fs.kext/
drwxr-xr-x   3 root wheel  102 ... zfs.kext/
(END)
# hit q; this gets you back to the terminal
...

If you see zfs.kext, then the installer did indeed put it on your system.  Then:

% sudo kextload zfs.kext
password:  # enter your admin password; if that doesn't work, become 
root with su

You will get some error messages about the cache, probably from the files 
Extensions.kextcache and Extensions.mkext.  But, zfs will load (at least it 
will on a G5 dual 2.7).

zfs and zpool now work, and man zfs / man zpool will give you a man page.

As far as I can tell, this is the process to load a kernel extension on any BSD 
system, of which Mac OS X/Darwin is one (the others are FreeBSD, NetBSD, 
OpenBSD).

HOWEVER, be aware that finder in almost any version of OSX tries to automount 
every possible file system.  Leopard does this as well; unlike zfs and zpool 
under Solaris, Leopard automounts any pool created or imported with zpool, and 
sets the mountpoint under /Volumes, WITHOUT running zfs create, or set 
mountpoint:

% zpool create zpool01 disk1

Automatically mounts in the finder and has the directory:

/Volumes/zpool01

Again, this happens WITHOUT RUNNING zfs, which is quite different from Solaris.

You will have to fiddle with permissions, and you might have to do something 
like:

sudo chmod -R a+rwx /Volumes/zpool01

to make the entire pool writable (or some variant, g+rwx, etc.).  But it will 
work.

I haven't tried using zfs quota, set mountpoint=, or set share=.  set 
compression=on seems to work, though I don't see much compression going on.

On reboot (or after a crash, which is frequent on beta builds) the finder will, 
initially, not have the zfs kernel extension enabled, and will ask if you want 
to format the disk (or slice, or however you set it up). Click ignore; DO NOT 
FORMAT THE DISK.  zfs already has, but the finder doesn't know it yet.

Repeat the kernel extension commands above.  Then run:

zpool import -f poolname

You can also try zpool scrub, but I'm not sure if that helps.

You should have all the files you copied on the zfs system before the crash 
(but no promises; mine were, but maybe yours will not).

You can try safe boot with Leopard (hold down the shift key on boot), and 
that might disable some problematic kernel extensions.

If someone knows how to modify Extensions.kextcache and Extensions.mkext, 
please let me know.  After the bugs are worked out, Leopard should be a pretty 
good platform.

Hope this helps.

G.W.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Moving files over with ufsrestore not that simple

2007-06-14 Thread Casper . Dik

[EMAIL PROTECTED] wrote:
 After one aborted ufsrestore followed by some cleanup I tried
 to restore again, but this time ufsrestore faltered with:
 
 bad filesystem block size 2560
 
 The reason was this return value for the stat of . of the
 filesystem:
 
 8339:   stat(., 0xFFBFF818)   = 0
 8339:   d=0x03F50005 i=3 m=0040755 l=3  u=0 g=3 sz=3
 8339:   at = Jun  7 23:11:54 CEST 2007  [ 1181250714 ]
 8339:   mt = Jun  7 23:02:08 CEST 2007  [ 1181250128 ]
 8339:   ct = Jun  7 23:02:08 CEST 2007  [ 1181250128 ]
 8339:   bsz=2560  blks=3 fs=zfs

I could not reproduce this problem with some cursory testing 
(ufsdump|ufsrestore of a root filesystem).  But feel free to file a bug 
against ufsrestore -- it shouldn't care what the st_blksize of a directory
is.  Perhaps it should be looking at struct statvfs's f_bsize?


The steps to reproduce are:

- create directory
- add some files
- remove some files

(compression enabled, if that matters)

Casper

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs and 2530 jbod

2007-06-14 Thread Frank Cusack

On June 13, 2007 7:51:21 PM -0500 Al Hopper [EMAIL PROTECTED] wrote:

It seems wasteful to (determine the required part number and) order a
2530 JBOD expansion shelf and then return it if it does not work out


It's a 2501 I think.


Obviously this is a huge hole in Sun's current storage product line.
There is a huge need for a ZFS-savvy JBOD box with a high-speed host
interconnect.  The nearest alternative I've been able to dig up is the
Promise VTrak J300S - but I'd rather buy a Sun (or StorageTek) part than
a Promise Tech part.  And, regardless of what the profit margin is, I'd
rather see Sun with that profit margin than Promise Tech.


Get a 6140 expansion shelf.  It is known to work.  Probably near to the
same $/GB as the 2530/2501 since the 2530 only takes SAS drives up to
300GB, instead of cheaper and larger capacity SATA drives.  (Ignoring
the fact that you need $$$ FC HBAs instead of $ SAS HBAs.)

There's clearly too much money to be made on the hardware still, to
start making cheap effective JBODs available.  You can pretty much
forget about selling the RAID hardware at that point.

Anyway I agree Sun should fill this hole, but the 2530 misses the mark.
I'd like to see a chassis that takes 750GB/1TB SATA drives, with SAS
host ports.  And sell just the chassis, so I can skip the 100%+ drive
markup.  I guess I'm looking for a Promise J300s, but at twice the
price (which is worth it to get better engineered hardware).

Of course there is thumper, which is an excellent value but which has
HA issues and a high buy-in price.

...

We've all seen the articles where someone declares that ZFS has upset the
status quo and will cause the high-end dedicated hardware RAID vendors
major heartache.  This will happen *only* if there is an entire/complete
ecosystem ready/willing/able to fill the gap between ZFS/host-based RAID
and the incumbent hardware RAID providers.  And I'm talking about a
vendor with the technical credibility of Sun.com and not some
fly-by-night.storage.com outfit that no-one will trust. And then there is
netapp.com ...  which Sun/ZFS should declare a *target* of massive
opportunity and completely obliterate.


I don't know, I rather like that Sun is providing all the ammunition
for a Netapp competitor to start a war.  Why hasn't anyone done it?
It's still hard.  And expensive.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: zfs reports small st_size for directories?

2007-06-14 Thread Frank Cusack

On June 13, 2007 11:26:07 PM -0400 Ed Ravin [EMAIL PROTECTED] wrote:

On Wed, Jun 13, 2007 at 09:42:26PM -0400, Ed Ravin wrote:

As mentioned before, NetBSD's scandir(3) implementation was one.  The
NetBSD project has fixed this in their CVS.  OpenBSD and FreeBSD's
scandir() looks like another, I'll have to drop them a line.


I heard from an OpenBSD developer who is working on a patch, but he hasn't
got a ZFS filesystem to test against.


He doesn't need one.  He can just change OpenBSD's ufs (or whatever fs he
wishes to use) to plug in random small numbers for st_size instead of
what it does now.  He could even do this on a 2nd system which exports
via NFS, to isolate the code under test.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: zfs reports small st_size for directories?

2007-06-14 Thread Casper . Dik

On June 13, 2007 11:26:07 PM -0400 Ed Ravin [EMAIL PROTECTED] wrote:
 On Wed, Jun 13, 2007 at 09:42:26PM -0400, Ed Ravin wrote:
 As mentioned before, NetBSD's scandir(3) implementation was one.  The
 NetBSD project has fixed this in their CVS.  OpenBSD and FreeBSD's
 scandir() looks like another, I'll have to drop them a line.

 I heard from an OpenBSD developer who is working on a patch, but he hasn't
 got a ZFS filesystem to test against.

He doesn't need one.  He can just change OpenBSD's ufs (or whatever fs he
wishes to use) to plug in random small numbers for st_size instead of
what it does now.  He could even do this on a 2nd system which exports
via NFS, to isolate the code under test.


Or just start the code off with st_size /= 24


Casper

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: ZFS API (again!), need quotactl(7I)

2007-06-14 Thread Pretorious
Issue with statvfs()


# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
mypool 108M   868M  26.5K  /mypool
mypool/home108M   868M  27.5K  /mypool/home
mypool/home/user-2 108M   868M   108M  /mypool/home/user-2

# df -h
Filesystem           size   used  avail capacity  Mounted on
...
mypool               976M    26K   868M     1%    /mypool        <-- used not in sync with 'zfs list'
mypool/home          976M    26K   868M     1%    /mypool/home
mypool/home/user-2   976M   108M   868M    12%    /mypool/home/user-2

Or is my interpretation of zfs list incorrect?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] OT: extremely poor experience with Sun Download Manager

2007-06-14 Thread Graham Perrin
Intending to experiment with ZFS, I have been struggling with what  
should be a simple download routine.


Sun Download Manager leaves a great deal to be desired.

In the Online Help for Sun Download Manager there's a section on  
troubleshooting, but if it causes *anyone* this much trouble
http://fuzzy.wordpress.com/2007/06/14/sundownloadmanagercrap/ then  
it should, surely, be fixed.


Sun Download Manager -- a FORCED step in an introduction to  
downloadable software from Sun -- should be PROBLEM FREE in all  
circumstances. It gives an extraordinarily poor first impression.


If it can't assuredly be fixed, then we should not be forced to use it.

(True, I might have ordered rather than downloaded a DVD, but Sun  
Download Manager has given such a poor impression that right now I'm  
disinclined to pay.)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: ZFS wastesd diskspace?

2007-06-14 Thread Samuel Borgman
Thanks, here is some more info

# zpool status
  pool: nm4
 state: ONLINE
 scrub: none requested
config:

NAME   STATE READ WRITE CKSUM
nm4ONLINE   0 0 0
  c4t600C0FF007D22D2E07693000d0s0  ONLINE   0 0 0

errors: No known data errors


# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
nm4        648G  21.7G  25.5K  /nm4
nm4/data   648G  21.7G   648G  /nm4/data

# zpool list
NAME    SIZE    USED   AVAIL    CAP  HEALTH  ALTROOT
nm4     680G    648G   32.4G    95%  ONLINE  -

# zfs get all nm4/data
NAME PROPERTY   VALUE  SOURCE
nm4/data type   filesystem -
nm4/data creation   Mon May 21 21:58 2007  -
nm4/data used   648G   -
nm4/data available  21.7G  -
nm4/data referenced 648G   -
nm4/data compressratio  1.00x  -
nm4/data mountedyes-
nm4/data quota  none   default  
nm4/data reservationnone   default  
nm4/data recordsize 128K   default  
nm4/data mountpoint /nm4/data  default  
nm4/data sharenfs   offdefault  
nm4/data checksum   on default  
nm4/data compressionoffdefault  
nm4/data atime  on default  
nm4/data deviceson default  
nm4/data exec   on default  
nm4/data setuid on default  
nm4/data readonly   offdefault  
nm4/data zoned  offdefault  
nm4/data snapdirhidden default  
nm4/data aclmodegroupmask  default  
nm4/data aclinherit secure default  

# du -hs /nm4/data
 392G   /nm4/data


Thanks,

/Samuel
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: ZFS wastesd diskspace?

2007-06-14 Thread Samuel Borgman
Oh yeah, I'm running Solaris 10:

uname -a
SunOS jet 5.10 Generic_118855-36 i86pc i386 i86pc

/Samuel
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] OT: extremely poor experience with Sun Download Manager

2007-06-14 Thread Ian Collins
Graham Perrin wrote:
 Intending to experiment with ZFS, I have been struggling with what
 should be a simple download routine.

 Sun Download Manager leaves a great deal to be desired.

 In the Online Help for Sun Download Manager there's a section on
 troubleshooting, but if it causes *anyone* this much trouble
 http://fuzzy.wordpress.com/2007/06/14/sundownloadmanagercrap/ then
 it should, surely, be fixed.

 Sun Download Manager -- a FORCED step in an introduction to
 downloadable software from Sun -- should be PROBLEM FREE in all
 circumstances. It gives an extraordinarily poor first impression.

You do not have to use the SDM; I never do.

Ian

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] OT: extremely poor experience with Sun Download Manager

2007-06-14 Thread Chris Ridd
On 14/6/07 11:16, Graham Perrin [EMAIL PROTECTED] wrote:

 Intending to experiment with ZFS, I have been struggling with what
 should be a simple download routine.
 
 Sun Download Manager leaves a great deal to be desired.
 
 In the Online Help for Sun Download Manager there's a section on
 troubleshooting, but if it causes *anyone* this much trouble
 http://fuzzy.wordpress.com/2007/06/14/sundownloadmanagercrap/ then
 it should, surely, be fixed.
 
 Sun Download Manager -- a FORCED step in an introduction to

Read Sun's page more carefully. It isn't a forced step, it is a
*recommended* step. (OK, a *highly recommended* step.)

There's nothing stopping you from downloading each file separately using your web
browser or curl or something.

Cheers,

Chris


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: zfs reports small st_size for directories?

2007-06-14 Thread Joerg Schilling
Frank Cusack [EMAIL PROTECTED] wrote:

 On June 13, 2007 11:26:07 PM -0400 Ed Ravin [EMAIL PROTECTED] wrote:
  On Wed, Jun 13, 2007 at 09:42:26PM -0400, Ed Ravin wrote:
  As mentioned before, NetBSD's scandir(3) implementation was one.  The
  NetBSD project has fixed this in their CVS.  OpenBSD and FreeBSD's
  scandir() looks like another, I'll have to drop them a line.
 
  I heard from an OpenBSD developer who is working on a patch, but he hasn't
  got a ZFS filesystem to test against.

 He doesn't need one.  He can just change OpenBSD's ufs (or whatever fs he
 wishes to use) to plug in random small numbers for st_size instead of
 what it does now.  He could even do this on a 2nd system which exports
 via NFS, to isolate the code under test.

He only needs to assign

statb.st_size = 0;

directly after calling stat() in the scandir() code and then make it work again
:-)


Jörg

-- 
 EMail:[EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin
   [EMAIL PROTECTED](uni)  
   [EMAIL PROTECTED] (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: OT: extremely poor experience with Sun Download

2007-06-14 Thread Richard L. Hamilton
 Intending to experiment with ZFS, I have been struggling with what
 should be a simple download routine.
 
 Sun Download Manager leaves a great deal to be desired.
 
 In the Online Help for Sun Download Manager there's a section on
 troubleshooting, but if it causes *anyone* this much trouble
 http://fuzzy.wordpress.com/2007/06/14/sundownloadmanagercrap/ then
 it should, surely, be fixed.
 
 Sun Download Manager -- a FORCED step in an introduction to
 downloadable software from Sun -- should be PROBLEM FREE in all
 circumstances. It gives an extraordinarily poor first impression.
 
 If it can't assuredly be fixed, then we should not be forced to use it.
 
 (True, I might have ordered rather than downloaded a DVD, but Sun
 Download Manager has given such a poor impression that right now I'm
 disinclined to pay.)

For trying out zfs, you could always request the free Starter Kit DVD
at http://www.opensolaris.org/kits/ which  contains the
SXCE, Nexenta, Belenix and Schillix distros (all newer than Solaris 10).

Beyond that, while I'm sure you're right about that providing a poor
first impression, I guess I'm too old to have much sympathy for something
taking minutes rather than seconds of attention being a barrier to entry.
Yes, the download experience should be vastly improved, but if you let
that stop you, I wonder if you're all that interested in the first place.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS ditto 'mirroring' on JAOBOD ? Please, pretty please!

2007-06-14 Thread Paul Hedderly
Strikes me that at the moment the Sun/ZFS team is missing a great opportunity.

Imagine Joe Bloggs has a historical machine with Just Any Old Bunch Of Discs...
(it's not me, no really).

He doesn't want to have to think too hard about pairing them up in mirrors or 
in raids - and sometimes they die or are just too small so need to get swapped 
out - or maybe they are iSCSI/AoE targets that might disappear (say the 'spare 
space' on a thousand desktop PC's...)

What Joe really wants to say to ZFS is: Here is a bunch of discs. Use them any 
way you like - but I'm setting 'copies=2' or 'stripes=5' and 'parity=2' so you 
just go allocating space on any of these discs trying to make sure I always 
have resilience at the data level.

Now I can do that at the moment - well the copies/ditto kind anyway - but if I 
lose or remove one of the discs, zfs will not start the zpool. *That
sucks!!!*

Because... if one disc has gone from a bunch of 10 or so, and I have all my 
data and metadata using dittos, then the data that was on that disc is 
replicated on the others - so losing one disc is not a problem (unless there 
wasn't space to store all the copies on the other discs, I know) but zfs should 
be able to start that zpool and give me the option to reditto the data that has 
lost copies on the dead/removed disc.

So I get nice flexible mirroring by just throwing a JAOBOD at zfs and it does 
all the hard work.
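
(The copies part of that wish list exists already -- a minimal illustration,
assuming a pool named tank:

# zfs set copies=2 tank/data
# zfs get copies tank/data

The stripes= and parity= settings above are wish-list properties, not
something the zfs command accepts today.)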

I really can't see this being difficult - but I guess it is dependent on the
zpool remove vdev functionality being complete.

--
Paul
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mac OS X.5 Leopard: zfs works; sudo kextload /System/Library/zfs.kext

2007-06-14 Thread Jan Spitalnik

Hi,

On 14.6.2007, at 9:15, G.W. wrote:

If someone knows how to modify Extensions.kextcache and  
Extensions.mkext, please let me know.  After the bugs are worked  
out, Leopard should be a pretty good platform.


You can recreate the kext cache like this:

kextcache -k /System/Library/Extensions/

Also, as far as zfs.kext loading is concerned, you can create a
StartupItem that will kextload it early in the boot, and the Finder will
not have problems with an unknown device :-)


see
http://www.oreillynet.com/pub/a/mac/2003/10/21/startup.html
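
A minimal sketch of such a StartupItem (the names and paths here are
assumptions based on the standard StartupItem layout, not taken from the
article above): create /Library/StartupItems/ZFS/ containing an executable
script named ZFS, plus a StartupParameters.plist declaring Description,
Provides and OrderPreference.  The script could look roughly like:

#!/bin/sh
# /Library/StartupItems/ZFS/ZFS -- load zfs.kext early in boot (untested sketch)
. /etc/rc.common

StartService ()
{
    ConsoleMessage "Loading ZFS kernel extension"
    kextload /System/Library/Extensions/zfs.kext
}

StopService ()
{
    return 0
}

RestartService ()
{
    StartService
}

RunService "$1"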

-Spity



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS Boot manual setup in b65

2007-06-14 Thread Douglas Atique
  Now that I know *what*, could you perhaps explain
 to me *why*? I understood zpool import and export
 operations much as mount and unmount, like maybe some
 checks on the integrity of the pool and updates to
 some structure on the OS to maintain the
 imported/exported state of that pool. But now I
 suspect this state information is in fact maintained
 in the pool itself. Does this make sense?
 
 In short, once a pool is exported, it's not
 available/visible for live 
 usage, even after reboot.
 
Do you think this panic when the root pool is not visible is a bug? Should I 
file one?

  In that case, may I suggest that you add a note to
 the manual
 (http://www.opensolaris.org/os/community/zfs/boot/zfsb
 oot-manual/) stating that the pool should not be
 exported prior to booting off it?

 done
Perhaps you could include a link to this discussion thread, too (for those who 
want more information).

-- Doug
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Can you create a degraded raidz vdev?

2007-06-14 Thread Paul Hedderly
I have 6 400GB discs and want to make two RAIDZ 2/1 vdevs out of them (i.e.
2 data + 1 parity).

The problem is that 4 are in use... so I want to do something like:

  zpool create datadump raidz c1t0d0 c1t1d0 missing

Then move a bunch of data into datadump, to free up another two discs, then

  zpool add datadump raidz c1t2d0 c1t3d0 missing

Now move the rest of the data over, freeing the last two drives which I would 
then add as spares to let zfs complete the parity of those vdevs.

Is this currently possible?

--
Cheers
Paul
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] different filesystem size after raidz creation

2007-06-14 Thread Ronny Kopischke
Hi,

after creation of a raidz2 pool with

# zpool create -f mypool raidz2 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

the commands zpool list and df showing different sizes for the created fs

# zpool list
NAME      SIZE    USED   AVAIL    CAP  HEALTH  ALTROOT
mypool    340G    194K    340G     0%  ONLINE  -

# df -h
mypool    199G     43K    199G     1%  /mypool

What can explain this?


Thanks Ronny



smime.p7s
Description: S/MIME cryptographic signature
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS ditto 'mirroring' on JAOBOD ? Please, pretty please!

2007-06-14 Thread Mario Goebbels
A bunch of disks of different sizes will make it a problem. I wanted to
post that idea to the mailing list before, but didn't do so, since it
doesn't make too much sense.

Say you have two disks, one 50GB and one 100GB, part of your data can
only be ditto'd within the upper 50GB of the larger disk. If the larger
one fails, it takes about 25GB of gross data with it, since the ditto
blocks couldn't be spread to different disks.

More than two disks adds to the chaos. It may reduce the gross data loss
from slack ditto blocks (slack as in space that can't be made physically
redundant), but it still isn't truly redundant.

Note that I say gross data, because it might just happen that a lot of
your files may have most of their blocks ditto'd across physical disks, but
not all of them. That one specific block in file A that's ditto'd on the
same physical disk instead of being spread might just ruin your day.

-mg

 Strikes me that at the moment Sun/ZFS team is missing a great opportunity.
 
 Imagine Joe bloggs has a historical machine with Just Any Old Bunch Of 
 Discs... (it's not me, no really).
 
 He doesn't want to have to think too hard about pairing them up in mirrors or 
 in raids - and sometimes they die or are just too small so need to get 
 swapped out - or maybe they are iSCSI/AoE targets that might disappear (say 
 the 'spare space' on a thousand desktop PC's...)
 
 What Joe really wants to say to ZFS is: Here is a bunch of discs. Use them 
 any way you like - but I'm setting 'copies=2' or 'stripes=5' and 'parity=2' 
 so you just go allocating space on any of these discs trying to make sure I 
 always have resilience at the data level.
 
 Now I can do that at the moment - well the copies/ditto kind anyway - but if 
 I lose or remove one of the discs, zfs will not start the zpool. [i]That 
 sucks!!![/i]
 
 Because... if one disc has gone from a bunch of 10 or so, and I have all my 
 data and metadata using dittos, then the data that was on that disc is 
 replicated on the others - so losing one disc is not a problem (unless there 
 wasn't space to store all the copies on the other discs, I know) but zfs 
 should be able to start that zpool and give me the option to reditto the data 
 that has lost copies on the dead/removed disc.
 
 So I get nice flexible mirroring by just throwing a JAOBOD at zfs and it 
 does all the hard work.
 
 I really can't see this being difficult - but I guess it is dependent on the
 zpool remove vdev functionality being complete.



signature.asc
Description: This is a digitally signed message part
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS ditto 'mirroring' on JAOBOD ? Please, pretty please!

2007-06-14 Thread Will Murnane

On 6/14/07, Paul Hedderly [EMAIL PROTECTED] wrote:

What Joe really wants to say to ZFS is: Here is a bunch of discs. Use them any way 
you like - but I'm setting 'copies=2' or 'stripes=5' and 'parity=2' so you just go 
allocating space on any of these discs trying to make sure I always have resilience at
the data level.


I can think of one good way to mirror across several different-sized
disks.  Put the disk blocks in some order, then snip down the middle.
Mirror the two halves.  If more redundancy is needed, cut in N pieces,
and mirror across the N chunks.

Attached (if the list lets them through, anyways) is a diagram of what
I mean by snip down the middle.  The drives.png shows the blocks
of each disk in a different color.  Then the drives-mirrored.png
shows what happens when you cut that image in half and put the two
pieces on top of each other.  Then pairs of blocks that are vertically
aligned with each other would contain the same data.  You could do
this manually, or using a script; carve out blocks of disks such that
they line up with each other, then add mirror vdevs with those
matching chunks to your pool.

Adding another device to this conglomeration would be tricky, though.
I'll have to think about it some more.


I really can't see this being difficult - but I guess it is dependent on the zpool
remove vdev functionality being complete.


Agreed.  From a Joe Blog point of view, if you know a disk is going
bad, it'd be nice to be able to tell ZFS to get blocks off it rather
than have to replace it.

On 6/14/07, Mario Goebbels [EMAIL PROTECTED] wrote:

Say you have two disks, one 50GB and one 100GB, part of your data can
only be ditto'd within the upper 50GB of the larger disk. If the larger
one fails, it takes about 25GB of gross data with it, since the ditto
blocks couldn't be spread to different disks.

Well, yes.  If you don't have enough disk space to mirror the largest
disk in your array, it's not going to work out very well.  But suppose
you had two 100GB disks and one 50GB disk.  That's enough disk space
that you should be able to get 125GB of mirrored space out of it, but
conventional wisdom says leave the 50GB disk out of it and live with
100.
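
(One concrete slicing that reaches the 125GB figure, as an illustration:
slice each 100GB disk into a 75GB and a 25GB chunk, and the 50GB disk into
two 25GB chunks.  Mirror the two 75GB chunks against each other, and mirror
each remaining 25GB chunk against one half of the 50GB disk.  That pairs up
all 250GB of raw space into 125GB of mirrored space, with every pair landing
on two different physical disks.)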

Will
attachment: drives.png
attachment: drives-mirrored.png
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Importing pool from readonly device? (as in ZFS fun!)

2007-06-14 Thread Mario Goebbels
Trying some funky experiments, based on hearing about this readonly ZFS
in MacOSX, I'm kidding around with creating file based pools and then
burning them to a CDROM. When running zpool import, it does find the
pool on the CD, but then warns about a read-only device, followed by a
core dump.

[EMAIL PROTECTED]:~/LargeFiles  zpool import -o readonly testpool
internal error: Read-only file system
Abort (core dumped)
[EMAIL PROTECTED]:~/LargeFiles  

Does ZFS still write to the zpool when mounted with readonly? If not,
why would a readonly device pose a problem? Apart from being gimmicky
(Hey, escalator sorting on a CDROM).

Now, if I had three DVD burners and some DVD-RWs, I would try a
DVD-RAID. :V

Thanks.
-mg




signature.asc
Description: This is a digitally signed message part
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Importing pool from readonly device? (as in ZFS fun!)

2007-06-14 Thread Tim Foster
Hi Mario,

On Thu, 2007-06-14 at 15:23 -0100, Mario Goebbels wrote:
 [EMAIL PROTECTED]:~/LargeFiles  zpool import -o readonly testpool
 internal error: Read-only file system
 Abort (core dumped)
 [EMAIL PROTECTED]:~/LargeFiles  

Interesting, I've just filed 6569720 for this behaviour - thanks for
spotting this! Regardless of whether ZFS supports this, we definitely
shouldn't core dump.

 Does ZFS still write to the zpool when mounted with readonly? 

Yep it does - it needs to mark the pool as being in use and no longer
exported. I think this is done by changing the pool state nvpair in
the vdev_label (if I'm not mistaken - section 1.3.3 in the on-disk
format document covers this.)
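
(If you want to poke at that field yourself, something like the following
should show it -- the device path is just a placeholder:

# zdb -l /dev/dsk/c0t0d0s0 | grep state

zdb -l dumps the vdev labels, and the pool state appears as a 'state' entry
in each label's nvlist.)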

cheers,
tim

[1]
http://www.opensolaris.org/os/community/zfs/docs/ondiskformat0822.pdf

-- 
Tim Foster, Sun Microsystems Inc, Solaris Engineering Ops
http://blogs.sun.com/timf

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Importing pool from readonly device? (as in ZFS fun!)

2007-06-14 Thread Mario Goebbels

  [EMAIL PROTECTED]:~/LargeFiles  zpool import -o readonly testpool
  internal error: Read-only file system
  Abort (core dumped)
  [EMAIL PROTECTED]:~/LargeFiles  
 
 Interesting, I've just filed 6569720 for this behaviour - thanks for
 spotting this! Regardless of whether ZFS supports this, we definitely
 shouldn't core dump.

I just remember those weirdo RAID5 controllers we had way back in a
different company. It was a pretty expensive thing, with hardware XOR,
battery, cache and all that. Going by that, it wasn't a toy. Yet, when a
disk failed, it dropped into readonly mode.

I guess if someone were to have a concoction like this, ZFS would break
in various ways. That is if someone insists on using hardware RAID.
Can't remember the brand, though.

  Does ZFS still write to the zpool when mounted with readonly? 
 
 Yep it does - it needs to mark the pool as being in use and no longer
 exported. I think this is done by changing the pool state nvpair in
 the vdev_label (if I'm not mistaken - section 1.3.3 in the on-disk
 format document covers this.)

Ah :(

Solaris doesn't by chance ship with a tool to mount devices as read-only
but write modifications into a (temporary) file? Kind of like various
emulators do?

Regards,
-mg


signature.asc
Description: This is a digitally signed message part
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] OT: extremely poor experience with Sun Download Manager

2007-06-14 Thread Richard Elling

Graham Perrin wrote:
Intending to experiment with ZFS, I have been struggling with what 
should be a simple download routine.


Sun Download Manager leaves a great deal to be desired.

In the Online Help for Sun Download Manager there's a section on 
troubleshooting, but if it causes *anyone* this much trouble
http://fuzzy.wordpress.com/2007/06/14/sundownloadmanagercrap/ then it 
should, surely, be fixed.


Sun Download Manager -- a FORCED step in an introduction to downloadable 
software from Sun -- should be PROBLEM FREE in all circumstances. It 
gives an extraordinarily poor first impression.


Though it is written in Java, and JavaIO has no concept of file
size limits, the downloads are limited to 2 GBytes.  This makes SDM
totally useless for me.  I filed the bug about 4 years ago, and it
is still not being fixed (yes, they know about it)  I recommend you
use something else, there are many good alternatives.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Boot manual setup in b65

2007-06-14 Thread Eric Schrock
On Thu, Jun 14, 2007 at 05:17:36AM -0700, Douglas Atique wrote:

 Do you think this panic when the root pool is not visible is a bug?
 Should I file one?
 

No.  There is nothing else the OS can do when it cannot mount the root
filesystem.  That being said, it should have a nicer message (using
FMA-style knowledge articles) that tells you what's actually going
wrong.  There is already a bug filed against this failure mode.

- Eric

--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] OT: extremely poor experience with Sun Download Manager

2007-06-14 Thread Richard Elling

more background below...

Richard Elling wrote:

Graham Perrin wrote:
Intending to experiment with ZFS, I have been struggling with what 
should be a simple download routine.


Sun Download Manager leaves a great deal to be desired.

In the Online Help for Sun Download Manager there's a section on 
troubleshooting, but if it causes *anyone* this much trouble
http://fuzzy.wordpress.com/2007/06/14/sundownloadmanagercrap/ then 
it should, surely, be fixed.


Sun Download Manager -- a FORCED step in an introduction to 
downloadable software from Sun -- should be PROBLEM FREE in all 
circumstances. It gives an extraordinarily poor first impression.


Though it is written in Java, and JavaIO has no concept of file
size limits, the downloads are limited to 2 GBytes.  This makes SDM
totally useless for me.  I filed the bug about 4 years ago, and it
is still not being fixed (yes, they know about it)  I recommend you
use something else, there are many good alternatives.


The important feature is the ability to restart a download in the middle.
Downloads cost real money, so if you have to restart from the beginning,
then it costs even more money.  Again, there are several good downloaders
out there which offer this feature.
 -- richard
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] OT: extremely poor experience with Sun Download Manager

2007-06-14 Thread Michael Schuster

People,

indeed, even though interesting and a problem, this is OT. I suggest that 
everyone who has trouble with SDM address it to the people who actually 
work on it - especially if you're a (potential) customer.


cheers
Michael

Richard Elling wrote:

more background below...

Richard Elling wrote:

Graham Perrin wrote:
Intending to experiment with ZFS, I have been struggling with what 
should be a simple download routine.


Sun Download Manager leaves a great deal to be desired.

In the Online Help for Sun Download Manager there's a section on 
troubleshooting, but if it causes *anyone* this much trouble
http://fuzzy.wordpress.com/2007/06/14/sundownloadmanagercrap/ then 
it should, surely, be fixed.


Sun Download Manager -- a FORCED step in an introduction to 
downloadable software from Sun -- should be PROBLEM FREE in all 
circumstances. It gives an extraordinarily poor first impression.


Though it is written in Java, and JavaIO has no concept of file
size limits, the downloads are limited to 2 GBytes.  This makes SDM
totally useless for me.  I filed the bug about 4 years ago, and it
is still not being fixed (yes, they know about it)  I recommend you
use something else, there are many good alternatives.


The important feature is the ability to restart a download in the middle.
Downloads cost real money, so if you have to restart from the beginning,
then it costs even more money.  Again, there are several good downloaders
out there which offer this feature.
 -- richard
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



--
Michael Schuster
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] OT: extremely poor experience with Sun Download Manager

2007-06-14 Thread John Martinez


On Jun 14, 2007, at 8:58 AM, Michael Schuster wrote:


People,

indeed, even though interesting and a problem, this is OT. I  
suggest that everyone who has trouble with SDM address it to the  
people who actually work on it - especially if you're a (potential)  
customer.


Michael, for the sake of others who aren't familiar with Sun's  
practices, how does one do that? Go to Sunsolve?


thanks,
-john
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] OT: extremely poor experience with Sun Download Manager

2007-06-14 Thread Michael Schuster

John Martinez wrote:


On Jun 14, 2007, at 8:58 AM, Michael Schuster wrote:


People,

indeed, even though interesting and a problem, this is OT. I suggest 
that everyone who has trouble with SDM address it to the people who 
actually work on it - especially if you're a (potential) customer.


Michael, for the sake of others who aren't familiar with Sun's 
practices, how does one do that? Go to Sunsolve?


I was afraid someone would ask :-)

I'm sorry I have no idea. In general, there should be a feedback mechanism 
attached to SDM. I've never used it myself, so I just had a look at 
http://www.sun.com/download/sdm/index.xml, and would suggest you use the 
'contact' link (bottom left). I realise it's probably very un-SDM-specific ...


HTH
--
Michael Schuster
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: OT: extremely poor experience with Sun Download

2007-06-14 Thread Graham Perrin

On 14 Jun 2007, at 12:22, Richard L. Hamilton wrote:


I wonder if you're all that interested in the first place.


I'm definitely interested, but at
http://www.sun.com/download/faq.xml#q1 Sun shouts (with an
exclamation mark):



Use the Sun Download Manager (SDM) for all your SDLC downloads!


Frankly, I was fairly busy with the thirty-odd steps for what should  
have been a simple download. So busy with those steps, that I didn't  
find time to contradict what was being shouted from the FAQ.


The other download methods are not obvious.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] OT: SDM - thanks, taking it off-list now

2007-06-14 Thread Graham Perrin

On 14 Jun 2007, at 16:58, Michael Schuster wrote:


People,

indeed, even though interesting and a problem, this is OT. I  
suggest that everyone who has trouble with SDM address it to the  
people who actually work on it - especially if you're a (potential)  
customer.


cheers
Michael


Thanks to all for the responses. Going off-list now.

It's not (yet) clear to me whether I can discover the required URLs  
without first downloading then opening the .jnlp files in a text  
editor. But I'm not inviting responses from the list, instead I'll  
seek advice from those who have already contributed.



I was afraid someone would ask :-)


:-) LOL

Regards
Graham


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] different filesystem size after raidz creation

2007-06-14 Thread Eric Schrock
See:

6308817 discrepancy between zfs and zpool space accounting
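
(In short: zpool list reports raw capacity including parity, while zfs/df
report usable space after redundancy.  A rough check, assuming five equal
devices: 340G / 5 = 68G per disk, and raidz2 leaves 3 of the 5 disks' worth
for data, so about 3 x 68G = 204G usable -- close to the 199G that df shows
once metadata overhead is taken out.)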

- Eric

On Thu, Jun 14, 2007 at 02:22:35PM +0200, Ronny Kopischke wrote:
 Hi,
 
 after creation of a raidz2 pool with
 
 # zpool create -f mypool raidz2 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
 
 the commands zpool list and df showing different sizes for the created fs
 
 # zpool list
 NAME      SIZE    USED   AVAIL    CAP  HEALTH  ALTROOT
 mypool    340G    194K    340G     0%  ONLINE  -
 
 # df -h
 mypool    199G     43K    199G     1%  /mypool
 
 What can explain this?
 
 
 Thanks Ronny
 



 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS API (again!), need quotactl(7I)

2007-06-14 Thread Matthew Ahrens

Pretorious wrote:

Issue with statvfs()


# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
mypool 108M   868M  26.5K  /mypool
mypool/home108M   868M  27.5K  /mypool/home
mypool/home/user-2 108M   868M   108M  /mypool/home/user-2

# df -h
Filesystem           size   used  avail capacity  Mounted on
...
mypool               976M    26K   868M     1%    /mypool        <-- used not in sync with 'zfs list'
mypool/home          976M    26K   868M     1%    /mypool/home
mypool/home/user-2   976M   108M   868M    12%    /mypool/home/user-2

or my interpretation of zfs list incorrect ?


Yeah, df's used is zfs list's REFER.  zfs list's used takes into 
account space used by all descendant filesystems/zvols.
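
(To put the two numbers side by side for one dataset -- just reusing the
dataset name from the example above:

# zfs list -o name,used,referenced mypool/home

'used' counts the descendants; 'referenced' is what df reports as used.)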


--matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can you create a degraded raidz vdev?

2007-06-14 Thread Matthew Ahrens

Chris Csanady wrote:

On 6/14/07, Paul Hedderly [EMAIL PROTECTED] wrote:


Is this currently possible?


You may be able to do this by specifying a sparse file for the last
device, and then immediately issuing a zpool offline of it after the
pool is created.  It seems to work, and I was able to create a pool
that way with no complaints.

You can create a 400GB sparse file with something like:

dd if=/dev/zero bs=1024k count=1 seek=409599 of=/path/to/vdev_file


Alternatively:
mkfile -n 400g /path/to/vdev_file
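
Putting the pieces together, the whole dance might look something like this
(device names and the file path are placeholders; -f may be needed if the
file and disk sizes don't match exactly):

# mkfile -n 400g /var/tmp/fake400g
# zpool create datadump raidz c1t0d0 c1t1d0 /var/tmp/fake400g
# zpool offline datadump /var/tmp/fake400g
  ... migrate data, free up a real 400GB disk ...
# zpool replace datadump /var/tmp/fake400g c1t2d0

The same trick would then be repeated for the second raidz vdev with zpool add.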

--matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] OT: extremely poor experience with Sun Download Manager

2007-06-14 Thread Frank Cusack
On June 14, 2007 11:16:00 AM +0100 Graham Perrin [EMAIL PROTECTED] 
wrote:

If it can't assuredly be fixed, then we should not be forced to use it.


You're not forced to use it.  I use wget just fine.
-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS ditto 'mirroring' on JAOBOD ? Please, pretty please!

2007-06-14 Thread Matthew Ahrens

Paul Hedderly wrote:

Now I can do that at the moment - well the copies/ditto kind anyway - but
if I lose or remove one of the discs, zfs will not start the zpool.
*That sucks!!!*


Agreed, that is a bug (perhaps related to 6540322).

--matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS ditto 'mirroring' on JAOBOD ? Please, pretty please!

2007-06-14 Thread Richard Elling

Paul Hedderly wrote:

Strikes me that at the moment Sun/ZFS team is missing a great opportunity.

Imagine Joe bloggs has a historical machine with Just Any Old Bunch Of Discs... 
(it's not me, no really).

He doesn't want to have to think too hard about pairing them up in mirrors or 
in raids - and sometimes they die or are just too small so need to get swapped 
out - or maybe they are iSCSI/AoE targets that might disappear (say the 'spare 
space' on a thousand desktop PC's...)

What Joe really wants to say to ZFS is: Here is a bunch of discs. Use them any way 
you like - but I'm setting 'copies=2' or 'stripes=5' and 'parity=2' so you just go 
allocating space on any of these discs trying to make sure I always have resilience at 
the data level.

Now I can do that at the moment - well the copies/ditto kind anyway - but if I 
lose or remove one of the discs, zfs will not start the zpool. *That
sucks!!!*

Because... if one disc has gone from a bunch of 10 or so, and I have all my 
data and metadata using dittos, then the data that was on that disc is 
replicated on the others - so losing one disc is not a problem (unless there 
wasn't space to store all the copies on the other discs, I know) but zfs should 
be able to start that zpool and give me the option to reditto the data that has 
lost copies on the dead/removed disc.

So I get nice flexible mirroring by just throwing a JAOBOD at zfs and it does 
all the hard work.


There is a fundamental difference between mirroring and ditto blocks.  The
allocation of data onto devices is guaranteed to be on both sides of a mirror.
The allocation of ditto blocks depends on the available space -- two or more
copies of the data could be placed on the same device.  If that device is
lost, then data is lost.

I expect the more common use of ditto blocks will be for single-disk systems
such as laptops.  This is a good thing for laptops.


I really can't see this being difficult - but I guess it is dependent on the zpool 
remove vdev functionality being complete.


There is another issue here.  As you note, figuring out the optimal placement
of data for a given collection of devices can be more work than we'd like.  We
can solve this problem using proven mathematical techniques.  If such a wizard
was available, would you still want to go down the path of possible data loss
due to a single device failure?
 -- richard
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS Panics

2007-06-14 Thread Andy Dishong
I have a problem with one of my zfs pools: every time I import it I get the error
below.  I cannot destroy it because it will not let me import it.  I have
tried trashing the cache file, but that did not help.  Is there a way to destroy the
config so I can start over?  Also up to date on patches.  Thanks, Andy

panic[cpu65]/thread=2a104299cc0: assertion failed: dmu_read(os, 
smo-smo_object, offset, size, entry_map) == 0 (0x5 == 0x0), file: 
../../common/fs/zfs/space_map.c, line: 307

02a104299000 genunix:assfail3+94 (7b3866c8, 5, 7b386708, 0, 7b386710, 133)
  %l0-3: 2000 0133  018da400
  %l4-7:  018a8000 011ee000 
02a1042990c0 zfs:space_map_load+1a4 (6000aaae378, 70968058, 1000, 
6000aaae048, 3c800, 1)
  %l0-3: 1050 06000a20b000  7b354ad0
  %l4-7: 7b35486c 7fff 7fff 1000
02a104299190 zfs:metaslab_activate+3c (6000aaae040, 4000, 
c000, 131fce64, 6000aaae040, c000)
  %l0-3: 7b386000  f806a600 
  %l4-7: 70968000 060009c65240 06000a7a0fc0 06000a909c88
02a104299240 zfs:metaslab_group_alloc+1bc (9fe68000, 200, 4000, 
9fe68000, 6000a398c08, )
  %l0-3: 0002 06000a909c90 0001 06000aaae040
  %l4-7: 8000  4ff34000 06000a398c28
02a104299320 zfs:metaslab_alloc_dva+114 (0, 9fe68000, 6000a398c08, 200, 
6000a7a0fc0, 88a5)
  %l0-3: 0001 0002 0003 060009ca5ea8
  %l4-7:  06000a909c88  06000a909c88
02a1042993f0 zfs:metaslab_alloc+2c (0, 200, 6000a398c08, 3, 88a5, 0)
  %l0-3: 06000a398c68 06000a317000 709686a0 
  %l4-7: 06000a4953be 0002 060009c65240 0001
02a1042994a0 zfs:zio_dva_allocate+4c (6000a3d3200, 7b3675a8, 6000a398c08, 
70968508, 70968400, 20001)
  %l0-3: 70968400 07030001 07030001 
  %l4-7:  01913800 0003 0007
02a104299550 zfs:zio_write_compress+1ec (6000a3d3200, 23e20b, 23e000, 
10001, 3, 6000a398c08)
  %l0-3:   0001 0200
  %l4-7:  0001 fc00 0001
02a104299620 zfs:zio_wait+c (6000a3d3200, 60009c65240, 7, 6000a3d3460, 3, 
88a5)
  %l0-3:  7b33f7d0 06000a398bc0 060009fffcf0
  %l4-7: 02a1042997c0 0002 0002 060009ffd908
02a1042996d0 zfs:dmu_objset_sync+12c (6000a398bc0, 60009eae1c0, 1, 1, 
60009fffcf0, 0)
  %l0-3: 06000a398c08  000e 4fd1
  %l4-7: 06000a398cc0 0020 06000a398ca0 06000a398d20
02a1042997e0 zfs:dsl_dataset_sync+c (600060321c0, 60009eae1c0, 60006032250, 
600071b3378, 600071b3378, 600060321c0)
  %l0-3: 0001 0007 0600071b33f8 0001
  %l4-7: 060006032248  0600071b4168 
02a104299890 zfs:dsl_pool_sync+64 (600071b32c0, 88a5, 600060321c0, 
60009f46480, 6000a3ff4c0, 6000a3ff4e8)
  %l0-3:  060009c65600 060009eae1c0 0600071b3458
  %l4-7: 0600071b3428 0600071b33f8 0600071b3368 06000a3d3200
02a104299940 zfs:spa_sync+1b0 (60009c65240, 88a5, 0, 0, 2a104299cc4, 1)
  %l0-3: 01914788  060009c65328 060009f46480
  %l4-7:  06000a7a1500 0600071b32c0 060009c653c0
02a104299a00 zfs:txg_sync_thread+134 (600071b32c0, 88a5, 0, 2a104299ab0, 
600071b33d0, 600071b33d2)
  %l0-3: 0600071b33e0 0600071b3390  0600071b3398
  %l4-7: 0600071b33d6 0600071b33d4 0600071b3388 88a6

syncing file systems... 6 4 done
dumping to /dev/md/dsk/d1, offset 215744512, content: kernel
100% done: 94326 pages dumped, compression ratio 5.71, dump succeeded
rebooting...
Resetting...


---
Andrew Dishong
Solutions Architect
Email: [EMAIL PROTECTED]
Access Line: 877.226.8297
Cell: 303-808-5884
Text Page: [EMAIL PROTECTED]
Personal WebPage: http://webhome.central/adishong

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can you create a degraded raidz vdev?

2007-06-14 Thread Ruben Wisniewski
Hi Paul,

I think I have a similar problem: I have 4 disks, two empty, two full,
and want to create a RAIDZ1 from these disks.

But I'm new to zfs; maybe you can explain to me how you did it, if it
works :)

Thanks in advance :)


Greetings Cyron

 I have 6 400GB discs and want to make two RAIDZ 2/1 vdevs out of them
 (ie 2stripe 1parity).
 
 The problem is that 4 are in use... so I want to do something like:
 
   zpool create datadump raidz c1t0d0 c1t1d0 missing
 
 Then move a bunch of data into datadump, to free up another two
 discs, then
 
   zpool add datadump raidz c1t2d0 c1t3d0 missing
 
 Now move the rest of the data over, freeing the last two drives which
 I would then add as spares to let zfs complete the parity of those
 vdevs.


signature.asc
Description: PGP signature
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Can not export ZFS pool

2007-06-14 Thread Andy Dishong
Is there a way to get past this?  I cannot re-create the pool until I export it.

zpool export -f zonesHA2
cannot iterate filesystems: I/O error

zpool status
  pool: zonesHA2
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: none requested
config:

NAME                     STATE     READ WRITE CKSUM
zonesHA2                 ONLINE       0     0    48
  c1t5006016B30600E46d8  ONLINE       0     0    48

errors: 0 data errors, use '-v' for a list


errors: The following persistent errors have been detected:

  DATASET  OBJECT  RANGE
  1a   0   lvl=4294967295 blkid=0




---
Andrew Dishong
Solutions Architect
Sun Client Services - JPMorgan Account Team
Email: [EMAIL PROTECTED]
Access Line: 877.226.8297
Cell: 303-808-5884
Text Page: [EMAIL PROTECTED]
Personal WebPage: http://webhome.central/adishong

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: zfs reports small st_size for directories?

2007-06-14 Thread Bill Sommerfeld
On Thu, 2007-06-14 at 09:09 +0200, [EMAIL PROTECTED] wrote:
 The implication of which, of course, is that any app build for Solaris 9
 or before which uses scandir may have picked up a broken one.

or any app which includes its own copy of the BSD scandir code, possibly
under a different name, because not all systems support scandir..

it can be impossible to fix all copies of a bug which has been cut &
pasted too many times...

- Bill




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best use of 4 drives?

2007-06-14 Thread Ruben Wisniewski
Hi Rick,

what do you think about this configuration:

Partition all disks like this:
  7 GiB
  493 GiB

Make a RAIDZ1 out of the 493 GiB partitions and a RAID-5 out of the 7 GiB
partitions. Put swap and the root on the RAID-5, and the directories with
the user data in the ZFS storage.

Back up / daily to the ZFS pool :)
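
Roughly, on the command line (all slice, device and metadevice names below
are placeholders, I'm assuming SVM for the RAID-5 side, and whether you can
actually boot from an SVM RAID-5 volume is a separate question):

  # metadb -a -f c0t0d0s7 c0t1d0s7
  # metainit d10 -r c0t0d0s0 c0t1d0s0 c0t2d0s0 c0t3d0s0
  # zpool create tank raidz c0t0d0s1 c0t1d0s1 c0t2d0s1 c0t3d0s1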


Greetings Cyron

 I'm putting together a NexentaOS (b65)-based server that has 4 500 GB
 drives on it. Currently it has two, set up as a ZFS mirror. I'm able
 to boot Nexenta from it, and it seems to work ok. But, as I've
 learned, the mirror is not properly redundant, and so I can't just
 have a drive fail (when I pull one, the OS ends up hanging, and even
 if I replace it, I have to reboot. I asked about this on the Nexenta
 list: http://www.gnusolaris.org/phpbb/viewtopic.php?t=6233).
 
 I think the problems stem from limitations in booting from ZFS. So,
 I'm going to re-do the drive allocations, but I'm not sure the best
 way to go. I'm hoping this list will give me some advice.
 
 I was thinking I'd put the basic OS on one drive (possibly a ZFS
 pool), and then make a RAID-Z storage pool from the other three
 drives. The RAID-Z pool would contain filesystems for /usr, /home,
 etc. The drawback to this approach is that a 500 GB drive is far too
 large for the basic OS, and seems to end up wasted.
 
 An alternative would be to partition one drive, give it 20 GB for the
 OS and the rest for use by ZFS to allocated to a storage pool, but at
 this point ZFS isn't working with the raw device, and so I'm not sure
 what limitations this may place on me.
 
 BTW, I don't mind if the boot drive fails, because it will be fairly
 easy to replace, and this server is only mission-critical to me and
 my friends.
 
 So...suggestions? What's a good way to utilize the power and glory of
 ZFS in a 4x 500 GB system, without unnecessary waste?
 
 Thanks much!
  
  
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


--- fortune ---
All the people who otherwise prevent nothing now want to prevent AIDS.
-- Werner Schneider


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Btrfs, COW for Linux [somewhat OT]

2007-06-14 Thread David Magda

Hello,

Somewhat off topic, but it seems that someone released a COW file  
system for Linux (currently in 'alpha'):


* Extent based file storage (2^64 max file size)
* Space efficient packing of small files
* Space efficient indexed directories
* Dynamic inode allocation
* Writable snapshots
* Subvolumes (separate internal filesystem roots)
- Object level mirroring and striping
* Checksums on data and metadata (multiple algorithms available)
- Strong integration with device mapper for multiple device support
- Online filesystem check
* Very fast offline filesystem check
- Efficient incremental backup and FS mirroring

http://lkml.org/lkml/2007/6/12/242
http://oss.oracle.com/~mason/btrfs/

Via Storage Mojo:

http://storagemojo.com/?p=478

--
David Magda dmagda at ee.ryerson.ca
Because the innovator has for enemies all those who have done well under
the old conditions, and lukewarm defenders in those who may do well
under the new. -- Niccolo Machiavelli, _The Prince_, Chapter VI
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Btrfs, COW for Linux [somewhat OT]

2007-06-14 Thread mike

it's about time. this hopefully won't spark another license debate,
etc... ZFS may never get into linux officially, but there's no reason
a lot of the same features and ideologies can't make it into a
linux-approved-with-no-arguments filesystem...

as a more SOHO user i like ZFS mainly for its COW and integrity, and
being able to add more storage later on. the latter is nothing new
though. but telling the world "who needs hardware raid? software can
do it much better" is a concept that excites me; it would be great for
linux to have something like that as well that could be merged into
the kernel without any debate.

On 6/14/07, David Magda [EMAIL PROTECTED] wrote:

Hello,

Somewhat off topic, but it seems that someone released a COW file
system for Linux (currently in 'alpha'):

   * Extent based file storage (2^64 max file size)
   * Space efficient packing of small files
   * Space efficient indexed directories
   * Dynamic inode allocation
   * Writable snapshots
   * Subvolumes (separate internal filesystem roots)
   - Object level mirroring and striping
   * Checksums on data and metadata (multiple algorithms available)
   - Strong integration with device mapper for multiple device support
   - Online filesystem check
   * Very fast offline filesystem check
   - Efficient incremental backup and FS mirroring

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] NFS and Tar/Star Performance

2007-06-14 Thread eric kustarz


On Jun 13, 2007, at 9:22 PM, Siegfried Nikolaivich wrote:



On 12-Jun-07, at 9:02 AM, eric kustarz wrote:
Comparing a ZFS pool made out of a single disk to a single UFS  
filesystem would be a fair comparison.


What does your storage look like?


The storage looks like:

NAMESTATE READ WRITE CKSUM
tankONLINE   0 0 0
  raidz1ONLINE   0 0 0
c0t0d0  ONLINE   0 0 0
c0t1d0  ONLINE   0 0 0
c0t2d0  ONLINE   0 0 0
c0t4d0  ONLINE   0 0 0
c0t5d0  ONLINE   0 0 0
c0t6d0  ONLINE   0 0 0

All disks are local SATA/300 drives with SATA framework on marvell  
card.  The SATA drives are consumer drives with 16MB cache.


I agree it's not a fair comparison, especially with raidz over 6  
drives.  However, a performance difference of 10x is fairly large.


I do not have a single drive available to test ZFS with and compare  
it to UFS, but I have done similar tests in the past with one ZFS  
drive without write cache, etc. vs. a UFS drive of the same brand/ 
size.  The difference was still on the order of 10x slower for the  
ZFS drive over NFS.  What could cause such a large difference?  Is  
there a way to measure NFS_COMMIT latency?




You should do the comparison on a single drive.  For ZFS, enable the
write cache, as it's safe to do so.  For UFS, disable the write cache.
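
In case it helps, the write cache is toggled per disk from format's expert
mode (menu names from memory, so double-check on your build):

  # format -e          (then select the disk)
  format> cache
  cache> write_cache
  write_cache> enable     (or disable, for the UFS run)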


Make sure you're on non-debug bits.

eric

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Btrfs, COW for Linux [somewhat OT]

2007-06-14 Thread Frank Cusack

On June 14, 2007 3:57:55 PM -0700 mike [EMAIL PROTECTED] wrote:

as a more SOHO user i like ZFS mainly for it's COW and integrity, and


huh.  As a SOHO user, why do you care about COW?

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Btrfs, COW for Linux [somewhat OT]

2007-06-14 Thread mike

because i don't want bitrot to destroy the thousands of pictures and
memories i keep? i keep important personal documents, etc. filesystem
corruption is not a feature to me. perhaps i spoke incorrectly, but i
consider COW to be one of the reasons a filesystem can keep itself in
check: the disk write will be transactional and guaranteed if it reports
success, correct?

i still plan on using offsite storage services to maintain the
physical level of redundancy (house burns down, equipment is stolen,
HDs will always die at some point, etc.) but as a user who has had
corruption happen many times (FAT32, NTFS, XFS, JFS) it is encouraging
to see more options that put emphasis on integrity...


On 6/14/07, Frank Cusack [EMAIL PROTECTED] wrote:

On June 14, 2007 3:57:55 PM -0700 mike [EMAIL PROTECTED] wrote:
 as a more SOHO user i like ZFS mainly for it's COW and integrity, and

huh.  As a SOHO user, why do you care about COW?

-frank


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Btrfs, COW for Linux [somewhat OT]

2007-06-14 Thread Frank Cusack

On June 14, 2007 4:40:18 PM -0700 mike [EMAIL PROTECTED] wrote:

because i don't want bitrot to destroy the thousands of pictures and
memories i keep?


COW doesn't stop that.


i keep important personal documents, etc. filesystem
corruption is not a feature to me. perhaps i spoke incorrectly but i
consider COW to be one of the reasons a filesystem can keep itself in
check, the disk write will be transactional and guaranteed if it says
successful, correct?


Yes, but there are many ways to get transactions, e.g. journalling.

-frank
ps. top posting especially sucks when you ask multiple questions.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best use of 4 drives?

2007-06-14 Thread Ian Collins
Rick Mann wrote:

 BTW, I don't mind if the boot drive fails, because it will be fairly easy to 
 replace, and this server is only mission-critical to me and my friends.

 So...suggestions? What's a good way to utilize the power and glory of ZFS in 
 a 4x 500 GB system, without unnecessary waste?

   
Bung in (add a USB one if you don't have space) a small boot drive and
use all the others for ZFS.

Ian

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Btrfs, COW for Linux [somewhat OT]

2007-06-14 Thread mike

On 6/14/07, Frank Cusack [EMAIL PROTECTED] wrote:

Yes, but there are many ways to get transactions, e.g. journalling.


ext3 is journaled. it doesn't seem to always be able to recover data.
it also takes forever to fsck. i thought COW might alleviate some of
the fsck needs... it just seems like a more efficient (or guaranteed?)
method of disk commitment. but i am speaking purely from the
sidelines. i don't know all the internals of filesystems, just the
ones that have bitten me in the past.


ps. top posting especially sucks when you ask multiple questions.


yes, sir!
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Best use of 4 drives?

2007-06-14 Thread Rick Mann
Ian Collins wrote:

 Bung in (add a USB one if you don't have space) a small boot drive and
 use all the others for for ZFS.

Not a bad idea; I'll have to see where I can put one.

But, I thought I read somewhere that one can't use ZFS for swap. Or maybe I 
read this:

Slices should only be used under the following conditions: 
* The device name is nonstandard. 
* A single disk is shared between ZFS and another file system, such as UFS. 
* A disk is used as a swap or a dump device.

I definitely *don't* want to use flash for swap...
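
If I go the shared-disk route, I gather it boils down to something like
this (slice numbers purely made up): a small UFS slice for the OS, one
slice for swap/dump, and the leftover slice handed to ZFS:

  # swap -a /dev/dsk/c0d0s1
  # dumpadm -d /dev/dsk/c0d0s1
  # zpool create datadump c0d0s7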
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Btrfs, COW for Linux [somewhat OT]

2007-06-14 Thread Frank Cusack

On June 14, 2007 5:07:39 PM -0700 mike [EMAIL PROTECTED] wrote:

On 6/14/07, Frank Cusack [EMAIL PROTECTED] wrote:

Yes, but there are many ways to get transactions, e.g. journalling.


ext3 is journaled. it doesn't seem to always be able to recover data.


zfs is COW.  it isn't always able to recover data.

Like ext3, this is the result of bugs, not a COW vs journalling thing.
Or perhaps in a few ext3 cases, the result of bad data making it to
disk ... but still not related to COW vs journalling.


it also takes forever to fsck.


the journal is supposed to pretty much eliminate fsck time.  you might
be doing a full fsck as opposed to journal playback.


i thought COW might alleviate some of the fsck needs...


It does, but that's not unique to COW.


it just seems like a more efficient (or guaranteed?)
method of disk commitment.


it might be more efficient.  i *think* this depends on your usage pattern.
i think that typically, it does outperform other methods.

anyway, my point is that i didn't think COW was in and of itself a feature
a home or SOHO user would really care about.  it's more an implementation
detail of zfs than a feature.  i'm sure this is arguable.


ps. top posting especially sucks when you ask multiple questions.


yes, sir!


heh

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best use of 4 drives?

2007-06-14 Thread Darren Dunham
 Rick Mann wrote:
 
  BTW, I don't mind if the boot drive fails, because it will be fairly easy 
  to replace, and this server is only mission-critical to me and my friends.
 
  So...suggestions? What's a good way to utilize the power and glory of ZFS 
  in a 4x 500 GB system, without unnecessary waste?

 Bung in (add a USB one if you don't have space) a small boot drive and
 use all the others for for ZFS.

Having the ability to do something like a pivot root would make that
even more useful.

-- 
Darren Dunham   [EMAIL PROTECTED]
Senior Technical Consultant TAOShttp://www.taos.com/
Got some Dr Pepper?   San Francisco, CA bay area
  This line left intentionally blank to confuse you. 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Btrfs, COW for Linux [somewhat OT]

2007-06-14 Thread Dickon Hood
On Thu, Jun 14, 2007 at 17:19:18 -0700, Frank Cusack wrote:

: anyway, my point is that i didn't think COW was in and of itself a feature
: a home or SOHO user would really care about.  it's more an implementation
: detail of zfs than a feature.  i'm sure this is arguable.

I'm really not sure I agree with you given the way you've put it.  Yes,
the average user doesn't give a damn whether his filesystem is
copy-on-write, journalled, or none of the above if you ask him.  Your
average user *does* care, however, when faced with the consequences, and
*will* complain when things break.

CoW does appear to have some distinct advantages over other methods when
faced with the unreliable hardware we all have to deal with these days.
When software makes it better, everybody wins.

Now all I need is a T2000 that behaves correctly rather than a T2000 that
behaves as if it's missing the magic in /etc/system that it has...

-- 
Dickon Hood

Due to digital rights management, my .sig is temporarily unavailable.
Normal service will be resumed as soon as possible.  We apologise for the
inconvenience in the meantime.

No virus was found in this outgoing message as I didn't bother looking.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best use of 4 drives?

2007-06-14 Thread Bart Smaalders

Ian Collins wrote:

Rick Mann wrote:

BTW, I don't mind if the boot drive fails, because it will be fairly easy to 
replace, and this server is only mission-critical to me and my friends.

So...suggestions? What's a good way to utilize the power and glory of ZFS in a 
4x 500 GB system, without unnecessary waste?

  

Bung in (add a USB one if you don't have space) a small boot drive and
use all the others for for ZFS.

Ian



This is how I run my home server w/ 4 500GB drives - a small
40GB IDE drive provides root & swap/dump device, the 4 500 GB
drives are RAIDZ & contain all the data.   I ran out of drive
bays, so I used one of those 5 1/4 - 3.5 adaptor brackets
to hang the boot drive where a second DVD drive would go...
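
In zpool terms that's roughly the following, with made-up device names;
/ and swap stay on the small IDE drive outside the pool:

  # zpool create tank raidz c2d0 c3d0 c4d0 c5d0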


- Bart




--
Bart Smaalders  Solaris Kernel Performance
[EMAIL PROTECTED]   http://blogs.sun.com/barts
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best use of 4 drives?

2007-06-14 Thread Bill Sommerfeld
On Thu, 2007-06-14 at 17:45 -0700, Bart Smaalders wrote:
 This is how I run my home server w/ 4 500GB drives - a small
 40GB IDE drive provides root & swap/dump device, the 4 500 GB
 drives are RAIDZ & contain all the data.   I ran out of drive
 bays, so I used one of those 5 1/4 - 3.5 adaptor brackets
 to hang the boot drive where a second DVD drive would go...

I'm doing something very similar (I actually used your blog post as the
start of my shopping list).  I'd like to replace the boot drive with a
compact flash card but run with zfs root once booted, but haven't had
the time to investigate whether there's enough left over from the
initial zfs root integration to permit the ufs boot/zfs root hack (or
some other pivot hack) to work..

If all you need to put on the CF is grub, a unix binary, and the boot
archive, I should be able to use a relatively small card and still have
enough room for a couple copies of everything.

- Bill


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Best use of 4 drives?

2007-06-14 Thread Bart Smaalders

Ian Collins wrote:

Rick Mann wrote:

Ian Collins wrote:

  

Bung in (add a USB one if you don't have space) a small boot drive and
use all the others for for ZFS.


Not a bad idea; I'll have to see where I can put one.

But, I thought I read somewhere that one can't use ZFS for swap. Or maybe I 
read this:

  

I wouldn't bother, just spec the machine with enough RAM so swap's only
real use is as a dump device.  You can always use a swap file if you
have to.

Ian


If you compile stuff (like opensolaris), you'll want swap space.
Esp. if you use dmake; 30 parallel C++ compilations can use up a
lot of RAM.

- Bart



--
Bart Smaalders  Solaris Kernel Performance
[EMAIL PROTECTED]   http://blogs.sun.com/barts
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Best use of 4 drives?

2007-06-14 Thread Ian Collins
Bart Smaalders wrote:
 Ian Collins wrote:
 Rick Mann wrote:
 Ian Collins wrote:

  
 Bung in (add a USB one if you don't have space) a small boot drive and
 use all the others for for ZFS.
 
 Not a bad idea; I'll have to see where I can put one.

 But, I thought I read somewhere that one can't use ZFS for swap. Or
 maybe I read this:

   
 I wouldn't bother, just spec the machine with enough RAM so swap's only
 real use is as a dump device.  You can always use a swap file if you
 have to.

 Ian

 If you compile stuff (like opensolaris), you'll want swap space.
 Esp. if you use dmake; 30 parallel C++ compilations can use up a
 lot of RAM.

I should have said I wouldn't bother with swap on ZFS.

If your compiles use swap, you are either doing too many jobs, or you
don't have enough RAM!

Saying that, I did have one C++ file that used swap on my laptop with 1G
of RAM and that was on a serial make.
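
For completeness, adding a swap file later is just the following (assuming
the file lives on UFS; swapping to a file on ZFS isn't supported as far as
I know), plus a line in /etc/vfstab to make it persistent:

  # mkfile 2g /export/swapfile
  # swap -a /export/swapfile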

Ian
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: zfs reports small st_size for directories?

2007-06-14 Thread Ed Ravin
On Fri, Jun 15, 2007 at 12:27:15AM +0200, Joerg Schilling wrote:
 Bill Sommerfeld [EMAIL PROTECTED] wrote:
 
  On Thu, 2007-06-14 at 09:09 +0200, [EMAIL PROTECTED] wrote:
   The implication of which, of course, is that any app build for Solaris 9
   or before which uses scandir may have picked up a broken one.
 
  or any app which includes its own copy of the BSD scandir code, possibly
  under a different name, because not all systems support scandir..

So far, all the local implementations of the BSD scandir() that I've
found are called scandir().  Some of them are smart enough not to use
st_size; the rest of them are straight copies of the BSD code.

 15 years ago, Novell Netware started to return a fixed size of 512 for all
 directories via NFS. 
 
 If there is still unfixed code, there is no help.

The Novell behavior, commendable as it is, did not break the BSD scandir()
code, because BSD scandir() fails in the other direction, when st_size is
a low number, like less than 24.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: OT: extremely poor experience with Sun Download

2007-06-14 Thread MC
 Intending to experiment with ZFS, I have been struggling with what
 should be a simple download routine.
 
 Sun Download Manager leaves a great deal to be desired.
 
 In the Online Help for Sun Download Manager there's a section on
 troubleshooting, but if it causes *anyone* this much trouble
 http://fuzzy.wordpress.com/2007/06/14/sundownloadmanagercrap/ then
 it should, surely, be fixed.
 
 Sun Download Manager -- a FORCED step in an introduction to
 downloadable software from Sun -- should be PROBLEM FREE in all
 circumstances. It gives an extraordinarily poor first impression.
 
 If it can't assuredly be fixed, then we should not be forced to use it.
 
 (True, I might have ordered rather than downloaded a DVD, but Sun
 Download Manager has given such a poor impression that right now I'm
 disinclined to pay.)
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Sun Download Manager, the split-file downloads, and their download tracking
system (you can't download a file more than once per session, WTF?) are all
relics of the past.

Sun would be well served by joining the rest of the internet in the present
day, where people download files from simple HTTP mirrors using software
already on their computers.

I'd love for these comments to make it to the right people, but who knows how 
to make that happen.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss