Re: [zfs-discuss] ZFS questions (hybrid HDs)

2006-06-20 Thread Erik Trimble
Richard Elling wrote: Dana H. Myers wrote: What I do not know yet is exactly how the flash portion of these hybrid drives is administered. I rather expect that a non-hybrid-aware OS may not actually exercise the flash storage on these drives by default; or should I say, the flash storage will o

[zfs-discuss] Overview (rollup) of recent activity on zfs-discuss

2006-06-20 Thread Eric Boutilier
For background on what this is, see: http://www.opensolaris.org/jive/message.jspa?messageID=24416#24416 http://www.opensolaris.org/jive/message.jspa?messageID=25200#25200 = zfs-discuss 06/01 - 06/15 = Threads or announcements origi

Re: [zfs-discuss] ZFS questions

2006-06-20 Thread Darren Reed
[EMAIL PROTECTED] wrote: Also, options such as "-nomtime" and "-noctime" have been introduced alongside "-noatime" in some free operating systems to limit the amount of meta data that gets written back to disk. Those seem rather pointless. (mtime and ctime generally imply other changes,

Re: [zfs-discuss] ZFS questions

2006-06-20 Thread Jeff Bonwick
> I assume ZFS only writes something when there is actually data? Right. Jeff ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] ZFS questions

2006-06-20 Thread Nathan Kroenert
And, this is a worst case, no? If the device itself also does some funky stuff under the covers, and ZFS only writes an update if there is *actually* something to write, then it could be much, much longer than 4 years. Actually - that's an interesting point. I assume ZFS only writes something when the

Re: [zfs-discuss] ZFS questions

2006-06-20 Thread Richard Elling
Dana H. Myers wrote: What I do not know yet is exactly how the flash portion of these hybrid drives is administered. I rather expect that a non-hybrid-aware OS may not actually exercise the flash storage on these drives by default; or should I say, the flash storage will only be available to a h

Re: [zfs-discuss] ZFS questions

2006-06-20 Thread Richard Elling
Eric Schrock wrote: On Tue, Jun 20, 2006 at 11:17:42AM -0700, Jonathan Adams wrote: On Tue, Jun 20, 2006 at 09:32:58AM -0700, Richard Elling wrote: Flash is (can be) a bit more sophisticated. The problem is that they have a limited write endurance -- typically spec'ed at 100k writes to any sin

[zfs-discuss] Proposal for new basic privileges related with file system access checks

2006-06-20 Thread Nicolai Johannes
For my Google Summer of Code project for OpenSolaris, my job is to think about new basic privileges. I would like to propose five new basic privileges that relate to file system access checks and may be used for daemons like ssh or ssh-agent that (after starting up) never read or write user specifi
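
A minimal sketch of how a daemon could shed a basic privilege after startup using the existing Solaris privilege interfaces; the five proposed privilege names are not given in the excerpt, so the existing PRIV_FILE_LINK_ANY stands in purely as an illustration:

#include <priv.h>
#include <stdio.h>
#include <stdlib.h>

/*
 * Illustration only: drop an existing basic privilege from the permitted
 * set after startup, the way a daemon might drop one of the proposed
 * file-access privileges. PRIV_FILE_LINK_ANY is a stand-in here.
 */
int
main(void)
{
    priv_set_t *pset = priv_allocset();

    if (pset == NULL || getppriv(PRIV_PERMITTED, pset) != 0) {
        perror("getppriv");
        exit(1);
    }
    (void) priv_delset(pset, PRIV_FILE_LINK_ANY);
    if (setppriv(PRIV_SET, PRIV_PERMITTED, pset) != 0) {
        perror("setppriv");
        exit(1);
    }
    priv_freeset(pset);
    /* ... daemon keeps running without that privilege ... */
    return (0);
}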

Re: [zfs-discuss] ZFS and Flash archives

2006-06-20 Thread Lori Alt
I'm looking into this and will send out an answer in a day or two. Lori Alt Constantin Gonzalez wrote: Hi, I'm currently setting up a demo machine. It would be nice to set up everything the way I like it, including a number of ZFS filesystems, then create a flash archive, then install from th

Re: [zfs-discuss] ZFS questions

2006-06-20 Thread Dana H. Myers
Richard Elling wrote: > Erik Trimble wrote: >> Oh, and the newest thing in the consumer market is called "hybrid >> drives", which is a melding of a Flash drive with a Winchester >> drive. It's originally targeted at the laptop market - think a 1GB >> flash memory welded to a 40GB 2.5" hard dri

Re: [zfs-discuss] zfs man pages on Open Solaris

2006-06-20 Thread Cindy Swearingen
D'oh! Thanks for the tip. I was testing the Solaris Express version, which is working fine, here: http://docs.sun.com/app/docs/doc/819-2252/6n4i8rtv2?a=view If you're working with OpenSolaris features, then the Solaris Express docs will more closely correlate than the Solaris 10 man pages. I'l

Re: [zfs-discuss] ZFS questions

2006-06-20 Thread Eric Schrock
On Tue, Jun 20, 2006 at 02:18:34PM -0600, Gregory Shaw wrote: > Wouldn't that be: > > 5 seconds per write = 86400/5 = 17280 writes per day > 256 rotated locations for 17280/256 = 67 writes per location per day > > Resulting in (10^5/67) ~1492 days or 4.08 years before failure? > > That's still

Re: [zfs-discuss] ZFS questions

2006-06-20 Thread Gregory Shaw
Wouldn't that be: 5 seconds per write = 86400/5 = 17280 writes per day 256 rotated locations for 17280/256 = 67 writes per location per day Resulting in (10^5/67) ~1492 days or 4.08 years before failure? That's still a long time, but it's not 100 years. On Jun 20, 2006, at 12:47 PM, Eric Sch
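
The same arithmetic as a tiny sketch, assuming the figures quoted in the thread (100k-write endurance, one update every 5 seconds, 256 rotated locations); rounding 67.5 writes/location/day down to 67 gives the ~1492 days / 4.08 years above:

#include <stdio.h>

int
main(void)
{
    double endurance = 100000.0;           /* spec'ed writes per location */
    double writes_per_day = 86400.0 / 5.0; /* one update every 5 seconds */
    double locations = 256.0;              /* rotated write locations */

    double per_location = writes_per_day / locations;  /* ~67.5 per day */
    double days = endurance / per_location;            /* ~1481 days */

    printf("%.0f writes/day, %.1f per location/day, %.0f days (%.2f years)\n",
        writes_per_day, per_location, days, days / 365.25);
    return (0);
}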

Re: [zfs-discuss] fdsync(5, FSYNC) problem and ZFS

2006-06-20 Thread Neil Perrin
Sean, I'm not sure yet that the bugs below are responsible for this, but I don't know what is either. 15 minutes to do an fdsync is way outside the slowdown usually seen. The footprint for 6413510 is that when a huge amount of data is being written non-synchronously and an fsync comes in for the sa
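
A trivial harness along these lines can show the footprint Neil describes: write a large amount of data without synchronous flags, then time a single fsync(). The file name and sizes here are arbitrary choices, not the reporter's actual test case:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

int
main(void)
{
    static char buf[1 << 20];   /* 1 MB of junk per write */
    struct timeval t0, t1;
    int fd, i;

    fd = open("/tank/fsync_test", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");
        return (1);
    }
    memset(buf, 'x', sizeof (buf));
    for (i = 0; i < 1024; i++)      /* ~1 GB of non-synchronous dirty data */
        (void) write(fd, buf, sizeof (buf));

    gettimeofday(&t0, NULL);
    (void) fsync(fd);               /* how long does this one call take? */
    gettimeofday(&t1, NULL);

    printf("fsync took %.2f s\n", (t1.tv_sec - t0.tv_sec) +
        (t1.tv_usec - t0.tv_usec) / 1e6);
    (void) close(fd);
    return (0);
}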

Re: [zfs-discuss] zfs man pages on Open Solaris

2006-06-20 Thread Ricardo Correia
I'm still having problems. The specific link that I'm looking at is http://docs.sun.com/app/docs/doc/816-5175/6mbba7f4u?a=view which has, for example, a link to zoneadm(1M) with the address http://docs.sun.com/app/docs/doc/REFMAN1M%20Version%205.0 which gives the "Error: The requested document

Re: [zfs-discuss] ZFS questions

2006-06-20 Thread Casper . Dik
>Also, options such as "-nomtime" and "-noctime" have been introduced >alongside "-noatime" in some free operating systems to limit the amount >of meta data that gets written back to disk. Those seem rather pointless. (mtime and ctime generally imply other changes, often to the inode; atime doe
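
For anyone wondering which timestamp each operation touches, a small sketch (the file path is arbitrary): a plain read bumps only st_atime, which is why noatime is the interesting knob, while a write moves st_mtime and st_ctime along with the data itself:

#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

static void
show(const char *tag, const char *path)
{
    struct stat st;

    if (stat(path, &st) == 0)
        printf("%-12s atime=%ld mtime=%ld ctime=%ld\n", tag,
            (long)st.st_atime, (long)st.st_mtime, (long)st.st_ctime);
}

int
main(void)
{
    const char *path = "/var/tmp/timestamps";   /* arbitrary test file */
    char c = 'x';
    int fd = open(path, O_RDWR | O_CREAT, 0644);

    if (fd < 0)
        return (1);
    show("created:", path);
    sleep(1);
    (void) write(fd, &c, 1);        /* data change: mtime and ctime move */
    show("after write:", path);
    sleep(1);
    (void) lseek(fd, 0, SEEK_SET);
    (void) read(fd, &c, 1);         /* read only: just atime moves */
    show("after read:", path);
    (void) close(fd);
    return (0);
}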

Re: [zfs-discuss] zfs man pages on Open Solaris

2006-06-20 Thread Cindy Swearingen
Hi Ricardo, I just tested most of the links from the zones(5) man page on docs.sun.com and they all seem to be working now. Outages occurred a couple of weeks ago, but those appear to have been resolved. Please let me know if you have any more problems with docs.sun.com and I'll file a service ti

Re: [zfs-discuss] zfs man pages on Open Solaris

2006-06-20 Thread Ricardo Correia
There are still a few problems in docs.sun.com. If you go to the zones(5) manpage and click on any link, it says "Error: The requested document could not be found." I think there was also a link in one of the zones commands or libc functions that pointed to the wrong manpage, but I don't rememb

Re: [zfs-discuss] ZFS questions

2006-06-20 Thread Darren Reed
Jonathan Adams wrote: On Tue, Jun 20, 2006 at 09:32:58AM -0700, Richard Elling wrote: Flash is (can be) a bit more sophisticated. The problem is that they have a limited write endurance -- typically spec'ed at 100k writes to any single bit. The good flash drives use block relocation, spares,

Re: [zfs-discuss] ZFS questions

2006-06-20 Thread Bill Moore
On Tue, Jun 20, 2006 at 11:17:42AM -0700, Jonathan Adams wrote: > On Tue, Jun 20, 2006 at 09:32:58AM -0700, Richard Elling wrote: > > Flash is (can be) a bit more sophisticated. The problem is that they > > have a limited write endurance -- typically spec'ed at 100k writes to > > any single bit.

Re: [zfs-discuss] ZFS questions

2006-06-20 Thread Eric Schrock
On Tue, Jun 20, 2006 at 11:17:42AM -0700, Jonathan Adams wrote: > On Tue, Jun 20, 2006 at 09:32:58AM -0700, Richard Elling wrote: > > Flash is (can be) a bit more sophisticated. The problem is that they > > have a limited write endurance -- typically spec'ed at 100k writes to > > any single bit.

Re: [zfs-discuss] ZFS questions

2006-06-20 Thread Jonathan Adams
On Tue, Jun 20, 2006 at 09:32:58AM -0700, Richard Elling wrote: > Flash is (can be) a bit more sophisticated. The problem is that they > have a limited write endurance -- typically spec'ed at 100k writes to > any single bit. The good flash drives use block relocation, spares, and > write spreadin
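
A toy sketch of the write-spreading idea: rotate a frequently rewritten record across a fixed set of slots so no single location absorbs every update. The 256-slot figure matches the number used later in this thread; the layout is illustrative only, not how any particular firmware (or the ZFS label) actually does it:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NSLOTS      256
#define SLOT_SIZE   512

static uint8_t flash[NSLOTS][SLOT_SIZE];     /* stand-in for flash blocks */
static unsigned int next_slot;               /* round-robin cursor */
static unsigned int writes_per_slot[NSLOTS]; /* wear accounting */

static void
spread_write(const void *rec, size_t len)
{
    unsigned int s = next_slot;

    next_slot = (next_slot + 1) % NSLOTS;
    memcpy(flash[s], rec, len < SLOT_SIZE ? len : SLOT_SIZE);
    writes_per_slot[s]++;
}

int
main(void)
{
    char rec[64];
    int i;

    /* One update every 5 seconds for a day = 17280 writes... */
    for (i = 0; i < 17280; i++) {
        (void) snprintf(rec, sizeof (rec), "update %d", i);
        spread_write(rec, sizeof (rec));
    }
    /* ...but each slot absorbs only 17280/256 = ~68 of them. */
    printf("slot 0 written %u times\n", writes_per_slot[0]);
    return (0);
}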

[zfs-discuss] Re: nevada_41 and zfs disk partition

2006-06-20 Thread Jürgen Keil
> > What throughput do you get for the full untar (untarred size / elapsed time) ? > # tar xf thunderbird-1.5.0.4-source.tar 2.77s user > 35.36s system 33% cpu 1:54.19 > > 260M/114s =~ 2.28 MB/s on this IDE disk IDE disk? Maybe it's this sparc ide/ata driver issue: Bug ID: 6421427 Synopsis: netr

Re: [zfs-discuss] ZFS questions

2006-06-20 Thread Richard Elling
Erik Trimble wrote: That is, start out with adding the ability to differentiate between access policy in a vdev. Generally, we're talking only about mirror vdevs right now. Later on, we can consider the ability to migrate data based on performance, but a lot of this has to take into considera

[zfs-discuss] ZFS and Flash archives

2006-06-20 Thread Constantin Gonzalez
Hi, I'm currently setting up a demo machine. It would be nice to set up everything the way I like it, including a number of ZFS filesystems, then create a flash archive, then install from that archive. Will there be any issues with webstart flash and ZFS? Does flar create need to be ZFS aware and

Re: [zfs-discuss] Why is "+" not allowed in a ZFS file system name ?

2006-06-20 Thread Darren J Moffat
Holger Berger wrote: On 6/19/06, Eric Schrock <[EMAIL PROTECTED]> wrote: Simply because we erred on the side of caution. The fewer metacharacters, the better. It's easy to change if there's enough interest. You may want to change that since many applications including KDE use '+' to encode pa

Re: [zfs-discuss] Why is "+" not allowed in a ZFS file system name ?

2006-06-20 Thread Casper . Dik
>On 6/20/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote: >> >> >On 6/19/06, Eric Schrock <[EMAIL PROTECTED]> wrote: >> >> Simply because we erred on the side of caution. The fewer metacharacters, >> >> the better. It's easy to change if there's enough interest. >> > >> >You may want to change tha

Re: [zfs-discuss] Why is "+" not allowed in a ZFS file system name ?

2006-06-20 Thread Holger Berger
On 6/20/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote: >On 6/19/06, Eric Schrock <[EMAIL PROTECTED]> wrote: >> Simply because we erred on the side of caution. The fewer metacharacters, >> the better. It's easy to change if there's enough interest. > >You may want to change that since many appl

[zfs-discuss] Re: Why is "+" not allowed in a ZFS file system name ?

2006-06-20 Thread Anton B. Rang
Perhaps this discussion wasn't quite clear. ZFS supports all characters (with the standard UNIX exceptions of null and forward slash) in filenames. It's only the names of ZFS data structures, such as file systems, pools, and snapshots, which are restricted. For instance, @ is used as a separat

[zfs-discuss] query re availability of zfs in S10

2006-06-20 Thread Enda o'Connor - Sun Microsystems Ireland - Software Engineer
Hi, It is my understanding that zfs will be available to pre-S10 Update 2 customers via patches, i.e., customers on FCS could install the necessary zfs patches and thereby start using zfs. But there seems to be confusion as to whether this is supported or not. Some people say only zfs on

Re: [zfs-discuss] Why is "+" not allowed in a ZFS file system name ?

2006-06-20 Thread Casper . Dik
>On 6/19/06, Eric Schrock <[EMAIL PROTECTED]> wrote: >> Simply because we erred on the side of caution. The fewer metacharacters, >> the better. It's easy to change if there's enough interest. > >You may want to change that since many applications including KDE use >'+' to encode paths (replacing

Re: [zfs-discuss] Why is "+" not allowed in a ZFS file system name ?

2006-06-20 Thread Holger Berger
On 6/19/06, Eric Schrock <[EMAIL PROTECTED]> wrote: Simply because we erred on the side of caution. The fewer metacharacters, the better. It's easy to change if there's enough interest. You may want to change that since many applications including KDE use '+' to encode paths (replacing blanks

[zfs-discuss] Re: nevada_41 and zfs disk partition

2006-06-20 Thread Matthew C Aycock
> > What does vmstat look like ?
> Also zpool iostat 1.
>
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank         291M  9.65G      0     11   110K   694K
tank         301M  9.64G

Re: [zfs-discuss] nevada_41 and zfs disk partition

2006-06-20 Thread Roch
What does vmstat look like? Also zpool iostat 1. Do you have any disk-based swap? One best practice we will probably be publishing is to configure at least physmem worth of swap with ZFS (at least as of this release). The partly hung system could be this: http://bugs.opensolaris.org/

[zfs-discuss] Re: Safe to enable write cache?

2006-06-20 Thread Jürgen Keil
> What about ATA disks? > > Currently (at least on x86) the ata driver enables the write cache > unconditionally on each drive and doesn't support the ioctl to flush the > cache > (although the function is already there). DKIOCFLUSHWRITECACHE? It is implemented here, for x86 ata: http://cvs.
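
For reference, the ioctl in question is issued from user land roughly like this (the device path is only an example; a driver that doesn't implement the ioctl will simply fail the call rather than flush anything):

#include <fcntl.h>
#include <stdio.h>
#include <stropts.h>
#include <sys/dkio.h>
#include <unistd.h>

int
main(void)
{
    int fd = open("/dev/rdsk/c0d0s0", O_RDONLY);    /* example device only */

    if (fd < 0) {
        perror("open");
        return (1);
    }
    if (ioctl(fd, DKIOCFLUSHWRITECACHE, NULL) != 0)
        perror("DKIOCFLUSHWRITECACHE");             /* e.g. not supported */
    else
        printf("write cache flushed\n");
    (void) close(fd);
    return (0);
}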

[zfs-discuss] nevada_41 and zfs disk partition

2006-06-20 Thread Matthew C Aycock
I just installed build 41 of Nevada on a SunBlade 1500 with 2GB of RAM. I wanted to check out zfs; given the delay of S10U2, I really could not wait any longer :) I installed it on my system and created a zpool out of an approximately 40GB disk slice. I then wanted to build a version of thunderbi

Re: [zfs-discuss] ufsdump/ufsrestore to migrate ?

2006-06-20 Thread Detlef Drewanz
Thanks to all who responded. ufsdump/ufsrestore seems to be a good combination for migrating filesystems from ufs to zfs. I also tried it myself and it worked fast and easily. - be careful with POSIX ACLs that exist in ufs Detlef Mark Shellenbaum wrote: grant beattie wrote: On Mon, Jun 19,