Re: [zfs-discuss] utf8only-property

2008-02-28 Thread Richard L. Hamilton
 So, I set utf8only=on and try to create a file with a filename that is
 a byte array that can't be decoded to text using UTF-8. What's supposed
 to happen? Should fopen(), or whatever syscall 'touch' uses, fail?
 Should the syscall somehow escape utf8-incompatible bytes, or maybe
 replace them with ?s or somesuch? Or should it automatically convert the
 filename from the active locale's fs-encoding (LC_CTYPE?) to UTF-8?

First, utf8only can AFAIK only be set when a filesystem is created.

Second, use the source, Luke:
http://src.opensolaris.org/source/search?q=&defs=&refs=z_utf8&path=%2Fonnv%2Fonnv-gate%2Fusr%2Fsrc%2Futs%2Fcommon%2Ffs%2Fzfs%2Fzfs_vnops.c&hist=&project=%2Fonnv

Looks to me like lookups, file create, directory create, creating symlinks,
and creating hard links will all fail with error EILSEQ (Illegal byte
sequence) if utf8only is enabled and they are presented with a name that is
not valid UTF-8.  Thus, on a filesystem where it is enabled (since creation),
no such names can be created or would ever be there to be found anyway.

So in that case, the system is refusing byte strings that are not valid
UTF-8, and there's no need to escape anything.
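
As an illustration (my own sketch, assuming a utf8only filesystem mounted at
/tank/strict; this is not code from the ZFS source above), a create with an
invalid UTF-8 name would fail roughly like this:

/* Sketch: creating a file whose name is not valid UTF-8 on a utf8only
 * filesystem.  0xC3 with no continuation byte is an invalid sequence. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    int fd = open("/tank/strict/bad\xC3" "name", O_CREAT | O_WRONLY, 0644);
    if (fd == -1 && errno == EILSEQ)
        printf("rejected as expected: %s\n", strerror(errno));
    return 0;
}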

Further, your last sentence suggests that you might hold the incorrect idea
that the kernel knows or cares what locale an application is running in: it
does not.  Nor does the kernel know about environment variables at all,
except as the third argument passed to execve(2); it doesn't interpret them,
or even validate that they are of the usual name=value form.  They're
typically handled much the same as the command-line args, and the only
illusion of magic is that the more widely used variants of exec that don't
explicitly pass the environment internally call execve(2) with the external
variable environ as the last argument, thus passing the environment
automatically.
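
For instance, the usual library wrapper amounts to something like this (a
sketch of typical libc behavior, not the actual Solaris source):

/* Sketch: execv(3C)-style wrappers pass the caller's environment to
 * execve(2) implicitly via the global 'environ'; the kernel itself never
 * interprets environment variables. */
#include <unistd.h>

extern char **environ;

int my_execv(const char *path, char *const argv[]) {
    return execve(path, argv, environ);
}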

There have been Unix-like OSes that make the environment available to
additional system calls (give or take what counts as a true system call in
the example I'm thinking of: variant links, i.e. symlinks with embedded
environment-variable references, in the now-defunct Apollo Domain/OS), but
AFAIK that's not the case in those that are part of the historical Unix
source lineage.  (I have no idea off the top of my head whether Linux, or
oddballs like OSF/1, might make environment variables implicitly available
to syscalls other than execve(2).)
 
 


Re: [zfs-discuss] Performance with Sun StorageTek 2540

2008-02-28 Thread Paul Van Der Zwan


 On Wed, 27 Feb 2008, Cyril Plisko wrote:

  http://www.simplesystems.org/users/bfriesen/zfs-discuss/2540-zfs-performance.pdf

  Nov 26, 2008 ??? May I borrow your time machine? ;-)

 Are there any stock prices you would like to know about?  Perhaps you
 are interested in the outcome of the elections?

No need for a time machine; the US presidential election outcome is already
known:

http://www.theonion.com/content/video/diebold_accidentally_leaks

 Paul


Re: [zfs-discuss] zpool core dumped

2008-02-28 Thread Piotr Tarnowski
S10U4 SPARC on V890 + patches, StorageTek 6140 + CSM200
Generic_127111-09

The same issue; still not patched?

If I set NOINUSE_CHECK=1, the pool is created successfully.

Regards
-- 
Piotr (DrFugazi) Tarnowski
 
 


Re: [zfs-discuss] Can ZFS be event-driven or not?

2008-02-28 Thread Uwe Dippel
 Consider this to be your life's mission.

Bob, I can do without this.

Richard,

 Actually I use several browsers every day. Each browser has a cache
 located somewhere in my home directory and the cache is managed so that
 it won't grow very large. With CDP, I would fill my disk in a week or
 less, just by caching everything on the internet that I pass by.

If you RTFT, you'd find that nobody was ever interested in temp files.

 In Uwe's use cases thus far, it seems that he is interested in only the
 simple single-user style applications, if I'm not mistaken, so he's not
 considering the consequences of what it *really* means to have CDP in
 the way he wishes.

 Uwe - am I close here?

Nathan, you are not.


Again, there's nothing that I wanted; I was only thinking. And I am a server
person. Now, if I switch from /export/home/userfoo/Documents (for Richard,
who might be happier with UZFS-CDP than with the shots of TimeMachine) to a
file server, do the arguments still hold, that
1. The application (NFS/sftp) does not know about the state of writing?
2. Obviously nobody sees anything in having access to all versions of a file
stored there?

In any case, my presentation at that enterprise-security-related conference
is done, the 'history' of backups presented (not exactly my topic). I
introduced the idea of versioning, and the (possible) advantages of having
all versions, including the (possible) disadvantages (storage space,
mentioned despite my doubts). I also pointed out the currently available
software for near-CDP, and mentioned the discussion we have in here, started
for one and only one reason (see Subject): to confirm whether ZFS can be
instructed to produce a copy of each version of a file, initiated by some
event instead of a scheduler.

Somewhat to my surprise, my presentation was a good success, and the Q&A
focused on the event-driven backups, what the technical problems were, etc.
A good handful of people approached me later, curious about and fascinated
by the idea of replacing the backup scheduler with an event-driven creation
of the versions.

Therefore, to me the case is closed; my presentation is done, and on the
successful side.

Thanks to everyone who cared to answer, help, contribute in one way or another,

Uwe
 
 


Re: [zfs-discuss] ZFS vs. Novell NSS

2008-02-28 Thread Christine Tran
Alan Perry wrote:

   I gave a talk on ZFS at a local user group meeting this evening.
  What I didn't know going in was that the meeting was hosted at a Novell
  consulting shop.  I got asked a lot of "what does ZFS do that NSS
  doesn't do" questions that I could not answer (mostly because I know
  almost nothing about Novell).

   Is there some white paper or something on the topic?

 

I googled for Novell NSS and went straight to the Overview:
http://www.novell.com/documentation/nw65/nss_enu/data/hut0i3h5.html#hut0i3h5

"NSS abstracts up to four physical NetWare partitions to make them
appear as contiguous free space"

ZFS can abstract many more than four of anything to make them appear as
contiguous free space.  ZFS can be used on Solaris for SPARC, Solaris
for x86, soon on the Mac, and anywhere else where people decide to port
ZFS.

"You can choose space from at least four devices of up to 2 TB each to
create a pool with a maximum pool size of 8 TB." [and more stuff
describing limitations of NSS right off the bat]

You can make a ZFS pool out of any number of devices; the max file size
in ZFS is measured in exabytes, and the max pool size is some
ridiculously big number.  Checksummed, open and free, yada yada.  How
about that to start?

CT



Re: [zfs-discuss] Can ZFS be event-driven or not?

2008-02-28 Thread Joe Blount
 A good handful of people approached me later, being
 curious and fascinated by the idea to replace the
 backup scheduler with an event-driven creation of the
 versions.

Uwe,

I'm still struggling to decide if ADM is what you're looking for.  When you 
make comments like the one quoted above, I think ADM is a very practical choice 
for you.  Even if it isn't, the issues discussed here are what lead people to 
an ADM-like solution.

Let me attempt to summarize the dilemmas as I see them, and point out the 
practicality of an ADM-like solution...

* Application-agnostic CDP cannot know when the file state is sane.  For true
CDP this essentially requires preserving the entire write stream, which is an
enormous burden (in both storage capacity and system bandwidth).  Presumably
this burden is unacceptable except in niche cases.
Basically: it works, but it hurts.

* Application-aware/driven CDP solves the file-sanity challenge by being
explicitly told by the app.  But this will have an inherently limited market
because it relies on application support.
Basically: it works, but requires coordination rarely found outside
monopoly-owned stacks.

* Traditional backup leaves exposure windows and doesn't address the
file-sanity issue (unless there is a backup window, or specific assumptions).
Basically: it's easy because it overlooks so much.

Unless you have a large budget, some compromises need to be made.  IMO, ADM is 
a reasonable compromise for many.

With ADM, backing up files is typically initiated at a specified time after 
file modification.  For this discussion, think of it as: “make a new backup 
anytime file data is stable for X amount of time”.  There can be many policies 
for files with different usage patterns in a file system.  These should be 
tailored to business value, anticipated modification frequency, etc.  

Here are a few examples of policies one might set up:
- Never back up files with /firefox/cache/ in the path.
- Back up (to disk) the CEO's Star-Office docs when they're stable for 1
minute.
- Back up (to disk) other users' Star-Office docs when they're stable for 5
minutes.
- Back up (to disk) all other files when stable for 5 hours.
- Make a second backup (to tape) of all files when they're stable for 24
hours.

Note how the file-data stability time can handle the file-consistency issue
without knowing anything about the application.  Pauses in file modification
should generally occur when the data is consistent.  If not, we'll back it up
again anyway after the next round of modifications.
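
To sketch the core of that rule (a hypothetical illustration of mine, not ADM
code): every change event re-arms a per-file timer, and the backup runs only
once the timer expires untouched.

/* Hypothetical sketch of a "back up when stable for X" policy engine.
 * In a real ADM/DMAPI setup these handlers would be driven by change and
 * close events rather than called from a polling loop. */
#include <stdio.h>
#include <time.h>

#define STABLE_SECS (5 * 60)   /* e.g. 5 minutes for ordinary documents */

struct tracked {
    const char *path;
    time_t last_mod;   /* re-armed on every change/close event */
    int backed_up;     /* cleared whenever the file changes again */
};

void on_change(struct tracked *f) {
    f->last_mod = time(NULL);
    f->backed_up = 0;
}

void check_policy(struct tracked *f) {
    if (!f->backed_up && time(NULL) - f->last_mod >= STABLE_SECS) {
        printf("backing up %s\n", f->path);  /* hand off to the archiver */
        f->backed_up = 1;
    }
}

int main(void) {
    struct tracked doc = { "/export/home/ceo/plan.odt", 0, 1 };
    on_change(&doc);    /* a write arrives: timer re-armed */
    check_policy(&doc); /* too soon, so nothing is backed up yet */
    return 0;
}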

The overhead introduced by ADM is less than you might imagine.  ADM/DMAPI
can enable specific event types on a per-filesystem-object basis, so the
versatility of the policies above does not come at the expense of excess
chatter.  ADM's evaluation of a file is triggered by a change or close
event, so we look only when there is reason to believe we have work to do.

ADM has several benefits relevant to this discussion:
- Automated management of the thousands/millions of backups: how many to
keep, whether they should be migrated from disk to tape, etc.
- Automated reclaiming & reuse of media used for backups.
- No burden of maintaining the entire write stream.
- No requirement for application support.
- For most file-access patterns, we should make good guesses on when the
data is consistent.

If you're willing to give up the “last mile” requirement of CDP, ADM is a
fairly cheap way to get a lot of what you want.  Thoughts?

(In ADM we use the term “archive”, but here I'm using “backup” since that's
what you're using.)

-Joe
 
 


Re: [zfs-discuss] Can ZFS be event-driven or not?

2008-02-28 Thread Bob Friesenhahn
On Thu, 28 Feb 2008, Uwe Dippel wrote:

 1. The application (NFS/sftp) does not know about the state of writing?

Sometimes applications know about the state of writing and sometimes 
they do not.  Sometimes they don't even know they are writing.

 2. Obviously nobody sees anything in having access to all versions of a file 
 stored there?

First it is necessary to determine what "version" means when it comes
to a file.

At the application level, the system presents a different view than
what is actually stored on disk, since the system uses several levels
of write caching to improve performance.  The only time these should
necessarily be the same is if the application uses a file descriptor
to access the file (no memory mapping) and invokes fsync().  If memory
mapping is used, the equivalent is msync() with the MS_SYNC option.
Using fsync() or msync(MS_SYNC) blocks the application until the I/O
is done.

If a file is updated via memory mapping, then the data sent to the
underlying file is driven by the system's virtual memory subsystem, so
the data actually sent to disk may not be coherent at all.
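
In code, the two cases look roughly like this (a generic POSIX sketch, not
tied to ZFS):

/* Sketch: making the on-disk bytes match the application's view. */
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    const char msg[] = "state worth capturing";
    int fd = open("/tmp/demo", O_CREAT | O_RDWR | O_TRUNC, 0644);
    if (fd == -1)
        return 1;

    /* File-descriptor path: write(2), then fsync(2) blocks until the
     * data has reached stable storage. */
    write(fd, msg, sizeof msg - 1);
    fsync(fd);

    /* Memory-mapped path: update the mapping, then msync(2) with MS_SYNC,
     * the blocking equivalent of fsync() for the mapped range. */
    char *p = mmap(NULL, sizeof msg - 1, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    if (p != MAP_FAILED) {
        p[0] = 'S';
        msync(p, sizeof msg - 1, MS_SYNC);
        munmap(p, sizeof msg - 1);
    }
    close(fd);
    return 0;
}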

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/



Re: [zfs-discuss] Can ZFS be event-driven or not?

2008-02-28 Thread Nicolas Williams
On Thu, Feb 28, 2008 at 07:55:45AM -0800, Joe Blount wrote:
 * Application-aware/driven CDP solves the file-sanity challenge by
 being explicitly told by the app.  But this will have an inherently
 limited market because it relies on application support.  Basically:
 it works, but requires coordination rarely found outside
 monopoly-owned stacks.

I challenge the assumption that this has an inherently limited market
-- if you get momentum for something like this then who knows, it might
take off.


Re: [zfs-discuss] Can ZFS be event-driven or not?

2008-02-28 Thread Bill Sommerfeld

On Wed, 2008-02-27 at 13:43 -0500, Kyle McDonald wrote:
 How was it MVFS could do this without any changes to the shells or any
 other programs?

 In ClearCase I could 'grep FOO /dir1/dir2/file@@/main/*' to see which
 version of 'file' added FOO.
 (I think @@ was the special hidden key. It might have been something
 else though.)

When I last used clearcase (on the order of 12 years ago) foo@@/ only
worked within clearcase mvfs filesystems.

It behaved as if the filesystem created a foo@@ virtual directory for
each real foo directory entry, but then filtered those names out of
directory listings.

Doing the same as an alternate view on snapshot space would be a
"simple matter of programming" within ZFS, though the magic token/suffix
to get you into version/snapshot space would likely not be POSIX
compliant.

- Bill






Re: [zfs-discuss] path-name encodings

2008-02-28 Thread Marcus Sundman
Bart Smaalders [EMAIL PROTECTED] wrote:
  I'm unable to find more info about this. E.g., what does "reject
  file names" mean in practice? E.g., if a program tries to create a
  file using an utf8-incompatible filename, what happens? Does the
  fopen() fail? Would this normally be a problem? E.g., do tar and
  similar programs convert utf8-incompatible filenames to utf8 upon
  extraction if my locale (or wherever the fs encoding is taken from)
  is set to use utf-8? If they don't, then what happens with archives
  containing utf8-incompatible filenames?

 Note that the normal ZFS behavior is exactly what you'd expect: you
 get the filenames you wanted; the same ones you put in come back.

OK, thanks. I still haven't got any answer to my original question,
though. I.e., is there some way to know what text the filename is, or
do I have to make a more or less wild guess at what encoding the
program that created the file used?

OK, if I use utf8only then I know that all filenames can be interpreted
as UTF-8. However, that's completely unacceptable for me, since I'd
much rather have an important file with an incomprehensible filename
than not have that important file at all. Also, what about non-UTF-8
encodings? E.g., is it possible to know whether 0xe4 is ä (as in
iso-8859-1) or ф (as in iso-8859-5)?
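
To make that ambiguity concrete, here is a small sketch of mine (not from
the thread; iconv(3) prototypes and encoding names vary slightly by
platform) showing the same byte decoding to two different characters:

/* Sketch: the same filename byte 0xE4 decodes to different text depending
 * on which encoding you guess. */
#include <iconv.h>
#include <stdio.h>

static void show(const char *from) {
    char in[2] = { (char)0xE4, 0 };
    char out[8] = { 0 };
    char *ip = in, *op = out;
    size_t il = 1, ol = sizeof out;
    iconv_t cd = iconv_open("UTF-8", from);
    if (cd == (iconv_t)-1) { perror(from); return; }
    if (iconv(cd, &ip, &il, &op, &ol) != (size_t)-1)
        printf("0xE4 in %s -> %s\n", from, out);
    iconv_close(cd);
}

int main(void) {
    show("ISO-8859-1");  /* prints ä */
    show("ISO-8859-5");  /* prints ф */
    return 0;
}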

 The trick is that in order to support such things as
 casesensitivity=false for CIFS, the OS needs to know what characters
 are uppercase vs lowercase, which means it needs to know about
 encodings, and reject codepoints which cannot be classified as
 uppercase vs lowercase.

I don't see why the OS would care about that. Isn't that the job of the
CIFS daemon? As a matter of fact I don't see why the OS would need to
know how to decode any filename-bytes to text. However, I firmly
believe that user applications should have that opportunity. If the
encoding of filenames is not known (explicitly or implicitly) then
applications don't have that opportunity.


- Marcus


Re: [zfs-discuss] ZFS vs. Novell NSS

2008-02-28 Thread Tim
On 2/28/08, Christine Tran [EMAIL PROTECTED] wrote:

 Alan Perry wrote:

   I gave a talk on ZFS at a local user group meeting this evening.
  What I didn't know going in was that the meeting was hosted at a Novell
  consulting shop.  I got asked a lot of "what does ZFS do that NSS
  doesn't do" questions that I could not answer (mostly because I know
  almost nothing about Novell).

   Is there some white paper or something on the topic?

 I googled for Novell NSS and went straight to the Overview:
 http://www.novell.com/documentation/nw65/nss_enu/data/hut0i3h5.html#hut0i3h5

 "NSS abstracts up to four physical NetWare partitions to make them
 appear as contiguous free space"

 ZFS can abstract many more than four of anything to make them appear as
 contiguous free space.  ZFS can be used on Solaris for SPARC, Solaris
 for x86, soon on the Mac, and anywhere else where people decide to port
 ZFS.

 "You can choose space from at least four devices of up to 2 TB each to
 create a pool with a maximum pool size of 8 TB." [and more stuff
 describing limitations of NSS right off the bat]

 You can make a ZFS pool out of any number of devices; the max file size
 in ZFS is measured in exabytes, and the max pool size is some
 ridiculously big number.  Checksummed, open and free, yada yada.  How
 about that to start?

 CT

Don't forget, ZFS is open source, and can be ported to any number of other
platforms as well.  It's also currently supported on FreeBSD 7.0, and is
basically production-ready on that platform.

The open-source part is HUGE in my mind: you aren't tied to Solaris.
Granted, that is where the main development is taking place right now, but
if Sun were to fold up shop, or kill off Solaris *cough*netware*cough*, ZFS
isn't going anywhere, and your data should remain portable.

I'm going to be blunt, and will probably rile up a few trolls if there are
any on this mailing list:
If you're talking to anyone still on NetWare, they're a NetWare zealot, and
nothing you can say is going to change that.
If they haven't found a reason to throw NetWare under the bus yet, they
aren't going to.  No reasonable argument as to why ZFS is superior to NSS
will be heard; there will always be some caveat (ITS NOT SUPPORTED BY
NOVELL!!!11) as to why it's just not a good enough reason to switch.


Re: [zfs-discuss] Can ZFS be event-driven or not?

2008-02-28 Thread Kyle McDonald
Bill Sommerfeld wrote:
 On Wed, 2008-02-27 at 13:43 -0500, Kyle McDonald wrote:
  How was it MVFS could do this without any changes to the shells or any
  other programs?

  In ClearCase I could 'grep FOO /dir1/dir2/file@@/main/*' to see which
  version of 'file' added FOO.
  (I think @@ was the special hidden key. It might have been something
  else though.)

 When I last used clearcase (on the order of 12 years ago) foo@@/ only
 worked within clearcase mvfs filesystems.

 It behaved as if the filesystem created a foo@@ virtual directory for
 each real foo directory entry, but then filtered those names out of
 directory listings.

 Doing the same as an alternate view on snapshot space would be a
 "simple matter of programming" within ZFS, though the magic token/suffix
 to get you into version/snapshot space would likely not be POSIX
 compliant.
Ahh.

I suspected it should be 'possible' to code it into ZFS.

The reason it's been left to runat instead seems to be POSIX compliance 
then?

Maybe an FS-level parameter could turn that processing on or off, and
even allow the admin to redefine the '@@' to anything they wish? (VMS
fans might like to set it to ';' I suppose, but even then it wouldn't be
the same. ;) )

   -Kyle


   - Bill




   



Re: [zfs-discuss] Cause for data corruption?

2008-02-28 Thread MC
 So I scrubbed the whole pool and it found a lot more corrupted files.

My condolences :)  

General questions and comments about ZFS and data corruption:

I thought RAID-Z would correct data errors automatically with the parity
data.  How wrong am I on that?  Perhaps a parity correction was already
tried, and there was too much corruption for it to succeed, implying a very
significant amount of data corruption?

Assuming the errors are being generated by bad hardware somewhere between
the disk and the CPU (inclusive), how could ZFS be configured to handle
these errors automatically?  Set the copies property to 2, I think.
Anything else?
 
 


Re: [zfs-discuss] Can ZFS be event-driven or not?

2008-02-28 Thread Mark Shellenbaum
Kyle McDonald wrote:
 Bill Sommerfeld wrote:
 On Wed, 2008-02-27 at 13:43 -0500, Kyle McDonald wrote:
   
 How was it MVFS could do this without any changes to the shells or any 
 other programs?

 In ClearCase I could 'grep FOO /dir1/dir2/file@@/main/*' to see which 
 version of 'file' added FOO.
 (I think @@ was the special hidden key. It might have been something 
 else though.)
 
 When I last used clearcase (on the order of 12 years ago) foo@@/ only
 worked within clearcase mvfs filesystems.

 It behaved as if the filesystem created a foo@@ virtual directory for
 each real foo directory entry, but then filtered those names out of
 directory listings.

 Doing the same as an alternate view on snapshot space would be a
 simple matter of programming within ZFS, though the magic token/suffix
 to get you into version/snapshot space would likely not be POSIX
 compliant..
   
 Ahh.
 
 I suspected it should be 'possible' to code it into ZFS.
 
 The reason it's been left to runat instead seems to be POSIX compliance 
 then?

Yes, we have runat for POSIX compliance.

An earlier prototype of Solaris extended attributes utilized a /@/ 
syntax to enter xattr space.   For example:

/data/file1/@/
/data/file1/@/attr.1
...
or
/data/dir1/@/

A readdir of /data/dir1 wouldn't show the @ directory, but you could 
always request to enter it.

This violated POSIX in a couple of ways: one, we took away the "@" 
filename, and two, you can't have a directory on a file.

It was a really nice model, and I still kind of wish we could have 
integrated it that way.

   -Mark


Re: [zfs-discuss] Can ZFS be event-driven or not?

2008-02-28 Thread Kyle McDonald
Mark Shellenbaum wrote:
 Kyle McDonald wrote:
 Bill Sommerfeld wrote:
 On Wed, 2008-02-27 at 13:43 -0500, Kyle McDonald wrote:
  
 How was it MVFS could do this without any changes to the shells or 
 any other programs?

 In ClearCase I could 'grep FOO /dir1/dir2/file@@/main/*' to see which 
 version of 'file' added FOO.
 (I think @@ was the special hidden key. It might have been 
 something else though.)
 
 When I last used clearcase (on the order of 12 years ago) foo@@/ only
 worked within clearcase mvfs filesystems.

 It behaved as if the filesystem created a foo@@ virtual directory for
 each real foo directory entry, but then filtered those names out of
 directory listings.

 Doing the same as an alternate view on snapshot space would be a
 simple matter of programming within ZFS, though the magic token/suffix
 to get you into version/snapshot space would likely not be POSIX
 compliant..
   
 Ahh.

 I suspected it should be 'possible' to code it into ZFS.

 The reason it's been left to runat instead seems to be POSIX 
 compliance then?

 Yes, we have runat for POSIX compliance.

 An earlier prototype of Solaris extended attributes utilized a /@/ 
 syntax to enter xattr space.   For example:

 /data/file1/@/
 /data/file1/@/attr.1
 ...
 or
 /data/dir1/@/

 A readdir of /data/dir1 wouldn't show the @ directory, but you could 
 always request to enter it.

 This violated POSIX in a couple of ways: one, we took away the "@" 
 filename, and two, you can't have a directory on a file.

 It was a really nice model, and I still kind of wish we could have 
 integrated it that way.

Why not resurrect the behavior, but default it to off, and leave it to 
the user to enable it with a ZFS filesystem or pool attribute?

  -Kyle


   -Mark



Re: [zfs-discuss] Can ZFS be event-driven or not?

2008-02-28 Thread Richard Elling
Bill Sommerfeld wrote:

 Doing the same as an alternate view on snapshot space would be a
 "simple matter of programming" within ZFS, though the magic token/suffix
 to get you into version/snapshot space would likely not be POSIX
 compliant.

   

We already have a POSIX compliant file system for ZFS, implemented
by the ZFS POSIX Layer (ZPL).  We also have ZVols which don't use
the ZPL.  Perhaps some enterprising soul could add another file system
type to ZFS :-)  Step right up!  Invent something cool!  Be the life of the
party!  Amaze your friends!
 -- richard



Re: [zfs-discuss] Can ZFS be event-driven or not?

2008-02-28 Thread Darren J Moffat
Bob Friesenhahn wrote:
 Is it possible to create a ZFS pool using a backing file created in 
 xattr space?

Why would you want to do that?

I tried but could not get it to work with the CLI.  However, it may be 
possible via the (private) libzfs function call interface.

da64-x4500b-gmp03# cd /tmp
da64-x4500b-gmp03# runat
da64-x4500b-gmp03# touch silly
da64-x4500b-gmp03# runat silly mkfile 64m pool_file_1
da64-x4500b-gmp03# runat silly zpool create silly `pwd`/pool_file_1
cannot open '/tmp/pool_file_1': No such file or directory

Which is correct, because it isn't in /tmp.

-- 
Darren J Moffat


[zfs-discuss] Does a mirror increase read performance

2008-02-28 Thread Jonathan Loran

Quick question:

If I create a ZFS mirrored pool, will read performance get a boost?
In other words, will the data/parity be read round-robin between the
disks, or do both mirrored sets of data and parity get read off of both
disks?  The latter case would have a CPU expense, so I would think you
would see a slowdown.

Thanks,

Jon

  






Re: [zfs-discuss] Does a mirror increase read performance

2008-02-28 Thread ian
Jonathan Loran writes: 

 
 Quick question: 
 
 If I create a ZFS mirrored pool, will the read performance get a boost? 

Yes.  I use a stripe of mirrors to get better read and write performance. 

Ian. 


Re: [zfs-discuss] Does a mirror increase read performance

2008-02-28 Thread Roch Bourbonnais

On 28 Feb 2008, at 20:14, Jonathan Loran wrote:


 Quick question:

 If I create a ZFS mirrored pool, will read performance get a boost?
 In other words, will the data/parity be read round-robin between the
 disks, or do both mirrored sets of data and parity get read off of
 both disks?  The latter case would have a CPU expense, so I would
 think you would see a slowdown.


Two disks mirrored together can read data faster than a single disk,
because to service a given read, only one side of the mirror is read.

RAID-Z parity is only read in the presence of checksum errors.

 Thanks,

 Jon






Re: [zfs-discuss] Does a mirror increase read performance

2008-02-28 Thread Jonathan Loran


Roch Bourbonnais wrote:

 On 28 Feb 2008, at 20:14, Jonathan Loran wrote:


 Quick question:

 If I create a ZFS mirrored pool, will the read performance get a boost?
 In other words, will the data/parity be read round robin between the
 disks, or do both mirrored sets of data and parity get read off of both
 disks?  The latter case would have a CPU expense, so I would think you
 would see a slow down.


 Two disks mirrored together can read data faster than a single disk,
 because to service a given read, only one side of the mirror is read.

 RAID-Z parity is only read in the presence of checksum errors.
That's what I suspected, but I'm glad to get the final word on this.  BTW,
I guess I should have said "checksums" instead of "parity".  My bad.

Thanks,

Jon



Re: [zfs-discuss] ZFS vs. Novell NSS

2008-02-28 Thread Tim
On 2/28/08, Alan Perry [EMAIL PROTECTED] wrote:

 Tim wrote:

  Don't forget, ZFS is open source, and can be ported to any other
  number of platforms as well.  It's also currently supported on FreeBSD
  7.0, and is basically production ready on that platform.
 
  The open source is HUGE in my mind, you aren't tied to Solaris.
  Granted, that is where the main development is taking place right now,
  but if Sun were to fold up shop, or kill off solaris
  *cough*netware*cough*, zfs isn't going anywhere, and your data should
  be portable.
 
  I'm going to be blunt, and probably will rile up a few trolls if there
  are any on this mailing list:
  If you're talking to anyone still on netware, they're a netware
  zealot, and nothing you can say is going to change that.
  If they haven't found a reason to throw netware under the bus yet,
  they aren't going to.  No reasonable argument as to why ZFS is
  superior to NSS will be heard, there will always be some caveat (ITS
  NOT SUPPORTED BY NOVELL!!!11), as to why it's just not a good enough
  reason to switch.


 Out of the Novell-types at the talk, one was a Novell zealot and the
 rest were just folks who make a living supporting Novell customers.
 Also, NSS has apparently been ported to Linux.


 alan



Glad to hear that.  I've NEVER understood people being close-minded about
technology.  My experiences have been less than stellar with NetWare folks.

My belief has always been that the sooner you realize technology is a tool
and use it as such, the sooner you will learn to use it efficiently.
The greatest hammer in the world will be inferior to a drill when driving a
screw :)


Re: [zfs-discuss] Does a mirror increase read performance

2008-02-28 Thread Roch Bourbonnais

On 28 Feb 2008, at 21:00, Jonathan Loran wrote:



 Roch Bourbonnais wrote:

 On 28 Feb 2008, at 20:14, Jonathan Loran wrote:


 Quick question:

 If I create a ZFS mirrored pool, will read performance get a boost?
 In other words, will the data/parity be read round-robin between the
 disks, or do both mirrored sets of data and parity get read off of
 both disks?  The latter case would have a CPU expense, so I would
 think you would see a slowdown.


 Two disks mirrored together can read data faster than a single disk,
 because to service a given read, only one side of the mirror is read.

 RAID-Z parity is only read in the presence of checksum errors.
 That's what I suspected, but I'm glad to get the final word on this.
 BTW, I guess I should have said "checksums" instead of "parity".  My bad.


OK. The checksum is a different story: it is stored within the metadata
block pointing to the data block.  So, given that to reach the data block
we've already had to read the metadata block, checksum validation is never
the source of an I/O.
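
As a rough illustration (a simplified sketch of the idea, not the real ZFS
blkptr_t layout, and with a fletcher4-style sum standing in for the
configurable checksum):

/* Simplified sketch of "the checksum lives in the parent block pointer". */
#include <stdint.h>
#include <string.h>

typedef struct cksum { uint64_t word[4]; } cksum_t;

typedef struct blkptr_sketch {
    uint64_t dva;      /* hypothetical: where the child block lives */
    cksum_t  checksum; /* checksum of the child block's contents */
} blkptr_sketch_t;

static cksum_t fletcher4(const void *buf, size_t size) {
    const uint32_t *ip = buf;
    uint64_t a = 0, b = 0, c = 0, d = 0;
    for (size_t i = 0; i < size / sizeof(uint32_t); i++) {
        a += ip[i]; b += a; c += b; d += c;
    }
    return (cksum_t){ { a, b, c, d } };
}

/* On the read path the expected checksum is already in hand (it arrived
 * with the parent's metadata), so verification costs no extra I/O. */
int block_is_valid(const blkptr_sketch_t *bp, const void *data, size_t size) {
    cksum_t actual = fletcher4(data, size);
    return memcmp(&actual, &bp->checksum, sizeof actual) == 0;
}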

 Thanks,

 Jon



Re: [zfs-discuss] Does a mirror increase read performance

2008-02-28 Thread Jonathan Loran


Roch Bourbonnais wrote:

 On 28 Feb 2008, at 21:00, Jonathan Loran wrote:



 Roch Bourbonnais wrote:

 On 28 Feb 2008, at 20:14, Jonathan Loran wrote:


 Quick question:

 If I create a ZFS mirrored pool, will the read performance get a 
 boost?
 In other words, will the data/parity be read round robin between the
 disks, or do both mirrored sets of data and parity get read off of 
 both
 disks?  The latter case would have a CPU expense, so I would think you
 would see a slow down.


 Two disks mirrored together can read data faster than a single disk,
 because to service a given read, only one side of the mirror is read.

 RAID-Z parity is only read in the presence of checksum errors.
 That's what I suspected, but I'm glad to get the final word on this.  
 BTW, I guess I should have said checksums instead of parity.  My bad.


 OK. The checksum is a different story: it is stored within the metadata
 block pointing to the data block.  So, given that to reach the data
 block we've already had to read the metadata block, checksum validation
 is never the source of an I/O.
I really need to read those ZFS internals docs (in all my spare time ;) 
Thanks,

Jon


Re: [zfs-discuss] ZFS vs. Novell NSS

2008-02-28 Thread Richard Elling
Tim wrote:

 The greatest hammer in the world will be inferior to a drill when 
 driving a screw :)


The greatest hammer in the world is a rotary hammer, and it
works quite well for driving screws or digging through degenerate
granite ;-)  Need a better analogy.
Here's what I use (quite often) on the ranch:
http://www.hitachi-koki.com/powertools/products/hammer/dh40mr/dh40mr.html
 -- richard



Re: [zfs-discuss] ZFS vs. Novell NSS

2008-02-28 Thread Boyd Adamson
Richard Elling [EMAIL PROTECTED] writes:
 Tim wrote:

 The greatest hammer in the world will be inferior to a drill when 
 driving a screw :)


 The greatest hammer in the world is a rotary hammer, and it
 works quite well for driving screws or digging through degenerate
 granite ;-)  Need a better analogy.
 Here's what I use (quite often) on the ranch:
 http://www.hitachi-koki.com/powertools/products/hammer/dh40mr/dh40mr.html

Hasn't the greatest hammer in the world lost the ability to drive
nails? 

I'll have to start belting them in with the handle of a screwdriver...


Re: [zfs-discuss] ZFS vs. Novell NSS

2008-02-28 Thread Nathan Kroenert - Server ESG
Hm -

Based on this detail from the page:

    "Change lever for switching between Rotation + Hammering, Neutral,
    and Hammering only"

I'd hope it could still hammer... Though I'd suspect the size of nails 
it would hammer would be somewhat limited... ;)

Nathan.

Boyd Adamson wrote:
 Richard Elling [EMAIL PROTECTED] writes:
 Tim wrote:
 The greatest hammer in the world will be inferior to a drill when 
 driving a screw :)

 The greatest hammer in the world is a rotary hammer, and it
 works quite well for driving screws or digging through degenerate
 granite ;-)  Need a better analogy.
 Here's what I use (quite often) on the ranch:
 http://www.hitachi-koki.com/powertools/products/hammer/dh40mr/dh40mr.html
 
 Hasn't the greatest hammer in the world lost the ability to drive
 nails? 
 
 I'll have to start belting them in with the handle of a screwdriver...


Re: [zfs-discuss] Permanently removing vdevs from a pool

2008-02-28 Thread Jack Patteeuw
I found this feature to be incredibly useful when managing a Digital Unix
system with AdvFS.  Migrating to a larger disk (or larger hardware RAID set)
was a simple add, remove, and wait for the filesystem to clean up.  This was
done with multiple users online.  Good stuff!

Keep up the good work !
 
 


Re: [zfs-discuss] ZFS vs. Novell NSS

2008-02-28 Thread Richard Elling
Boyd Adamson wrote:
 Richard Elling [EMAIL PROTECTED] writes:
   
 Tim wrote:
 
 The greatest hammer in the world will be inferior to a drill when 
 driving a screw :)

   
 The greatest hammer in the world is a rotary hammer, and it
 works quite well for driving screws or digging through degenerate
 granite ;-)  Need a better analogy.
 Here's what I use (quite often) on the ranch:
 http://www.hitachi-koki.com/powertools/products/hammer/dh40mr/dh40mr.html
 

 Hasn't the greatest hammer in the world lost the ability to drive
 nails? 
   

I use guns to drive nails :-)
Maybe you guys have been so busy playing with computers that you've
missed a complete revolution in the productivity tools for construction?
If you want, I'm giving free fence building lessons next week, you can
catch up on all of the latest technology :-)
 -- richard



Re: [zfs-discuss] nfs over zfs

2008-02-28 Thread Marion Hakanson
[EMAIL PROTECTED] said:
 i am a little new to zfs so please excuse my ignorance.  i have a poweredge
 2950 running Nevada B82 with an Apple Xraid attached over a fiber hba.  they
 are formatted to JBOD with the pool configured as follows:
 . . .
 i have a filesystem (tpool4/seplog) shared over nfs.  creating files locally
 seems to be fine, but writing files over nfs seems to be extremely slow; on
 one of the clients (os x) it reports over 3 hours to copy a 500MB file.
 also during the copy, when i issue a zpool iostat -v 5, the response time
 increases for the command.  i have also noticed that none of the led's on
 the drives flicker.

If you haven't already, tell the Xraid to ignore cache-flush requests
from the host:
  http://www.opensolaris.org/jive/thread.jspa?threadID=11641

Regards,

Marion




Re: [zfs-discuss] path-name encodings

2008-02-28 Thread Anton B. Rang
 OK, thanks. I still haven't got any answer to my original question,
 though. I.e., is there some way to know what text the filename is, or
 do I have to make a more or less wild guess at what encoding the
 program that created the file used?

You have to guess.  As far as I know, Apple's HFS (and HFS+) is the only file 
system which stores the encoding along with the filename.  NFS doesn't provide 
a mechanism to send the encoding with the filename; I don't believe that CIFS 
does, either.

If you're writing the application, you could store the encoding as an extended 
attribute of the file. This would be useful, for instance, for an AFP server.
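
For example (a sketch using the Solaris openat(2) O_XATTR interface; the
attribute name "filename-encoding" is made up for illustration):

/* Sketch: record which encoding a file's name was written in, as a
 * Solaris extended attribute. */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int tag_encoding(const char *path, const char *encoding) {
    int fd = open(path, O_RDONLY);
    if (fd == -1)
        return -1;

    /* O_XATTR opens the name in the file's extended-attribute space. */
    int afd = openat(fd, "filename-encoding",
                     O_XATTR | O_CREAT | O_TRUNC | O_WRONLY, 0644);
    if (afd == -1) {
        close(fd);
        return -1;
    }
    ssize_t n = write(afd, encoding, strlen(encoding));
    close(afd);
    close(fd);
    return (n == -1) ? -1 : 0;
}

int main(void) {
    /* e.g. an AFP server tagging a file with a legacy name */
    return tag_encoding("/export/docs/report.txt", "ISO-8859-1") == 0 ? 0 : 1;
}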

  The trick is that in order to support such things as
  casesensitivity=false for CIFS, the OS needs to know what characters
  are uppercase vs lowercase, which means it needs to know about
  encodings, and reject codepoints which cannot be classified as
  uppercase vs lowercase.
 
 I don't see why the OS would care about that. Isn't that the job of the
 CIFS daemon?

The CIFS daemon can do it, but it would require that the daemon cache the
whole directory in memory (at least, to get reasonable efficiency).  This
doesn't work so well for large directories.  If you leave it up to the CIFS
daemon, you also wind up with problems if you have a single sharepoint
shared between local users, NFS & CIFS -- the NFS client can create two
files named "a" and "A", but the CIFS client can only see one of those.

 As a matter of fact I don't see why the OS would need to
 know how to decode any filename-bytes to text.
 However, I firmly believe that user applications should have that
 opportunity. If the encoding of filenames is not known (explicitly or
 implicitly) then applications don't have that opportunity.

Yes -- that's why Apple includes an encoding byte in both HFS and HFS+.  (In 
HFS+, filenames are normalized to 16-bit Unicode, but the encoding is still 
useful in choosing how to recompose the characters, and in providing hints for 
applications which prefer the names in some 8-bit encoding.)

-- Anton
 
 


Re: [zfs-discuss] Cause for data corruption?

2008-02-28 Thread Sandro
Thanks for your reassuring post, loomy :)

I'm pretty sure the reason for all this is some bad hardware, but I can't
get VTS to work; it looks like it's not supported for this kind of hardware.

And in order to run some other stress-test software I would have to connect
a monitor, keyboard, and DVD-ROM drive... which I'm just so sick of doing :)

Hopefully I can motivate myself on the weekend... I'll keep you all updated
here when I find something.
 
 