[zfs-discuss] File created with CIFS is not immediately deletable on local file system

2010-02-21 Thread Peter Radig
Box running osol_133 with smb/server enabled. I create a file from a Windows box 
that has a remote ZFS fs mounted. I then go to the Solaris box and try to remove the 
file, and I get "permission denied" for up to 30 seconds. Then it works. A sync 
immediately before the rm seems to speed things up, and the rm succeeds 
immediately afterwards, too. The permissions of the file don't change in 
between and look as follows:
----------+  1 peter    sysprog        0 Feb 21 11:37 NEWFILE3.txt
     user:peter:rwxpdDaARWcCos:-------:allow
     group:2147483648:rwxpdDaARWcCos:-------:allow

Is this a bug or a feature?
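
For reference, a minimal sketch of the observation above as run on the Solaris
side (file name taken from the listing; the path is assumed to be the current
directory):

  # without the sync, the rm reportedly fails with "permission denied" for up to ~30 s
  sync && rm NEWFILE3.txt

  # the full ACL the SMB server attached to the file can be inspected with
  ls -V NEWFILE3.txt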
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] File created with CIFS is not immediately deletable on local file system

2010-02-21 Thread Thomas Burgess
I may be wrong, but I think it would depend on how you have your ACLs set up
and whether or not ACL inheritance is on.
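
A quick way to check that on the server (a sketch; the dataset name tank/fs is
illustrative):

  # how ACLs are inherited and how chmod interacts with them
  zfs get aclinherit,aclmode tank/fs

  # the ACL actually attached to the file in question
  ls -V /tank/fs/NEWFILE3.txt
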
On Sun, Feb 21, 2010 at 5:46 AM, Peter Radig pe...@radig.de wrote:

 Box running osol_133 with smb/server enabled. I create a file from a Windows
 box that has a remote ZFS fs mounted. I then go to the Solaris box and try to
 remove the file, and I get "permission denied" for up to 30 seconds. Then it works.
 A sync immediately before the rm seems to speed things up, and the rm succeeds
 immediately afterwards, too. The permissions of the file don't change in
 between and look as follows:
 ----------+  1 peter    sysprog        0 Feb 21 11:37 NEWFILE3.txt
      user:peter:rwxpdDaARWcCos:-------:allow
      group:2147483648:rwxpdDaARWcCos:-------:allow

 Is this a bug or a feature?
 --
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Observations about compressability of metadata L2ARC

2010-02-21 Thread Tomas Ögren
Hello.

I got an idea: how about creating a ramdisk, making a pool out of it,
then making compressed zvols and adding those as L2ARC? Instant compressed
ARC ;)
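
Roughly how such a setup might be put together (a sketch only; the ramdisk
size, zvol size and names are illustrative, and the cache is of course
volatile):

  # create a ramdisk and build a throwaway pool on it
  ramdiskadm -a rdcache 512m
  zpool create ramcache /dev/ramdisk/rdcache

  # carve out a compressed zvol and hand it to the main pool as L2ARC
  zfs create -V 92m -o compression=lzjb ramcache/ramvol2
  zpool add ftp cache /dev/zvol/dsk/ramcache/ramvol2

  # only cache metadata, as in the test below
  zfs set secondarycache=metadata ftp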

So I did some tests with secondarycache=metadata...

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
ftp         5.07T  1.78T    198     17  11.3M  1.51M
  raidz2    1.72T   571G     58      5  3.78M   514K
...
  raidz2    1.64T   656G     75      6  3.78M   524K
...
  raidz2    1.70T   592G     64      5  3.74M   512K
...
cache           -      -      -      -      -      -
  /dev/zvol/dsk/ramcache/ramvol   84.4M  7.62M      4     17  45.4K   233K
  /dev/zvol/dsk/ramcache/ramvol2  84.3M  7.71M      4     17  41.5K   233K
  /dev/zvol/dsk/ramcache/ramvol3    84M     8M      4     18  42.0K   236K
  /dev/zvol/dsk/ramcache/ramvol4  84.8M  7.25M      3     17  39.1K   225K
  /dev/zvol/dsk/ramcache/ramvol5  84.9M  7.08M      3     14  38.0K   193K

NAME              RATIO  COMPRESS
ramcache/ramvol   1.00x       off
ramcache/ramvol2  4.27x      lzjb
ramcache/ramvol3  6.12x    gzip-1
ramcache/ramvol4  6.77x      gzip
ramcache/ramvol5  6.82x    gzip-9

This was after 'find /ftp' had been running for about 1h, along with all
the background noise of its regular nfs serving tasks.

I took an image of the uncompressed one (ramvol) and ran that through
regular gzip and got 12-14x compression, probably because of the smaller block
size (default 8k) of the zvols. So I tried with both 8k and 64k volblocksize.

After running for a shorter time (but with the cache devices at least filled), I got:

NAME              RATIO  COMPRESS  VOLBLOCK
ramcache/ramvol   1.00x       off        8K
ramcache/ramvol2  5.57x      lzjb        8K
ramcache/ramvol3  7.56x      lzjb       64K
ramcache/ramvol4  7.35x    gzip-1        8K
ramcache/ramvol5  11.68x    gzip-1      64K


I'm not sure how to measure the CPU usage of the various compression levels
for (de)compressing this data, but it does show that keeping metadata
compressed in RAM could be a big win if you have CPU cycles to
spare.

Thoughts?


/Tomas
-- 
Tomas Ögren, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se - 070-5858487
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] l2arc current usage (population size)

2010-02-21 Thread Felix Buenemann

Am 20.02.10 03:22, schrieb Tomas Ögren:

On 19 February, 2010 - Christo Kutrovsky sent me these 0,5K bytes:

How do you tell how much of your l2arc is populated? I've been looking for a 
while now, can't seem to find it.

Must be easy, as this blog entry shows it over time:

http://blogs.sun.com/brendan/entry/l2arc_screenshots

And follow up, can you tell how much of each data set is in the arc or l2arc?


kstat -m zfs
(p, c, l2arc_size)

arc_stat.pl is good, but doesn't show l2arc..


zpool iostat -v poolname would also do the trick for l2arc.
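
For example (a sketch; the arcstats statistic names are what I'd expect, but
may vary by build):

  # current ARC size and L2ARC size, in bytes
  kstat -p zfs:0:arcstats:size
  kstat -p zfs:0:arcstats:l2_size

  # per-device view, including cache devices, refreshed every 5 seconds
  zpool iostat -v poolname 5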



/Tomas


- Felix


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] l2arc current usage (population size)

2010-02-21 Thread Tomas Ögren
On 21 February, 2010 - Felix Buenemann sent me these 0,7K bytes:

 Am 20.02.10 03:22, schrieb Tomas Ögren:
 On 19 February, 2010 - Christo Kutrovsky sent me these 0,5K bytes:
 How do you tell how much of your l2arc is populated? I've been looking for 
 a while now, can't seem to find it.

 Must be easy, as this blog entry shows it over time:

 http://blogs.sun.com/brendan/entry/l2arc_screenshots

 And follow up, can you tell how much of each data set is in the arc or 
 l2arc?

 kstat -m zfs
 (p, c, l2arc_size)

 arc_stat.pl is good, but doesn't show l2arc..

 zpool iostat -v poolname would also do the trick for l2arc.

No, it will show how much of the disk has been touched (dirtied blocks),
but not how much the L2ARC occupies right now. The difference is at least
very obvious if you add a zvol as cache.

If it supported TRIM or similar, the two numbers would probably be about the
same, though.

/Tomas
-- 
Tomas Ögren, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] l2arc current usage (population size)

2010-02-21 Thread Richard Elling
On Feb 21, 2010, at 9:18 AM, Tomas Ögren wrote:

 On 21 February, 2010 - Felix Buenemann sent me these 0,7K bytes:
 
 Am 20.02.10 03:22, schrieb Tomas Ögren:
 On 19 February, 2010 - Christo Kutrovsky sent me these 0,5K bytes:
 How do you tell how much of your l2arc is populated? I've been looking for 
 a while now, can't seem to find it.
 
 Must be easy, as this blog entry shows it over time:
 
 http://blogs.sun.com/brendan/entry/l2arc_screenshots
 
 And follow up, can you tell how much of each data set is in the arc or 
 l2arc?
 
 kstat -m zfs
 (p, c, l2arc_size)
 
 arc_stat.pl is good, but doesn't show l2arc..
 
 zpool iostat -v poolname would also do the trick for l2arc.
 
 No, it will show how much of the disk has been visited (dirty blocks)
 but not how much it occupies right now. At least very obvious difference
 if you add a zvol as cache..
 
 If it had supported TRIM or similar, they would probably be about the
 same though.

Don't confuse the ZIL with L2ARC.  TRIM will do little for L2ARC devices.
 -- richard

ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
http://nexenta-atlanta.eventbrite.com (March 15-17, 2010)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Observations about compressability of metadata L2ARC

2010-02-21 Thread Andrey Kuzmin
I don't see why this couldn't be extended beyond metadata (+1 for the
idea): if a zvol is compressed, the ARC/L2ARC could store compressed data.
The gain is apparent: if a user has compression enabled for the volume,
he/she expects the volume's data to be compressible at a good ratio,
yielding a significant reduction in ARC memory footprint and a boost in
usable L2ARC capacity.

Regards,
Andrey

On Sun, Feb 21, 2010 at 7:24 PM, Tomas Ögren st...@acc.umu.se wrote:
 Hello.

 I got an idea.. How about creating an ramdisk, making a pool out of it,
 then making compressed zvols and add those as l2arc.. Instant compressed
 arc ;)

 So I did some tests with secondarycache=metadata...

               capacity     operations    bandwidth
 pool         used  avail   read  write   read  write
 --  -  -  -  -  -  -
 ftp         5.07T  1.78T    198     17  11.3M  1.51M
  raidz2    1.72T   571G     58      5  3.78M   514K
 ...
  raidz2    1.64T   656G     75      6  3.78M   524K
 ...
  raidz2    1.70T   592G     64      5  3.74M   512K
 ...
 cache           -      -      -      -      -      -
  /dev/zvol/dsk/ramcache/ramvol  84.4M  7.62M      4     17  45.4K 233K
  /dev/zvol/dsk/ramcache/ramvol2  84.3M  7.71M      4     17  41.5K 233K
  /dev/zvol/dsk/ramcache/ramvol3    84M     8M      4     18  42.0K 236K
  /dev/zvol/dsk/ramcache/ramvol4  84.8M  7.25M      3     17  39.1K 225K
  /dev/zvol/dsk/ramcache/ramvol5  84.9M  7.08M      3     14  38.0K 193K

 NAME              RATIO  COMPRESS
 ramcache/ramvol   1.00x       off
 ramcache/ramvol2  4.27x      lzjb
 ramcache/ramvol3  6.12x    gzip-1
 ramcache/ramvol4  6.77x      gzip
 ramcache/ramvol5  6.82x    gzip-9

 This was after 'find /ftp' had been running for about 1h, along with all
 the background noise of its regular nfs serving tasks.

 I took an image of the uncompressed one (ramvol) and ran that through
 regular gzip and got 12-14x compression, probably due to smaller block
 size (default 8k) in the zvols.. So I tried with both 8k and 64k..

 After not running that long (but at least filled), I got:

 NAME              RATIO  COMPRESS  VOLBLOCK
 ramcache/ramvol   1.00x       off        8K
 ramcache/ramvol2  5.57x      lzjb        8K
 ramcache/ramvol3  7.56x      lzjb       64K
 ramcache/ramvol4  7.35x    gzip-1        8K
 ramcache/ramvol5  11.68x    gzip-1       64K


 Not sure how to measure the cpu usage of the various compression levels
 for (de)compressing this data..  It does show that having metadata in
 ram compressed could be a big win though, if you have cpu cycles to
 spare..

 Thoughts?


 /Tomas
 --
 Tomas Ögren, st...@acc.umu.se, http://www.acc.umu.se/~stric/
 |- Student at Computing Science, University of Umeå
 `- Sysadmin at {cs,acc}.umu.se - 070-5858487
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Help with corrupted pool

2010-02-21 Thread Ethan
On Thu, Feb 18, 2010 at 16:03, Ethan notet...@gmail.com wrote:

 On Thu, Feb 18, 2010 at 15:31, Daniel Carosone d...@geek.com.au wrote:

 On Thu, Feb 18, 2010 at 12:42:58PM -0500, Ethan wrote:
  On Thu, Feb 18, 2010 at 04:14, Daniel Carosone d...@geek.com.au wrote:
  Although I do notice that right now, it imports just fine using the p0
  devices using just `zpool import q`, no longer having to use import -d
 with
  the directory of symlinks to p0 devices. I guess this has to do with
 having
  repaired the labels and such? Or whatever it's repaired having
 successfully
  imported and scrubbed.

 It's the zpool.cache file at work, storing extra copies of labels with
 corrected device paths.  For curiosity's sake, what happens when you
 remove (rename) your dir with the symlinks?


 I'll let you know when the current scrub finishes.



  After the scrub finished, this is the state of my pool:
  /export/home/ethan/qdsk/c9t1d0p0  DEGRADED     4     0    60  too many errors

 Ick.  Note that there are device errors as well as content (checksum)
 errors, which means it can't only be correctly-copied damage from
 your original pool that was having problems.

 zpool clear and rescrub, for starters, and see if they continue.


 Doing that now.



 I suggest also:
  - carefully checking and reseating cables, etc
  - taking backups now of anything you really wanted out of the pool,
   while it's still available.
  - choosing that disk as the first to replace, and scrubbing again
   after replacing onto it, perhaps twice.
  - doing a dd to overwrite that entire disk with random data and let
   it remap bad sectors, before the replace (not just zeros, and not
   just the sectors a zfs resilver would hit. openssl enc of /dev/zero
   with a lightweight cipher and whatever key; for extra caution read
   back and compare with a second openssl stream using the same key)
  - being generally very watchful and suspicious of that disk in
   particular, look at error logs for clues, etc.


 Very thorough. I have no idea how to do that with openssl, but I will look
 into learning this.


  - being very happy that zfs deals so well with all this abuse, and
   you know your data is ok.


 Yes indeed - very happy.



  I have no idea what happened to the one disk, but "No known data errors"
  is what makes me happy. I'm not sure if I should be concerned about the
  physical disk itself

 given that it's reported disk errors as well as damaged content, yes.


 Okay. Well, it's a brand-new disk and I can exchange it easily enough.



  or just assume that some data got screwed up with all this mess. I guess
  maybe I'll see how the disk behaves during the replace operations (restoring
  to it and then restoring from it four times seems like a pretty good test of
  it), and if it continues to error, replace the physical drive and if
  necessary restore from the original truecrypt volumes.

 Good plan; note the extra scrubs at key points in the process above.


 Will do. Thanks for the tip.



  So, current plan:
  - export the pool.

 shouldn't be needed; zpool offline dev would be enough

  - format c9t1d0 to have one slice being the entire disk.

 Might not have been needed, but given Victor's comments about reserved
 space, you may need to do this manually, yes.  Be sure to use EFI
 labels.  Pick the suspect disk first.

  - import. should be degraded, missing c9t1d0p0.

 no need if you didn't export

  - replace missing c9t1d0p0 with c9t1d0

 yup, or if you've manually partitioned you may need to mention the
 slice number to prevent it repartitioning with the default reserved
 space again. You may even need to use some other slice (s5 or
 whatever), but I don't think so.

  - wait for resilver.
  - repeat with the other four disks.

  - tell us how it went
  - drink beer.

 --
 Dan.


 Okay. Plan is updated to reflect your suggestions. Beer was already in the
 plan, but I forgot to list it. Speaking of which, I see your e-mail address
 is .au, but if you're ever in new york city I'd love to buy you a beer as
 thanks for all your excellent help with this. And anybody else in this
 thread - you guys are awesome.

 -Ethan


Update: I'm stuck. Again.

To answer "For curiosity's sake, what happens when you remove (rename) your
dir with the symlinks?": it finds the devices on p0 with no problem, with
the symlinks directory deleted.

After clearing the errors and scrubbing again, no errors were encountered in
the second scrub. Then I offlined the disk which had errors in the first
scrub.
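
For reference, that sequence as a sketch (the pool name q and the device path
are the ones used earlier in the thread):

  zpool clear q
  zpool scrub q
  zpool status -v q          # wait for the scrub, check the error counters
  zpool offline q /export/home/ethan/qdsk/c9t1d0p0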

I followed the suggestion to thoroughly test the disk (and remap any bad
sectors), filling it with random-looking data by encrypting /dev/zero.
Reading back and decrypting the drive, it all read back as zeros - all
good.
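
In case anyone else wants to do the same, this is roughly what that
fill-and-verify looked like (a sketch; the cipher, key and block size are
arbitrary, and the device path is the whole-disk p0 device from this thread):

  # fill the whole disk with a keyed pseudorandom stream
  openssl enc -rc4 -nosalt -pass pass:mysecret < /dev/zero | \
      dd of=/dev/rdsk/c9t1d0p0 bs=1024k

  # read the disk back, decrypt with the same key, and check it is all zeros;
  # cmp reporting only EOF (no "differ") means every byte matched
  openssl enc -d -rc4 -nosalt -pass pass:mysecret < /dev/rdsk/c9t1d0p0 | \
      cmp - /dev/zero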

I then checked the SMART status of the drive, which had 0 error rates for
everything. I ran the several-hour extended self-test, whatever that does,
after which I had two write errors on one drive which weren't there 

Re: [zfs-discuss] l2arc current usage (population size)

2010-02-21 Thread Tomas Ögren
On 21 February, 2010 - Richard Elling sent me these 1,3K bytes:

 On Feb 21, 2010, at 9:18 AM, Tomas Ögren wrote:
 
  On 21 February, 2010 - Felix Buenemann sent me these 0,7K bytes:
  
  Am 20.02.10 03:22, schrieb Tomas Ögren:
  On 19 February, 2010 - Christo Kutrovsky sent me these 0,5K bytes:
  How do you tell how much of your l2arc is populated? I've been looking 
  for a while now, can't seem to find it.
  
  Must be easy, as this blog entry shows it over time:
  
  http://blogs.sun.com/brendan/entry/l2arc_screenshots
  
  And follow up, can you tell how much of each data set is in the arc or 
  l2arc?
  
  kstat -m zfs
  (p, c, l2arc_size)
  
  arc_stat.pl is good, but doesn't show l2arc..
  
  zpool iostat -v poolname would also do the trick for l2arc.
  
  No, it will show how much of the disk has been visited (dirty blocks)
  but not how much it occupies right now. At least very obvious difference
  if you add a zvol as cache..
  
  If it had supported TRIM or similar, they would probably be about the
  same though.
 
 Don't confuse the ZIL with L2ARC.  TRIM will do little for L2ARC devices.

I was mostly thinking about telling the backing device that block X
isn't in use anymore, not about the performance part. If I have an L2ARC
backed by a zvol without compression, the used size will grow until
it's full, even if the L2ARC doesn't use all of it currently.

/Tomas
-- 
Tomas Ögren, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] More performance questions [on zfs over nfs]

2010-02-21 Thread Harry Putnam

Working from a remote Linux machine on a ZFS fs that is an NFS-mounted
share (shared over NFS on the ZFS server, mounted via NFS on Linux),
I've been noticing a certain kind of sloth when messing with files.

What I see:  After writing a file it seems to take the fs too long to
be able to display the size correctly (with du).

I'm wondering if I should be using some special flags either on the
osol end or the linux end.

Here is an example:

 rm -f file1; sleep 2; awk 'NR < 115000 {print}' debug.log.6 > file1; du -sh file1; \
 sleep 3; du -sh file1; sleep 3; du -sh file1; ls -l file1; sleep 10; du -sh file1

   512     file1      <- tested right away with du -sh
   512     file1      <- checked after 3 seconds
   512     file1      <- checked after 6 seconds
   -rw-r--r-- 1 nobody nobody 10831062 Feb 21 12:33 file1
                      <- a long `ls -l' just to show the size du is failing to see
   512     file1      <- final check after 16 seconds

Some time later the correct size becomes available... I'm not sure how much
later.

----   ---=---   -   

If that line above gets too wrapped to understand easily it is:

## clear any file
  rm -f file

## brief sleep
  sleep 2

## write a lot of lines to a file from some log file
 awk 'NR < 115000 {print}' debug.log.6 > file
 
## check the size right away
 du -sh file

## sleep 3 seconds
 sleep 3

## check size again
 du -sh file

## sleep 3 more seconds
 sleep 3

## check size again
 du -sh file

## sleep a final 10 seconds
 sleep 10

## Check on last time
 du -sh file

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Listing snapshots in a pool

2010-02-21 Thread David Dyer-Bennet

I thought this was simple.  Turns out not to be.

bash-3.2$ zfs list -t snapshot zp1
cannot open 'zp1': operation not applicable to datasets of this type

Fails equally on all the variants of pool name that I've tried, 
including zp1/ and zp1/@ and such.


You can do zfs list -t snapshot and get a list of all snapshots in all 
pools.  You can do zfs list -r -t snapshot zp1 and get a recursive 
list of snapshots in zp1.  But you can't, with any options I've tried, 
get a list of top-level snapshots in a given pool.  (It's easy, of 
course, with grep, to get the bigger list and then filter out the subset 
you want).


Am I missing something?  Has this been added after snv_111b?

--
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Listing snapshots in a pool

2010-02-21 Thread Dave

Try:

zfs list -r -t snapshot zp1

--
Dave

On 2/21/10 5:23 PM, David Dyer-Bennet wrote:

I thought this was simple.  Turns out not to be.

bash-3.2$ zfs list -t snapshot zp1
cannot open 'zp1': operation not applicable to datasets of this type

Fails equally on all the variants of pool name that I've tried,
including zp1/ and zp1/@ and such.

You can do zfs list -t snapshot and get a list of all snapshots in all
pools. You can do zfs list -r -t snapshot zp1 and get a recursive list
of snapshots in zp1. But you can't, with any options I've tried, get a
list of top-level snapshots in a given pool. (It's easy, of course, with
grep, to get the bigger list and then filter out the subset you want).

Am I missing something? Has this been added after snv_111b?


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] More performance questions [on zfs over nfs]

2010-02-21 Thread Henrik Johansson

On Feb 21, 2010, at 7:47 PM, Harry Putnam wrote:

 
 Working from a remote linux machine on a zfs fs that is an nfs mounted
 share (set for nfs availability on zfs server, mounted nfs on linux);
 I've been noticing a certain kind of sloth when messing with files.
 
 What I see:  After writing a file it seems to take the fs too long to
 be able to display the size correctly (with du).

You will not see the on-disk size of the file with du before the transaction 
group has been committed, which can take up to 30 seconds. ZFS does not even 
know how much space the file will consume before writing the data out to disk, 
since compression might be enabled. You can test this by executing sync(1M) on 
your file server; when it returns, you should see the final size of the file.
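
A quick way to see this on the ZFS/NFS server itself (a sketch; the path is
illustrative):

  du -sh /tank/share/file1   # likely still shows a tiny size, nothing committed yet
  sync                       # force the pending transaction group out
  du -sh /tank/share/file1   # now reflects the blocks actually allocated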

Regards

Henrik
http://sparcv9.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Listing snapshots in a pool

2010-02-21 Thread David Dyer-Bennet

On 2/21/2010 7:33 PM, Dave wrote:

Try:

zfs list -r -t snapshot zp1


I hate to sound ungrateful, but what you suggest I try is something that 
I listed in my message as having *already* tried.


Still down there in the quotes, you can see it.  I listed two ways to 
get a superset of what I wanted, and pointed out that grep would then 
let you whittle it down, so I can make my script work.


I'm still looking for a cleaner way, because it seems like a weird thing 
not to be able to get directly from zfs list.  A solution was proposed 
to me in email that works in later versions, but wasn't in OpenSolaris 
2009.06: use the -d switch and set recursion to 1 level.  That seems 
like it will work, when a stable version including that option is 
released.  I'll note it in my code for use then.
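
For the archive, the two variants side by side (pool name zp1 from above; -d
needs a build newer than snv_111b):

  # grep workaround that works today: top-level snapshots are exactly zp1@something
  zfs list -H -r -t snapshot zp1 | grep '^zp1@'

  # cleaner form once -d is available: limit recursion to one level
  zfs list -d 1 -t snapshot zp1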




On 2/21/10 5:23 PM, David Dyer-Bennet wrote:

I thought this was simple.  Turns out not to be.

bash-3.2$ zfs list -t snapshot zp1
cannot open 'zp1': operation not applicable to datasets of this type

Fails equally on all the variants of pool name that I've tried,
including zp1/ and zp1/@ and such.

You can do zfs list -t snapshot and get a list of all snapshots in all
pools. You can do zfs list -r -t snapshot zp1 and get a recursive list
of snapshots in zp1. But you can't, with any options I've tried, get a
list of top-level snapshots in a given pool. (It's easy, of course, with
grep, to get the bigger list and then filter out the subset you want).

Am I missing something? Has this been added after snv_111b?


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



--
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS dedup for VAX COFF data type

2010-02-21 Thread N
Hi. Any idea why ZFS does not dedup files with this format?
file /opt/XXX/XXX/data
 VAX COFF executable - version 7926
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] More performance questions [on zfs over nfs]

2010-02-21 Thread Bob Friesenhahn

On Mon, 22 Feb 2010, Henrik Johansson wrote:

You will not see the on disk size of the file with du before the transaction 
group have been committed
which can take up to 30 seconds. ZFS does not even know how much space it will 
consume before writing out
the data to disks since compression might be enabled. You can test this by 
executing sync(1M) on your file
server, when it returns you should have the final size of the file.


Even more interesting, if the file grows to 1GB and is re-written 5 
times and is then truncated to a smaller size (using ftruncate()) 
before the next transaction group is written, then all of that 
transient activity at the larger size is as if it never happened.


Eventually this is seen as a blessing.
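
A small illustration of the effect (a sketch; the dataset path and sizes are
made up, perl's truncate() stands in for ftruncate(), and it assumes the
truncate lands in the same transaction group as the writes):

  # create something largish, then shrink it before the next txg commits
  dd if=/dev/zero of=/tank/fs/bigfile bs=1024k count=100
  perl -e 'truncate("/tank/fs/bigfile", 1048576) or die $!'

  # once the txg is written out, only the truncated 1 MB is ever allocated
  sync
  du -sh /tank/fs/bigfile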

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS dedup for VAX COFF data type

2010-02-21 Thread Adam Leventhal
 Hi Any idea why zfs does not dedup files with this format ?
 file /opt/XXX/XXX/data
 VAX COFF executable - version 7926

With dedup enabled, ZFS will identify and remove duplicates regardless of the 
data format.
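
If in doubt, the dedup machinery itself can be inspected (a sketch; it assumes
dedup is already enabled on the datasets in question, and uses the pool name
from this thread):

  zfs get dedup mypool/test1 mypool/test2
  zpool get dedupratio mypool
  zdb -DD mypool        # DDT histogram: how many blocks are referenced more than once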

Adam

--
Adam Leventhal, Fishworks    http://blogs.sun.com/ahl

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] shrinking a zpool - roadmap

2010-02-21 Thread Ralf Gans
Hello out there,

Is there any progress on shrinking zpools,
i.e. removing vdevs from a pool?

Cheers,

Ralf
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS dedup for VAX COFF data type

2010-02-21 Thread N
I have these exact same files placed under zfs with dedup enabled:

-rw-r- 1 root root 6800152 Feb 21 20:17 /mypool/test1/data
-rw-r- 1 root root 6800152 Feb 21 20:17 /mypool/test2/data

~ # zpool list
NAMESIZE  ALLOC   FREECAP  DEDUP  HEALTH  ALTROOT
mypool  1.98T  14.7G  1.97T 0%  1.00x  ONLINE  -

Any idea why ZFS did not dedup in this case?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS dedup for VAX COFF data type

2010-02-21 Thread N
Never mind. I just tried larger files and now it does show a change in the 
dedup ratio.

-rw-r- 1 root root 2097157724 Feb 21 20:58 /mypool/test1/data
-rw-r- 1 root root 2097157724 Feb 21 20:58 /mypool/test2/data

NAMESIZE  ALLOC   FREECAP  DEDUP  HEALTH  ALTROOT
mypool  1.98T  16.8G  1.97T 0%  1.12x  ONLINE  -

Thanks
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Sharing Issues

2010-02-21 Thread Tau
I am having a bit of an issue. I have an OpenSolaris box set up as a fileserver, 
running CIFS to provide shares to some Windows machines.

Now let's call my zpool /tank1. When I create a zfs filesystem called /test it 
gets shared as /test and I can see it as test on my Windows machines. Now 
when I create a child system inside the test system (let's call this 
/tank1/test/child), the child system gets shared as well on its own as 
test_child as seen on the Windows system.

I want to be able to create nested filesystems, and not have the nested 
systems shared through CIFS; I want to access them through the root system, 
and only have the root systems shared to the Windows machines.

I have been trawling through the manuals and forums, but can't seem to find the 
answer.

I'm sure I'm missing something simple; can someone shed some light on this 
issue?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sharing Issues

2010-02-21 Thread Frank Cusack

On 2/21/10 11:08 PM -0800 Tau wrote:

I am having a bit of an issue I have an opensolaris box setup as a
fileserver.  Running through CIFS to provide shares to some windows
machines.

Now lets call my zpool /tank1,


Let's not, because '/' is an illegal character in a zpool name.


   when i create a zfs filesystem called
/test it gets shared as /test and i can see it as test on my windows
machines...  Now when i create a child system inside the test system
(lets call this /tank1/test/child) the child system gets shared as well
on its own as test_child as seen on the windows system.

I want to be able to create  nested filesystems, and not have the nested
systems shared through cifs  i want to access them through the root
system, and only have the root systems shared to the windows machines...


You're saying "system" as if it's shorthand for "filesystem".  It isn't.
And technically, for ZFS you call them datasets, but "filesystem" is OK.

Does simply setting sharesmb=off not work?  By default, descendant
filesystems inherit the properties of the parent, including share
properties.  So for each child filesystem you don't want to share,
you would have to override the default inherited sharesmb property.

What you should probably do is set an ACL to disallow access to the
child filesystems.  Because even if there is a sharesmb setting that
blocks sharing of a child, what happens then is that the client
accessing the parent can still write into the directory which holds
the mount point for the child, with the write going to the parent,
and on the fileserver you can't see data that the client has written
there because it is masked by the mounted child filesystem.  This
creates all sorts of problems.
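
A sketch of both approaches (dataset names are the ones from the post above;
the deny ACE simply reuses the compact permission string shown earlier in this
digest):

  # stop the child from being shared on its own
  zfs set sharesmb=off tank1/test/child
  zfs get -r sharesmb tank1/test

  # and/or deny everyone access to the child's contents outright
  chmod A+everyone@:rwxpdDaARWcCos:deny /tank1/test/child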

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss