[zfs-discuss] number of blocks changes

2012-08-03 Thread Justin Stringfellow
While this isn't causing me any problems, I'm curious as to why this is 
happening...:



$ dd if=/dev/random of=ob bs=128k count=1  while true
 do
 ls -s ob
 sleep 1
 done
0+1 records in
0+1 records out
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   4 ob    changes here
   4 ob
   4 ob
^C
$ ls -l ob
-rw-r--r--   1 justin staff   1040 Aug  3 09:28 ob

I was expecting the '1', since this is a zfs with recordsize=128k. Not sure I 
understand the '4', or why it happens ~30s later.  Can anyone distribute clue 
in my direction?


s10u10, running 144488-06 KU. zfs is v4, pool is v22.


cheers,
--justin



-bash-3.00$
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] number of blocks changes

2012-08-03 Thread Sašo Kiselkov
On 08/03/2012 03:18 PM, Justin Stringfellow wrote:
 While this isn't causing me any problems, I'm curious as to why this is 
 happening...:
 
 
 
 $ dd if=/dev/random of=ob bs=128k count=1  while true

Can you check whether this happens from /dev/urandom as well?

--
Saso
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] number of blocks changes

2012-08-03 Thread Jim Klimov

2012-08-03 17:18, Justin Stringfellow пишет:

While this isn't causing me any problems, I'm curious as to why this is 
happening...:



$ dd if=/dev/random of=ob bs=128k count=1  while true

do
ls -s ob
sleep 1
done

0+1 records in
0+1 records out
1 ob

...


I was expecting the '1', since this is a zfs with recordsize=128k. Not sure I 
understand the '4', or why it happens ~30s later.  Can anyone distribute clue 
in my direction?


s10u10, running 144488-06 KU. zfs is v4, pool is v22.


I think for the cleanness of the experiment, you should also include
sync after the dd's, to actually commit your file to the pool.
What is the pool's redundancy setting?

I am not sure what ls -s actually accounts for file's FS-block
usage, but I wonder if it might include metadata (relevant pieces of
the block pointer tree individual to the file). Also check if the
disk usage reported by du -k ob varies similarly, for the fun of it?

//Jim

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can the ZFS copies attribute substitute HW disk redundancy?

2012-08-03 Thread Richard Elling
On Aug 2, 2012, at 5:40 PM, Nigel W wrote:

 On Thu, Aug 2, 2012 at 3:39 PM, Richard Elling richard.ell...@gmail.com 
 wrote:
 On Aug 1, 2012, at 8:30 AM, Nigel W wrote:
 
 
 Yes. +1
 
 The L2ARC as is it currently implemented is not terribly useful for
 storing the DDT in anyway because each DDT entry is 376 bytes but the
 L2ARC reference is 176 bytes, so best case you get just over double
 the DDT entries in the L2ARC as what you would get into the ARC but
 then you have also have no ARC left for anything else :(.
 
 
 You are making the assumption that each DDT table entry consumes one
 metadata update. This is not the case. The DDT is implemented as an AVL
 tree. As per other metadata in ZFS, the data is compressed. So you cannot
 make a direct correlation between the DDT entry size and the affect on the
 stored metadata on disk sectors.
 -- richard
 
 It's compressed even when in the ARC?


That is a slightly odd question. The ARC contains ZFS blocks. DDT metadata is 
manipulated in memory as an AVL tree, so what you can see in the ARC is the
metadata blocks that were read and uncompressed from the pool or packaged
in blocks and written to the pool. Perhaps it is easier to think of them as 
metadata
in transition? :-)
 -- richard

--
ZFS Performance and Training
richard.ell...@richardelling.com
+1-760-896-4422







___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] what have you been buying for slog and l2arc?

2012-08-03 Thread Karl Rossing
I'm looking at 
http://www.intel.com/content/www/us/en/solid-state-drives/solid-state-drives-ssd.html 
wondering what I should get.


Are people getting intel 330's for l2arc and 520's for slog?

Karl




CONFIDENTIALITY NOTICE:  This communication (including all attachments) is
confidential and is intended for the use of the named addressee(s) only and
may contain information that is private, confidential, privileged, and
exempt from disclosure under law.  All rights to privilege are expressly
claimed and reserved and are not waived.  Any use, dissemination,
distribution, copying or disclosure of this message and any attachments, in
whole or in part, by anyone other than the intended recipient(s) is strictly
prohibited.  If you have received this communication in error, please notify
the sender immediately, delete this communication from all data storage
devices and destroy all hard copies.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Missing disk space

2012-08-03 Thread Burt Hailey
I seem to be missing a large amount of disk space and am not sure how to
locate it.  My pool has a total of 1.9TB of disk space.  When I run df  -k I
see that the pool is using ~650GB of space and has only ~120GB available.
Running zfs list shows that my pool (localpool) is using 1.67T.  When I
total up the amount of snapshots I see that they are using 250GB.  Unless
I'm missing something it appears that there is ~750GB of disk space that is
unaccounted for.  We do hourly snapshots.   Two days ago I deleted 100GB of
data and did not see a corresponding increase in snapshot sizes.  I'm new to
zfs and am reading the zfs admin handbook but I wanted to post this to get
some suggestions on what to look at.

 

Burt Hailey

 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Missing disk space

2012-08-03 Thread Cindy Swearingen

You said you're new to ZFS so might consider using zpool list
and zfs list rather df -k to reconcile your disk space.

In addition, your pool type (mirrored on RAIDZ) provides a
different space perspective in zpool list that is not always
easy to understand.

http://docs.oracle.com/cd/E23824_01/html/E24456/filesystem-6.html#scrolltoc

See these sections:

Displaying ZFS File System Information
Resolving ZFS File System Space Reporting Issues

Let us know if this doesn't help.

Thanks,

Cindy

On 08/03/12 16:00, Burt Hailey wrote:

I seem to be missing a large amount of disk space and am not sure how to
locate it. My pool has a total of 1.9TB of disk space. When I run df -k
I see that the pool is using ~650GB of space and has only ~120GB
available. Running zfs list shows that my pool (localpool) is using
1.67T. When I total up the amount of snapshots I see that they are using
250GB. Unless I’m missing something it appears that there is ~750GB of
disk space that is unaccounted for. We do hourly snapshots. Two days ago
I deleted 100GB of data and did not see a corresponding increase in
snapshot sizes. I’m new to zfs and am reading the zfs admin handbook but
I wanted to post this to get some suggestions on what to look at.

Burt Hailey



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] what have you been buying for slog and l2arc?

2012-08-03 Thread Bob Friesenhahn

On Fri, 3 Aug 2012, Karl Rossing wrote:

I'm looking at 
http://www.intel.com/content/www/us/en/solid-state-drives/solid-state-drives-ssd.html 
wondering what I should get.


Are people getting intel 330's for l2arc and 520's for slog?


For the slog, you should look for a SLC technology SSD which saves 
unwritten data on power failure.  In Intel-speak, this is called 
Enhanced Power Loss Data Protection.  I am not running across any 
Intel SSDs which claim to match these requirements.


Extreme write IOPS claims in consumer SSDs are normally based on large 
write caches which can lose even more data if there is a power 
failure.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] what have you been buying for slog and l2arc?

2012-08-03 Thread Hung-Sheng Tsao (LaoTsao) Ph.D
Intel 311 Series Larsen Creek 20GB 2.5 SATA II SLC Enterprise Solid State Disk 
SSDSA2VP020G201

Average Rating
(12 reviews)
Write a Review

Sent from my iPad

On Aug 3, 2012, at 21:39, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote:

 On Fri, 3 Aug 2012, Karl Rossing wrote:
 
 I'm looking at 
 http://www.intel.com/content/www/us/en/solid-state-drives/solid-state-drives-ssd.html
  wondering what I should get.
 
 Are people getting intel 330's for l2arc and 520's for slog?
 
 For the slog, you should look for a SLC technology SSD which saves unwritten 
 data on power failure.  In Intel-speak, this is called Enhanced Power Loss 
 Data Protection.  I am not running across any Intel SSDs which claim to 
 match these requirements.
 
 Extreme write IOPS claims in consumer SSDs are normally based on large write 
 caches which can lose even more data if there is a power failure.
 
 Bob
 -- 
 Bob Friesenhahn
 bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] what have you been buying for slog and l2arc?

2012-08-03 Thread Neil Perrin

On 08/03/12 19:39, Bob Friesenhahn wrote:

On Fri, 3 Aug 2012, Karl Rossing wrote:


I'm looking at 
http://www.intel.com/content/www/us/en/solid-state-drives/solid-state-drives-ssd.html
 wondering what I should get.

Are people getting intel 330's for l2arc and 520's for slog?


For the slog, you should look for a SLC technology SSD which saves unwritten data on 
power failure.  In Intel-speak, this is called Enhanced Power Loss Data 
Protection.  I am not running across any Intel SSDs which claim to match these 
requirements.


- That shouldn't be necessary. ZFS flushes the write cache for any device 
written before returning
from the synchronous request to ensure data stability.




Extreme write IOPS claims in consumer SSDs are normally based on large write 
caches which can lose even more data if there is a power failure.

Bob


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss