Re: [zfs-discuss] number of blocks changes

2012-08-06 Thread Justin Stringfellow


> Can you check whether this happens from /dev/urandom as well?

It does:

finsdb137@root dd if=/dev/urandom of=oub bs=128k count=1 && while true
 do
 ls -s oub
 sleep 1
 done
0+1 records in
0+1 records out
   1 oub
   1 oub
   1 oub
   1 oub
   1 oub
   4 oub
   4 oub
   4 oub
   4 oub
   4 oub



Re: [zfs-discuss] number of blocks changes

2012-08-06 Thread Justin Stringfellow


> I think for the cleanness of the experiment, you should also include
> sync after the dd's, to actually commit your file to the pool.

OK that 'fixes' it:

finsdb137@root dd if=/dev/random of=ob bs=128k count=1 && sync && while true
 do
 ls -s ob
 sleep 1
 done
0+1 records in
0+1 records out
   4 ob
   4 ob
   4 ob
.. etc.

I guess I knew this had something to do with stuff being flushed to disk; I
don't know why I didn't think of it myself.
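
To put a number on that delay, a rough sh sketch like the one below (file name
and location are arbitrary) writes a small file and then polls ls -s once a
second until the reported block count changes:

#!/bin/sh
# Rough sketch: write a small file, then poll ls -s once a second until
# the reported block count changes; the counter gives an approximate
# idea of how long the commit to disk took.
f=./blocktest                 # arbitrary file on a ZFS dataset
dd if=/dev/urandom of="$f" bs=128k count=1
before=`ls -s "$f" | awk '{print $1}'`
i=0
while :
do
    now=`ls -s "$f" | awk '{print $1}'`
    if [ "$now" -ne "$before" ]; then
        echo "ls -s went from $before to $now blocks after roughly $i seconds"
        break
    fi
    i=`expr $i + 1`
    sleep 1
done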

> What is the pool's redundancy setting?
copies=1. Full zfs get below, but in short, it's a basic mirrored root with 
default settings. Hmm, maybe I should mirror root with copies=2.    ;)

> I am not sure how ls -s actually accounts for the file's FS-block
> usage, but I wonder if it might include metadata (relevant pieces of
> the block pointer tree individual to the file). Also check whether the
> disk usage reported by du -k ob varies similarly, for the fun of it?

Yes, it varies too.

finsdb137@root dd if=/dev/random of=ob bs=128k count=1 && while true
 do
 ls -s ob
 du -k ob
 sleep 1
 done
0+1 records in
0+1 records out
   1 ob
0   ob
   1 ob
0   ob
   1 ob
0   ob
   1 ob
0   ob
   4 ob
2   ob
   4 ob
2   ob
   4 ob
2   ob
   4 ob
2   ob
   4 ob
2   ob

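For what it's worth, the two numbers above are consistent with each other:
ls -s reports allocated blocks (st_blocks) in 512-byte units on Solaris, while
du -k reports kilobytes, so 4 blocks and 2 KB describe the same 2048 bytes of
allocation. A quick way to see the raw value, assuming perl is installed:

# Compare the raw st_blocks value with what ls -s and du -k print.
# 'ob' is the test file from above.
perl -e '@s = stat($ARGV[0]); print "st_blocks=$s[12]\n"' ob
ls -s ob     # same number, as 512-byte blocks
du -k ob     # same allocation, rounded to whole kilobytes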

finsdb137@root zfs get all rpool/ROOT/s10s_u9wos_14a
NAME                       PROPERTY              VALUE                  SOURCE
rpool/ROOT/s10s_u9wos_14a  type                  filesystem             -
rpool/ROOT/s10s_u9wos_14a  creation              Tue Mar  1 15:09 2011  -
rpool/ROOT/s10s_u9wos_14a  used                  20.6G                  -
rpool/ROOT/s10s_u9wos_14a  available             37.0G                  -
rpool/ROOT/s10s_u9wos_14a  referenced            20.6G                  -
rpool/ROOT/s10s_u9wos_14a  compressratio         1.00x                  -
rpool/ROOT/s10s_u9wos_14a  mounted               yes                    -
rpool/ROOT/s10s_u9wos_14a  quota                 none                   default
rpool/ROOT/s10s_u9wos_14a  reservation           none                   default
rpool/ROOT/s10s_u9wos_14a  recordsize            128K                   default
rpool/ROOT/s10s_u9wos_14a  mountpoint            /                      local
rpool/ROOT/s10s_u9wos_14a  sharenfs              off                    default
rpool/ROOT/s10s_u9wos_14a  checksum              on                     default
rpool/ROOT/s10s_u9wos_14a  compression           off                    default
rpool/ROOT/s10s_u9wos_14a  atime                 on                     default
rpool/ROOT/s10s_u9wos_14a  devices               on                     default
rpool/ROOT/s10s_u9wos_14a  exec                  on                     default
rpool/ROOT/s10s_u9wos_14a  setuid                on                     default
rpool/ROOT/s10s_u9wos_14a  readonly              off                    default
rpool/ROOT/s10s_u9wos_14a  zoned                 off                    default
rpool/ROOT/s10s_u9wos_14a  snapdir               hidden                 default
rpool/ROOT/s10s_u9wos_14a  aclmode               groupmask              default
rpool/ROOT/s10s_u9wos_14a  aclinherit            restricted             default
rpool/ROOT/s10s_u9wos_14a  canmount              noauto                 local
rpool/ROOT/s10s_u9wos_14a  shareiscsi            off                    default
rpool/ROOT/s10s_u9wos_14a  xattr                 on                     default
rpool/ROOT/s10s_u9wos_14a  copies                1                      default
rpool/ROOT/s10s_u9wos_14a  version               3                      -
rpool/ROOT/s10s_u9wos_14a  utf8only              off                    -
rpool/ROOT/s10s_u9wos_14a  normalization         none                   -
rpool/ROOT/s10s_u9wos_14a  casesensitivity       sensitive              -
rpool/ROOT/s10s_u9wos_14a  vscan                 off                    default
rpool/ROOT/s10s_u9wos_14a  nbmand                off                    default
rpool/ROOT/s10s_u9wos_14a  sharesmb              off                    default
rpool/ROOT/s10s_u9wos_14a  refquota              none                   default
rpool/ROOT/s10s_u9wos_14a  refreservation        none                   default
rpool/ROOT/s10s_u9wos_14a  primarycache          all                    default
rpool/ROOT/s10s_u9wos_14a  secondarycache        all                    default
rpool/ROOT/s10s_u9wos_14a  usedbysnapshots       0                      -
rpool/ROOT/s10s_u9wos_14a  usedbydataset         20.6G                  -
rpool/ROOT/s10s_u9wos_14a  usedbychildren        0                      -
rpool/ROOT/s10s_u9wos_14a  usedbyrefreservation  0                      -
rpool/ROOT/s10s_u9wos_14a  logbias               latency                default
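
As a side note, zfs get also accepts a comma-separated property list, so the
few settings that matter here can be pulled out without the full dump, along
these lines:

# Only the properties relevant to this discussion.
zfs get recordsize,copies,compression,checksum rpool/ROOT/s10s_u9wos_14a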


[zfs-discuss] number of blocks changes

2012-08-03 Thread Justin Stringfellow
While this isn't causing me any problems, I'm curious as to why this is 
happening...:



$ dd if=/dev/random of=ob bs=128k count=1 && while true
 do
 ls -s ob
 sleep 1
 done
0+1 records in
0+1 records out
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   1 ob
   4 ob    <- changes here
   4 ob
   4 ob
^C
$ ls -l ob
-rw-r--r--   1 justin staff   1040 Aug  3 09:28 ob

I was expecting the '1', since this is a zfs with recordsize=128k. Not sure I 
understand the '4', or why it happens ~30s later.  Can anyone distribute clue 
in my direction?
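
For reference, ls -s reports the file's allocated blocks (stat()'s st_blocks)
rather than its length, so it can trail the byte count from ls -l until the
data is actually written out. A minimal loop to watch both side by side
(sketch only):

# Print logical size (ls -l) and allocated blocks (ls -s) once a second;
# the size stays at 1040 bytes while the block count jumps later.
while :
do
    ls -l ob | awk '{print "bytes: " $5}'
    ls -s ob | awk '{print "blocks: " $1}'
    sleep 1
done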


s10u10, running 144488-06 KU. zfs is v4, pool is v22.


cheers,
--justin



-bash-3.00$


Re: [zfs-discuss] number of blocks changes

2012-08-03 Thread Sašo Kiselkov
On 08/03/2012 03:18 PM, Justin Stringfellow wrote:
> While this isn't causing me any problems, I'm curious as to why this is
> happening...:
>
> $ dd if=/dev/random of=ob bs=128k count=1 && while true

Can you check whether this happens from /dev/urandom as well?

--
Saso


Re: [zfs-discuss] number of blocks changes

2012-08-03 Thread Jim Klimov

2012-08-03 17:18, Justin Stringfellow wrote:

> While this isn't causing me any problems, I'm curious as to why this is
> happening...:
>
> $ dd if=/dev/random of=ob bs=128k count=1 && while true
> do
> ls -s ob
> sleep 1
> done
>
> 0+1 records in
> 0+1 records out
> 1 ob

...

> I was expecting the '1', since this is a zfs with recordsize=128k. Not sure I
> understand the '4', or why it happens ~30s later.  Can anyone distribute clue
> in my direction?
>
> s10u10, running 144488-06 KU. zfs is v4, pool is v22.


I think for the cleanness of the experiment, you should also include
sync after the dd's, to actually commit your file to the pool.
What is the pool's redundancy setting?

I am not sure how ls -s actually accounts for the file's FS-block
usage, but I wonder if it might include metadata (relevant pieces of
the block pointer tree individual to the file). Also check whether the
disk usage reported by du -k ob varies similarly, for the fun of it?
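
One way to dig into exactly what is allocated for the file, not tried in this
thread, is zdb, which can dump an object's block pointers given the object
(inode) number from ls -i; roughly:

# Sketch: find the file's object number, then dump its block pointer
# tree straight from the pool (dataset name as in Justin's output).
ls -i ob                                    # first field = object number
zdb -ddddd rpool/ROOT/s10s_u9wos_14a 12345  # 12345 = placeholder object number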

//Jim
