Re: [HEADSUP] ZFS version 15 committed to head

2010-07-17 Thread Marco van Lienen
On Tue, Jul 13, 2010 at 04:02:42PM +0200, you (Martin Matuska) sent the 
following to the -current list:
  Dear community,
 
 Feel free to test everything and don't forget to report any bugs found.

When I create a raidz pool out of 3 equally sized HDDs (3x2TB WD Caviar Green 
drives), the available space reported by zpool and by zfs is VERY different (not 
just the well-known differences).

On a 9.0-CURRENT amd64 box:

# uname -a
FreeBSD trinity.lordsith.net 9.0-CURRENT FreeBSD 9.0-CURRENT #1: Tue Jul 13 
21:58:14 UTC 2010 r...@trinity.lordsith.net:/usr/obj/usr/src/sys/trinity  
amd64

# zpool create pool1 raidz ada2 ada3 ada4
# zpool list pool1
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
pool1  5.44T   147K  5.44T     0%  ONLINE  -

# ada drives dmesg output:
ada2 at ahcich4 bus 0 scbus5 target 0 lun 0
ada2: WDC WD20EARS-00MVWB0 50.0AB50 ATA-8 SATA 2.x device
ada2: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada2: Command Queueing enabled
ada2: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
ada3 at ahcich5 bus 0 scbus6 target 0 lun 0
ada3: WDC WD20EARS-00MVWB0 50.0AB50 ATA-8 SATA 2.x device
ada3: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada3: Command Queueing enabled
ada3: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
ada4 at ahcich6 bus 0 scbus7 target 0 lun 0
ada4: WDC WD20EADS-11R6B1 80.00A80 ATA-8 SATA 2.x device
ada4: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada4: Command Queueing enabled
ada4: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
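
(As a quick sanity check of the raw numbers -- this is just my own back-of-the-envelope 
arithmetic with bc(1), not tool output:)

# echo 'scale=2; 3907029168 * 512 / 1024^4' | bc       # one drive, in TiB
1.81
# echo 'scale=2; 3 * 3907029168 * 512 / 1024^4' | bc   # all three drives together
5.45

So the 5.44T that 'zpool list' prints above is simply the raw capacity of the three 
drives.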

'zfs list', however, shows only:
# zfs list pool1
NAME    USED  AVAIL  REFER  MOUNTPOINT
pool1  91.9K  3.56T  28.0K  /pool1

I just lost the space of an entire hdd!
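
(If my arithmetic is right, it really is one drive's worth that disappears:)

# echo 'scale=2; 2 * 3907029168 * 512 / 1024^4' | bc   # two of the three drives, in TiB
3.63

i.e. the raw 5.44T minus one drive is about 3.63T, and 'zfs list' shows 3.56T 
available on an empty pool, presumably the same figure less some ZFS-internal 
overhead.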

To rule out a possible drive issue I created a raidz pool based on 3 65m files.

# dd if=/dev/zero of=/file1 bs=1m count=65 
# dd if=/dev/zero of=/file2 bs=1m count=65 
# dd if=/dev/zero of=/file3 bs=1m count=65 
# zpool create test raidz /file1 /file2 /file3
#
# zpool list test
NAME   SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
test   181M   147K   181M     0%  ONLINE  -
# zfs list test
NAME   USED  AVAIL  REFER  MOUNTPOINT
test  91.9K  88.5M  28.0K  /test

When I create a non-redundant storage pool using the same 3 files or the same 3 
drives, the available space reported by zfs is what I expect to see, so it looks 
like creating a raidz storage pool specifically is what triggers this odd behavior.
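
(For completeness, the non-redundant control test I mean is simply the following; 
the pool name 'plain' is just an example:)

# zpool destroy test                          # drop the raidz test pool first
# zpool create plain /file1 /file2 /file3     # no raidz/mirror keyword -> plain stripe
# zfs list plain
# zpool destroy plain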

This doesn't appear to be specific to the ZFS v15 bits committed to -HEAD, since I 
see exactly the same behavior on an 8.0-RELEASE-p2 i386 box with ZFS v14.

A friend of mine is running osol build 117, although he created his raidz pool on 
an even older build. His raidz pool also uses 3 equally-sized drives (3x2TB) and is 
showing:

% zfs list -r pool2
NAME    USED  AVAIL  REFER  MOUNTPOINT
pool2  3.32T  2.06T  3.18T  /export/pool2
% df -h pool2
Filesystem             size   used  avail capacity  Mounted on
pool2                  5.4T   3.2T   2.1T    61%    /export/pool2

To run further tests he also created a test raidz pool using three 65 MB files:

% zfs list test2
NAME    USED  AVAIL  REFER  MOUNTPOINT
test2  73.5K   149M    21K  /test2
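
(Side by side, if I'm reading the numbers right -- again just my own bc arithmetic, 
and assuming his file-backed pool has the same ~181M raw size as mine:)

# echo 'scale=1; 2 * 181 / 3' | bc        # two-thirds of the 181M raw test pool
120.6
# echo '181 - 149' | bc                   # raw size minus what osol's zfs reports free
32
# echo 'scale=1; 2*181/3 - 88.5' | bc     # two-thirds of raw minus what FreeBSD's zfs reports
32.1

Both systems seem to knock off a similar ~32M of internal overhead, but the FreeBSD 
figure additionally has one device's worth of parity subtracted, while the osol 
figure apparently does not.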

So on osol build 117 the available space is what I expect to see, whereas on 
FreeBSD 9.0-CURRENT amd64 and 8.0-RELEASE-p2 i386 it is not.

Is anyone else seeing the same issue?

Cheers,
marco




Re: [HEADSUP] ZFS version 15 committed to head

2010-07-17 Thread Marco van Lienen
On Sat, Jul 17, 2010 at 12:25:56PM +0200, you (Stefan Bethke) sent the 
following to the -current list:
 On 17.07.2010, at 12:14, Marco van Lienen wrote:
 
  # zpool list pool1
  NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
  pool1  5.44T   147K  5.44T     0%  ONLINE  -
 ...
  zfs list however only shows:
  # zfs list pool1
  NAME    USED  AVAIL  REFER  MOUNTPOINT
  pool1  91.9K  3.56T  28.0K  /pool1
  
  I just lost the space of an entire hdd!
 
 zpool always shows the raw capacity (without redundancy), zfs the actual 
 available capacity.

I have read a lot about those differences, but why then does zfs on opensolaris 
report more available space while FreeBSD does not?
That would imply that my friend running osol build 117 couldn't fill his raidz 
pool past 3.56T.

marco




Re: RAIDZ capacity (was ZFS version 15 committed to head)

2010-07-17 Thread Marco van Lienen
On Sat, Jul 17, 2010 at 01:04:52PM +0200, you (Stefan Bethke) sent the 
following to the -current list:
  
  I have read many things about those differences, but why then does zfs on 
  opensolaris report more available space whereas FreeBSD does not?
  That would imply that my friend running osol build 117 couldn't fill up his 
  raidz pool past the 3.56T.
 
 You didn't show us how your friend's pool is set up.
 
 With RAIDZ1, the capacity of one of the devices in the pool is used for 
 redundancy; with RAIDZ2 it's two disks' worth.  So three 2TB disks with RAIDZ1 
 give you 4TB of net capacity.  If you don't care about redundancy, use a simple 
 concatenation, i.e. don't specify mirror, raidz or raidz2 when creating the 
 pool.
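
(For what it's worth, that 4TB figure and the ~3.6T zfs shows here are the same 
number in different units -- again just my own bc arithmetic; 2000398934016 is one 
drive's size in bytes, i.e. 3907029168 sectors x 512:)

# echo 'scale=2; 2 * 2000398934016 / 1000^4' | bc   # two data drives, in decimal TB
4.00
# echo 'scale=2; 2 * 2000398934016 / 1024^4' | bc   # the same, in TiB as zfs reports it
3.63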

My friend created his raidz pool just the same way as I did:

  zpool create pool2 raidz c0d0 c0d1 c0d2

So just 3 dedicated drives.

I also posted an example of creating a test raidz pool backed by three 65 MB files.
On osol, 'zfs list' reports more available space for that test raidz pool.
When I created a similar test raidz pool, also backed by three 65 MB files, 'zfs 
list' on my FreeBSD boxes (9.0-CURRENT amd64 and 8.0-RELEASE-p2 i386) showed much 
less available space.
So regardless of whether we use whole disks or plain files for testing, 'zfs list' 
on the osol system reports more available space.

cheers,
marco




Re: [HEADSUP] ZFS version 15 committed to head

2010-07-17 Thread Marco van Lienen
On Sat, Jul 17, 2010 at 10:12:10AM -0700, you (Freddie Cash) sent the following 
to the -current list:
 
  I have read many things about those differences, but why then does zfs on 
  opensolaris report more available space whereas FreeBSD does not?
  That would imply that my friend running osol build 117 couldn't fill up his 
  raidz pool past the 3.56T.
 
 You used different commands to check the disk space on OSol (zpool vs df).
 
 Try the same commands on both FreeBSD and OSol (zpool and zfs) and
 you'll see the same results.

I guess you missed my original mail in this thread, in which I also showed the 
output of 'zfs list -r pool2' on osol, which clearly shows more available space 
than on FreeBSD.

% zfs list -r pool2
NAME    USED  AVAIL  REFER  MOUNTPOINT
pool2  3.32T  2.06T  3.18T  /export/pool2
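
(Note that on osol those numbers add back up to the raw size of the pool:)

# echo '3.32 + 2.06' | bc
5.38

which is essentially the 5.4T of raw capacity, with no parity deducted, whereas on 
FreeBSD an otherwise empty pool of the same geometry shows only 3.56T available.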

 
 df works differently on OSol than it does on FreeBSD, you can't compare them.

 HTH
