Re: [HEADSUP] ZFS version 15 committed to head
On Sat, 17 Jul 2010 12:51:34 +0200 Marco van Lienen wrote:
> On Sat, Jul 17, 2010 at 12:25:56PM +0200, you (Stefan Bethke) sent
> the following to the -current list:
> > Am 17.07.2010 um 12:14 schrieb Marco van Lienen:
> >
> > > # zpool list pool1
> > > NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
> > > pool1  5.44T   147K  5.44T   0%  ONLINE  -
> > ...
> > > zfs list however only shows:
> > > # zfs list pool1
> > > NAME    USED  AVAIL  REFER  MOUNTPOINT
> > > pool1  91.9K  3.56T  28.0K  /pool1
> > >
> > > I just lost the space of an entire hdd!
> >
> > zpool always shows the raw capacity (without redundancy), zfs the
> > actual available capacity.
>
> I have read many things about those differences, but why then does
> zfs on opensolaris report more available space whereas FreeBSD does
> not? That would imply that my friend running osol build 117 couldn't
> fill up his raidz pool past the 3.56T.

If you compare the zfs list output of OSol and FreeBSD and they differ
where they shouldn't, you should have a look at whether compression
and/or deduplication (where available) is activated.

Bye,
Alexander.

___
freebsd-current@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to "freebsd-current-unsubscr...@freebsd.org"
Re: [HEADSUP] ZFS version 15 committed to head
On Sat, Jul 17, 2010 at 11:21 AM, Marco van Lienen wrote:
> On Sat, Jul 17, 2010 at 10:12:10AM -0700, you (Freddie Cash) sent the
> following to the -current list:
>> > I have read many things about those differences, but why then does zfs on
>> > opensolaris report more available space whereas FreeBSD does not?
>> > That would imply that my friend running osol build 117 couldn't fill up
>> > his raidz pool past the 3.56T.
>>
>> You used different commands to check the disk space on OSol (zpool vs df).
>>
>> Try the same commands on both FreeBSD and OSol (zpool and zfs) and
>> you'll see the same results.
>
> I guess you missed my original mail of this thread in which I also showed the
> output of 'zfs list -r pool2' on osol where clearly there is more available
> space shown than on FreeBSD.
>
> % zfs list -r pool2
> NAME    USED  AVAIL  REFER  MOUNTPOINT
> pool2  3.32T  2.06T  3.18T  /export/pool2

No, I saw that. But you compared zpool and zfs output on FreeBSD, and zfs
and df output on OSol. IOW, you didn't compare the same things.

Compare the output of zpool and zfs on both FreeBSD and OSol, it should be
the same.

--
Freddie Cash
fjwc...@gmail.com
Re: [HEADSUP] ZFS version 15 committed to head
On Sat, Jul 17, 2010 at 10:12:10AM -0700, you (Freddie Cash) sent the
following to the -current list:
> > I have read many things about those differences, but why then does zfs on
> > opensolaris report more available space whereas FreeBSD does not?
> > That would imply that my friend running osol build 117 couldn't fill up his
> > raidz pool past the 3.56T.
>
> You used different commands to check the disk space on OSol (zpool vs df).
>
> Try the same commands on both FreeBSD and OSol (zpool and zfs) and
> you'll see the same results.

I guess you missed my original mail of this thread in which I also showed the
output of 'zfs list -r pool2' on osol where clearly there is more available
space shown than on FreeBSD.

% zfs list -r pool2
NAME    USED  AVAIL  REFER  MOUNTPOINT
pool2  3.32T  2.06T  3.18T  /export/pool2

> df works differently on OSol than it does on FreeBSD, you can't compare them.

HTH
Re: [HEADSUP] ZFS version 15 committed to head
On Sat, Jul 17, 2010 at 3:51 AM, Marco van Lienen wrote:
> On Sat, Jul 17, 2010 at 12:25:56PM +0200, you (Stefan Bethke) sent the
> following to the -current list:
>> Am 17.07.2010 um 12:14 schrieb Marco van Lienen:
>>
>> > # zpool list pool1
>> > NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
>> > pool1  5.44T   147K  5.44T   0%  ONLINE  -
>> ...
>> > zfs list however only shows:
>> > # zfs list pool1
>> > NAME    USED  AVAIL  REFER  MOUNTPOINT
>> > pool1  91.9K  3.56T  28.0K  /pool1
>> >
>> > I just lost the space of an entire hdd!
>>
>> zpool always shows the raw capacity (without redundancy), zfs the actual
>> available capacity.
>
> I have read many things about those differences, but why then does zfs on
> opensolaris report more available space whereas FreeBSD does not?
> That would imply that my friend running osol build 117 couldn't fill up his
> raidz pool past the 3.56T.

You used different commands to check the disk space on OSol (zpool vs df).

Try the same commands on both FreeBSD and OSol (zpool and zfs) and
you'll see the same results.

df works differently on OSol than it does on FreeBSD, you can't compare them.

--
Freddie Cash
fjwc...@gmail.com
Re: [HEADSUP] ZFS version 15 committed to head
On Sat, Jul 17, 2010 at 12:25:56PM +0200, you (Stefan Bethke) sent the
following to the -current list:
> Am 17.07.2010 um 12:14 schrieb Marco van Lienen:
>
> > # zpool list pool1
> > NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
> > pool1  5.44T   147K  5.44T   0%  ONLINE  -
> ...
> > zfs list however only shows:
> > # zfs list pool1
> > NAME    USED  AVAIL  REFER  MOUNTPOINT
> > pool1  91.9K  3.56T  28.0K  /pool1
> >
> > I just lost the space of an entire hdd!
>
> zpool always shows the raw capacity (without redundancy), zfs the actual
> available capacity.

I have read many things about those differences, but why then does zfs on
opensolaris report more available space whereas FreeBSD does not?
That would imply that my friend running osol build 117 couldn't fill up his
raidz pool past the 3.56T.

marco
Re: [HEADSUP] ZFS version 15 committed to head
Am 17.07.2010 um 12:14 schrieb Marco van Lienen:

> # zpool list pool1
> NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
> pool1  5.44T   147K  5.44T   0%  ONLINE  -
...
> zfs list however only shows:
> # zfs list pool1
> NAME    USED  AVAIL  REFER  MOUNTPOINT
> pool1  91.9K  3.56T  28.0K  /pool1
>
> I just lost the space of an entire hdd!

zpool always shows the raw capacity (without redundancy), zfs the actual
available capacity.


Stefan

--
Stefan Bethke   Fon +49 151 14070811
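The parity arithmetic behind this explanation can be sketched as follows. This is a rough, back-of-the-envelope check in Python (not authoritative): a raidz1 vdev of n equal disks loses roughly one disk's worth of space to parity, so usable capacity is about (n - 1) / n of the raw size; the small remaining gap to the 3.56T that zfs list reports presumably goes to pool metadata and reservations.

```python
# Rough raidz1 capacity check for the 3-disk pool in this thread.
# raidz1 keeps one disk's worth of parity, so usable space is
# roughly (n - 1) / n of the raw size reported by `zpool list`.

TIB = 1024 ** 4

raw = 5.44 * TIB              # `zpool list` SIZE for the 3x2TB raidz1 pool
usable = raw * (3 - 1) / 3    # deduct one of three disks for parity

print(f"raw    = {raw / TIB:.2f}T")     # 5.44T
print(f"usable = {usable / TIB:.2f}T")  # ~3.63T, close to the 3.56T from `zfs list`
```

The ~0.07T difference between 3.63T and the reported 3.56T is not explained by parity alone; metadata overhead is the usual suspect, but that part is an assumption.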
Re: [HEADSUP] ZFS version 15 committed to head
On Tue, Jul 13, 2010 at 04:02:42PM +0200, you (Martin Matuska) sent the
following to the -current list:
> Dear community,
>
> Feel free to test everything and don't forget to report any bugs found.

When I create a raidz pool of 3 equally sized hdd's (3x2Tb WD Caviar Green
drives), the available space reported by zpool and zfs is VERY different
(not just the known differences).

On a 9.0-CURRENT amd64 box:

# uname -a
FreeBSD trinity.lordsith.net 9.0-CURRENT FreeBSD 9.0-CURRENT #1: Tue Jul 13 21:58:14 UTC 2010 r...@trinity.lordsith.net:/usr/obj/usr/src/sys/trinity amd64
# zpool create pool1 raidz ada2 ada3 ada4
# zpool list pool1
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
pool1  5.44T   147K  5.44T   0%  ONLINE  -

# ada drives dmesg output:
ada2 at ahcich4 bus 0 scbus5 target 0 lun 0
ada2: ATA-8 SATA 2.x device
ada2: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada2: Command Queueing enabled
ada2: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
ada3 at ahcich5 bus 0 scbus6 target 0 lun 0
ada3: ATA-8 SATA 2.x device
ada3: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada3: Command Queueing enabled
ada3: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
ada4 at ahcich6 bus 0 scbus7 target 0 lun 0
ada4: ATA-8 SATA 2.x device
ada4: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada4: Command Queueing enabled
ada4: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)

zfs list however only shows:

# zfs list pool1
NAME    USED  AVAIL  REFER  MOUNTPOINT
pool1  91.9K  3.56T  28.0K  /pool1

I just lost the space of an entire hdd!

To rule out a possible drive issue I created a raidz pool based on 3 65m
files:

# dd if=/dev/zero of=/file1 bs=1m count=65
# dd if=/dev/zero of=/file2 bs=1m count=65
# dd if=/dev/zero of=/file3 bs=1m count=65
# zpool create test raidz /file1 /file2 /file3
# zpool list test
NAME   SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
test   181M   147K   181M   0%  ONLINE  -
# zfs list test
NAME   USED  AVAIL  REFER  MOUNTPOINT
test  91.9K  88.5M  28.0K  /test

When I create a non-redundant storage pool using the same 3 files or 3
drives, the available space reported by zfs is what I'm expecting to see,
so it looks like creating a raidz storage pool shows very weird behavior.

This doesn't have much to do with the ZFS v15 bits committed to -HEAD,
since I see the exact same behavior on an 8.0-RELEASE-p2 i386 box with
ZFS v14.

A friend of mine is running osol build 117 (he created his raidz pool on
an even older build, though). His raidz pool also uses 3 equally-sized
drives (3x2Tb) and is showing:

% zfs list -r pool2
NAME    USED  AVAIL  REFER  MOUNTPOINT
pool2  3.32T  2.06T  3.18T  /export/pool2
% df -h pool2
Filesystem  size  used  avail  capacity  Mounted on
pool2       5.4T  3.2T   2.1T       61%  /export/pool2

To run further tests he also created a test raidz pool using 3 65m files:

% zfs list test2
NAME    USED  AVAIL  REFER  MOUNTPOINT
test2  73.5K   149M    21K  /test2

So on osol build 117 the available space is what I'm expecting to see,
whereas on FreeBSD 9.0-CURRENT amd64 and 8.0-RELEASE-p2 i386 it is not.

Is anyone else seeing the same issue?

Cheers,

marco
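To put numbers on the discrepancy described above, here is a rough sanity check in Python against the figures quoted in this mail. The framing is an inference, not a confirmed explanation: FreeBSD's zfs list AVAIL looks like the raw size with raidz1 parity already deducted, while on the osol pool USED + AVAIL adds up to roughly the raw zpool size.

```python
# Sanity check of the 3x2TB raidz1 figures quoted in this thread.
# Hypothesis (inferred from the numbers, not confirmed): FreeBSD's
# `zfs list` deducts parity up front, OSol's apparently does not.

raw = 5.44  # TB, `zpool list` SIZE on both systems

freebsd_avail = 3.56                 # `zfs list` AVAIL on FreeBSD (empty pool)
osol_used, osol_avail = 3.32, 2.06   # `zfs list` on osol build 117

parity_deducted = raw * 2 / 3        # one of three disks lost to parity

print(f"2/3 of raw:        {parity_deducted:.2f}T")       # ~3.63T, near FreeBSD's 3.56T
print(f"OSol USED + AVAIL: {osol_used + osol_avail:.2f}T")  # ~5.38T, near the raw 5.44T
```

Note the small file-backed test pools do not fit this arithmetic cleanly (88.5M on FreeBSD and 149M on osol, against a 181M raw size), so metadata overhead clearly dominates at that scale.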
Re: [HEADSUP] ZFS version 15 committed to head
Hi,

On 07/13/2010 04:02 PM, Martin Matuska wrote:
> For people interested in running this on 8.1 I will provide patches for
> releng/8.1 and stable/8 as soon as 8.1 gets released.

Previously, I've run earlier versions (8) with sys/cddl taken from head.
Is this a no-go with what we have currently in stable/8 and trunk?

Thanks,