Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-17 Thread Roland Rambau

Eric,

in my understanding (which I learned from more qualified people,
but I may be mistaken anyway), whenever we discuss a transfer rate
like x Mb/s, y GB/s or z PB/d, the M, G or P refers to the
frequency and not to the data.

1 MB/s means "transfer bytes at 1 MHz", NOT "transfer megabytes at 1 Hz";

therefore it's 1,000,000 B/s (strictly speaking).


Of course, protocol overhead is usually much larger, so the small
1000:1024 difference is irrelevant anyway and can (and will) be neglected.
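
To put a number on it, here is a minimal Python sketch of the point (and of the
question quoted below about sending 1 TiB over a 1 GB/s link); the figures are
plain arithmetic, nothing ZFS- or vendor-specific is assumed:

# Minimal sketch: rate prefixes are decimal, the data size here is binary.
TIB = 1024 ** 4          # 1 TiB in bytes (binary prefix)
LINK_RATE = 10 ** 9      # 1 GB/s in bytes per second (decimal prefix)
seconds = TIB / LINK_RATE
print(f"1 TiB over a 1 GB/s link takes {seconds:.1f} s")
# -> about 1099.5 s, roughly 10% longer than the naive 1000 s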

  -- Roland





Am 17.03.2010 04:45, schrieb Erik Trimble:

On 3/16/2010 4:23 PM, Roland Rambau wrote:

Eric,

careful:

Am 16.03.2010 23:45, schrieb Erik Trimble:


Up until 5 years ago (or so), GigaByte meant a power of 2 to EVERYONE,
not just us techies. I would hardly call 40+ years of using the various
giga/mega/kilo prefixes as a power of 2 in computer science as
non-authoritative.


How long does it take to transmit 1 TiB over a 1 GB/sec transmission
link, assuming no overhead?

See ?

hth

-- Roland



I guess folks have gotten lazy all over.

Actually, for networking, it's all GigaBIT, but I get your meaning.
Which is why it's all properly labeled 1Gb Ethernet, not 1GB ethernet.

That said, I'm still under the impression that Giga = 1024^3 for
networking, just like Mega = 1024^2. After all, it's 100Mbit Ethernet,
which doesn't mean it runs at 100 MHz.

That is, on Fast Ethernet, I should be sending a max 100 x 1024^2 BITS
per second.


Data amounts are (so far as I know universally) employing powers-of-2,
while frequencies are done in powers-of-10. Thus, baud (for modems) is
in powers-of-10, as are CPU/memory speeds. Memory (*RAM of all sorts),
bus THROUGHPUT (e.g. PCI-E), networking throughput,
and even graphics throughput are in powers-of-2.

If they want to use powers-of-10, then use the actual normal names,
like graphics performance ratings have done (i.e. 10 billion texels, not
10 Gigatexels). Take a look at Nvidia's product literature:

http://www.nvidia.com/object/IO_11761.html


It's just the storage vendors using the broken measurements. Bastards!





--


Roland Rambau Server and Solution Architects
Principal Field Technologist  Global Systems Engineering
Phone: +49-89-46008-2520  Mobile:+49-172-84 58 129
Fax:   +49-89-46008-  mailto:roland.ram...@sun.com

Sitz der Gesellschaft: Sun Microsystems GmbH,
Sonnenallee 1, D-85551 Kirchheim-Heimstetten
Amtsgericht München: HRB 161028;
Geschäftsführer: Thomas Schröder
*** UNIX ** /bin/sh * FORTRAN **
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-17 Thread Casper . Dik

Carson Gaspar wrote:
 Not quite. 
 11 x 10^12 =~ 10.004 x (1024^4).

 So, the 'zpool list' is right on, at 10T available.

 Duh, I was doing GiB math (y = x * 10^9 / 2^30), not TiB math (y = x * 
 10^12 / 2^40).

 Thanks for the correction.

You're welcome. :-)


On a not-completely-on-topic note:

Has there been a consideration by anyone to do a class-action lawsuit 
for false advertising on this?  I know they now have to include the 1GB 
= 1,000,000,000 bytes thing in their specs and somewhere on the box, 
but just because I say 1 L = 0.9 metric liters somewhere on the box, 
it shouldn't mean that I should be able to advertise in huge letters 2 L 
bottle of Coke on the outside of the package...

I think such attempts have been made, and I think one was settled by 
Western Digital.

https://www.wdc.com/settlement/docs/document20.htm

This was in 2006.

I was apparently part of the 'class' as I had a disk registered; I think 
they gave some software.

See also:

http://en.wikipedia.org/wiki/Binary_prefix

Casper

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-17 Thread David Dyer-Bennet

On 3/16/2010 23:21, Erik Trimble wrote:

On 3/16/2010 8:29 PM, David Dyer-Bennet wrote:

On 3/16/2010 17:45, Erik Trimble wrote:

David Dyer-Bennet wrote:

On Tue, March 16, 2010 14:59, Erik Trimble wrote:


Has there been a consideration by anyone to do a class-action lawsuit
for false advertising on this?  I know they now have to include 
the 1GB

= 1,000,000,000 bytes thing in their specs and somewhere on the box,
but just because I say 1 L = 0.9 metric liters somewhere on the 
box,
it shouldn't mean that I should be able to advertise in huge 
letters 2 L

bottle of Coke on the outside of the package...


I think giga is formally defined as a prefix meaning 10^9; that 
is, the
definition the disk manufacturers are using is the standard metric 
one and

very probably the one most people expect.  There are international
standards for these things.

I'm well aware of the history of power-of-two block and disk sizes in
computers (the first computers I worked with pre-dated that 
period); but I

think we need to recognize that this is our own weird local usage of
terminology, and that we can't expect the rest of the world to 
change to

our way of doing things.


That's RetConn-ing.  The only reason the stupid GiB / GB thing came 
around in the past couple of years is that the disk drive 
manufacturers pushed SI to do it.
Up until 5 years ago (or so), GigaByte meant a power of 2 to 
EVERYONE, not just us techies.   I would hardly call 40+ years of 
using the various giga/mega/kilo  prefixes as a power of 2 in 
computer science as non-authoritative.  In fact, I would argue that 
the HD manufacturers don't have a leg to stand on - it's not like 
they were outside the field and used to the standard SI notation 
of powers of 10.  Nope. They're inside the industry, used the 
powers-of-2 for decades, then suddenly decided to modify that 
meaning, as it served their marketing purposes.


The SI meaning was first proposed in the 1920s, so far as I can 
tell.  Our entire history of special usage took place while the SI 
definition was in place.  We simply mis-used it.  There was at the 
time no prefix for what we actually wanted (not giga then, but mega), 
so we borrowed and repurposed mega.


Doesn't matter whether the original meaning of K/M/G was a 
power-of-10.  What matters is internal usage in the industry.  And 
that has been consistent with powers-of-2 for 40+ years.  There has 
been NO outside understanding that GB = 1 billion bytes until the 
Storage Industry decided it wanted it that way.  That's pretty much 
the definition of distorted advertising.


That's simply not true.  The first computer I programmed, an IBM 1620, 
was routinely referred to as having 20K of core.  That meant 20,000 
decimal digits; not 20,480.  The other two memory configurations were 
similarly 40K for 40,000 and 60K for 60,000.  The first computer I 
was *paid* for programming, the 1401, had 8K of core, and that was 
8,000 locations, not 8,192.  This was right on 40 years ago (fall of 
1969 when I started working on the 1401).  Yes, neither was brand new, 
but IBM was still leasing them to customers (it came in configurations 
of 4k, 8k, 12k, and I think 16k; been a while!).

--

David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-17 Thread Giovanni Tirloni
On Wed, Mar 17, 2010 at 9:34 AM, David Dyer-Bennet d...@dd-b.net wrote:

 On 3/16/2010 23:21, Erik Trimble wrote:

 On 3/16/2010 8:29 PM, David Dyer-Bennet wrote:

 On 3/16/2010 17:45, Erik Trimble wrote:

 David Dyer-Bennet wrote:

 On Tue, March 16, 2010 14:59, Erik Trimble wrote:

  Has there been a consideration by anyone to do a class-action lawsuit
 for false advertising on this?  I know they now have to include the
 1GB
 = 1,000,000,000 bytes thing in their specs and somewhere on the box,
 but just because I say 1 L = 0.9 metric liters somewhere on the box,
 it shouldn't mean that I should be able to advertise in huge letters 2
 L
 bottle of Coke on the outside of the package...


 I think giga is formally defined as a prefix meaning 10^9; that is,
 the
 definition the disk manufacturers are using is the standard metric one
 and
 very probably the one most people expect.  There are international
 standards for these things.

 I'm well aware of the history of power-of-two block and disk sizes in
 computers (the first computers I worked with pre-dated that period);
 but I
 think we need to recognize that this is our own weird local usage of
 terminology, and that we can't expect the rest of the world to change
 to
 our way of doing things.


 That's RetConn-ing.  The only reason the stupid GiB / GB thing came
 around in the past couple of years is that the disk drive manufacturers
 pushed SI to do it.
 Up until 5 years ago (or so), GigaByte meant a power of 2 to EVERYONE,
 not just us techies.   I would hardly call 40+ years of using the various
 giga/mega/kilo  prefixes as a power of 2 in computer science as
 non-authoritative.  In fact, I would argue that the HD manufacturers don't
 have a leg to stand on - it's not like they were outside the field and
 used to the standard SI notation of powers of 10.  Nope. They're inside
 the industry, used the powers-of-2 for decades, then suddenly decided to
 modify that meaning, as it served their marketing purposes.


 The SI meaning was first proposed in the 1920s, so far as I can tell.
  Our entire history of special usage took place while the SI definition was
 in place.  We simply mis-used it.  There was at the time no prefix for what
 we actually wanted (not giga then, but mega), so we borrowed and repurposed
 mega.

  Doesn't matter whether the original meaning of K/M/G was a
 power-of-10.  What matters is internal usage in the industry.  And that has
 been consistent with powers-of-2 for 40+ years.  There has been NO outside
 understanding that GB = 1 billion bytes until the Storage Industry decided
 it wanted it that way.  That's pretty much the definition of distorted
 advertising.


 That's simply not true.  The first computer I programmed, an IBM 1620, was
 routinely referred to as having 20K of core.  That meant 20,000 decimal
 digits; not 20,480.  The other two memory configurations were similarly
 40K for 40,000 and 60K for 60,000.  The first computer I was *paid* for
 programming, the 1401, had 8K of core, and that was 8,000 locations, not
 8,192.  This was right on 40 years ago (fall of 1969 when I started working
 on the 1401).  Yes, neither was brand new, but IBM was still leasing them to
 customers (it came in configurations of 4k, 8k, 12k, and I think 16k; been a
 while!).


At this point in history it doesn't matter much who's right or wrong
anymore.

IMHO, what matters is that pretty much everything from the disk controller
to the CPU and network interface is advertised in power-of-2 terms and disks
sit alone using power-of-10. And students are taught that computers work
with bits and so everything is a power of 2.

Just last week I had to remind people that a 24-disk JBOD with 1TB disks
wouldn't provide 24TB of storage since disks show up as 931GB.
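
As a quick sanity check, a minimal Python sketch of that arithmetic (plain unit
conversion, nothing vendor-specific assumed):

# Minimal sketch: advertised (decimal) disk sizes vs. the binary units
# that OS tools typically report.
ADVERTISED_TB = 10 ** 12                     # one "label" TB in bytes
per_disk_gib = ADVERTISED_TB / 2 ** 30       # ~931, shown as "931GB"
total_tib = 24 * ADVERTISED_TB / 2 ** 40     # ~21.8, not 24
print(f"per disk: {per_disk_gib:.0f} GiB, 24 disks: {total_tib:.1f} TiB")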

It *is* an anomaly and I don't expect it to be fixed.

Perhaps some disk vendor could add more bits to its drives and advertise a
real 1TB disk using power-of-2 and show how people are being misled by
other vendors that use power-of-10. Highly unlikely but would sure get some
respect from the storage community.
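
For scale, a minimal Python sketch of what such a drive would have to hold
(straight arithmetic, not a real product spec):

# Minimal sketch: a true binary-1TB (1 TiB) drive in decimal "label" bytes.
tib_in_bytes = 2 ** 40
print(f"{tib_in_bytes:,} bytes = {tib_in_bytes / 1e12:.3f} label TB")
# -> 1,099,511,627,776 bytes, i.e. about a 1.1 TB label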

-- 
Giovanni
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-17 Thread Casper . Dik


IMHO, what matters is that pretty much everything from the disk controller
to the CPU and network interface is advertised in power-of-2 terms and disks
sit alone using power-of-10. And students are taught that computers work
with bits and so everything is a power of 2.

That is simply not true:

Memory: power of 2 (bytes)
Network: power of 10 (bits/s)
Disk: power of 10 (bytes)
CPU Frequency: power of 10 (cycles/s)
SD/Flash/..: power of 10 (bytes)
Bus speed: power of 10

Main memory is the odd one out.
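
A minimal Python sketch of why this matters for network numbers in particular
(decimal prefixes, and bits rather than bytes); the figures are plain
arithmetic, not from any spec:

# Minimal sketch: a "1 Gb/s" link in the units an OS might display.
bits_per_s = 1 * 10 ** 9              # decimal giga, and bits
bytes_per_s = bits_per_s / 8          # 125,000,000 B/s
mib_per_s = bytes_per_s / 2 ** 20     # ~119.2 MiB/s
print(f"1 Gb/s = {bytes_per_s / 1e6:.0f} MB/s (decimal) = {mib_per_s:.1f} MiB/s (binary)")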

Just last week I had to remind people that a 24-disk JBOD with 1TB disks
wouldn't provide 24TB of storage since disks show up as 931GB.

Well some will say it's 24T :-)

It *is* an anomaly and I don't expect it to be fixed.

Perhaps some disk vendor could add more bits to its drives and advertise a
real 1TB disk using power-of-2 and show how people are being misled by
other vendors that use power-of-10. Highly unlikely but would sure get some
respect from the storage community.

You've not been misled unless you've had your head in the sand for the last
five to ten years.

Casper

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-17 Thread Edho P Arief
On Wed, Mar 17, 2010 at 9:09 PM, Giovanni Tirloni gtirl...@sysdroid.com wrote:
 IMHO, what matters is that pretty much everything from the disk controller
 to the CPU and network interface is advertised in power-of-2 terms and disks
 sit alone using power-of-10. And students are taught that computers work
 with bits and so everything is a power of 2.


Apparently someone wrote false information on Wikipedia [1].

[1] http://en.wikipedia.org/wiki/Data_rate_units#Examples

-- 
O ascii ribbon campaign - stop html mail - www.asciiribbon.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-17 Thread Giovanni Tirloni
On Wed, Mar 17, 2010 at 11:23 AM, casper@sun.com wrote:



 IMHO, what matters is that pretty much everything from the disk controller
 to the CPU and network interface is advertised in power-of-2 terms and
 disks
 sit alone using power-of-10. And students are taught that computers work
 with bits and so everything is a power of 2.

 That is simply not true:

 Memory: power of 2 (bytes)
 Network: power of 10 (bits/s)
Disk: power of 10 (bytes)
CPU Frequency: power of 10 (cycles/s)
SD/Flash/..: power of 10 (bytes)
Bus speed: power of 10

 Main memory is the odd one out.


My bad on generalizing that information.

Perhaps the software stack dealing with disks should be changed to use
power-of-10. Unlikely too.

-- 
Giovanni
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-16 Thread Stefan Walk


On 15 Mar 2010, at 23:03, Tonmaus wrote:


Hi Cindy,
trying to reproduce this


For a RAIDZ pool, the zpool list command identifies
the inflated space
for the storage pool, which is the physical available
space without an
accounting for redundancy overhead.

The zfs list command identifies how much actual pool
space is available
to the file systems.


I am lacking 1 TB on my pool:

u...@filemeister:~$ zpool list daten
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
daten    10T  3,71T  6,29T  37%  1.00x  ONLINE  -
u...@filemeister:~$ zpool status daten
 pool: daten
state: ONLINE
scrub: none requested
config:

   NAME  STATE READ WRITE CKSUM
   daten ONLINE   0 0 0
  raidz2-0  ONLINE   0 0 0
   c10t2d0   ONLINE   0 0 0
   c10t3d0   ONLINE   0 0 0
   c10t4d0   ONLINE   0 0 0
   c10t5d0   ONLINE   0 0 0
   c10t6d0   ONLINE   0 0 0
   c10t7d0   ONLINE   0 0 0
   c10t8d0   ONLINE   0 0 0
   c10t9d0   ONLINE   0 0 0
   c11t18d0  ONLINE   0 0 0
   c11t19d0  ONLINE   0 0 0
   c11t20d0  ONLINE   0 0 0
   spares
  c11t21d0  AVAIL

errors: No known data errors
u...@filemeister:~$ zfs list daten
NAME    USED  AVAIL  REFER  MOUNTPOINT
daten  3,01T  4,98T   110M  /daten

I am counting 11 disks 1 TB each in a raidz2 pool. This is 11 TB  
gross capacity, and 9 TB net. Zpool is however stating 10 TB and zfs  
is stating 8TB. The difference between net and gross is correct, but  
where is the capacity from the 11th disk going?


Regards,

Tonmaus


This is because 1TB is not 1TB. The 1TB on your disk label means 10^12  
bytes, while 1TB in the OS means (2^10)^4 = 1024^4 bytes.

11*(1000**4)/(1024.0**4)
= 10.0044417195022
So your 11 disk label TB are 10 OS TB. Fun, isn't it?
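
For anyone who wants to redo this for other sizes, a minimal Python sketch of
the same conversion (the function name is just illustrative, it is not from any
tool):

# Minimal sketch: label ("decimal") terabytes to the binary terabytes
# that zpool/zfs and most OS tools report.
def label_tb_to_os_tb(label_tb):
    return label_tb * 10 ** 12 / 2 ** 40

print(label_tb_to_os_tb(11))   # ~10.004 -> shows up as "10T"
print(label_tb_to_os_tb(1))    # ~0.909  -> a "1TB" disk is ~931 GiB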

Best regards,
Stefan
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-16 Thread Erik Trimble

Carson Gaspar wrote:
Not quite. 
11 x 10^12 =~ 10.004 x (1024^4).


So, the 'zpool list' is right on, at 10T available.


Duh, I was doing GiB math (y = x * 10^9 / 2^30), not TiB math (y = x * 
10^12 / 2^40).


Thanks for the correction.


You're welcome. :-)


On a not-completely-on-topic note:

Has there been a consideration by anyone to do a class-action lawsuit 
for false advertising on this?  I know they now have to include the 1GB 
= 1,000,000,000 bytes thing in their specs and somewhere on the box, 
but just because I say 1 L = 0.9 metric liters somewhere on the box, 
it shouldn't mean that I should be able to advertise in huge letters 2 L 
bottle of Coke on the outside of the package...





--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-16 Thread David Dyer-Bennet

On Tue, March 16, 2010 14:59, Erik Trimble wrote:


 Has there been a consideration by anyone to do a class-action lawsuit
 for false advertising on this?  I know they now have to include the 1GB
 = 1,000,000,000 bytes thing in their specs and somewhere on the box,
 but just because I say 1 L = 0.9 metric liters somewhere on the box,
 it shouldn't mean that I should be able to advertise in huge letters 2 L
 bottle of Coke on the outside of the package...

I think giga is formally defined as a prefix meaning 10^9; that is, the
definition the disk manufacturers are using is the standard metric one and
very probably the one most people expect.  There are international
standards for these things.

I'm well aware of the history of power-of-two block and disk sizes in
computers (the first computers I worked with pre-dated that period); but I
think we need to recognize that this is our own weird local usage of
terminology, and that we can't expect the rest of the world to change to
our way of doing things.

-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-16 Thread Tonmaus
 Has there been a consideration by anyone to do a
 class-action lawsuit 
 for false advertising on this?  I know they now have
 to include the 1GB 
 = 1,000,000,000 bytes thing in their specs and
 somewhere on the box, 
 but just because I say 1 L = 0.9 metric liters
 somewhere on the box, 
 it shouldn't mean that I should be able to advertise
 in huge letters 2 L 
 bottle of Coke on the outside of the package...

If I am not completely mistaken, 1,000^n/1,024^n converges to 0 as n goes to 
infinity. That is certainly an unwarranted facilitation of Kryder's law for 
very large storage devices.

Regards,

Tonmaus
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-16 Thread Erik Trimble

David Dyer-Bennet wrote:

On Tue, March 16, 2010 14:59, Erik Trimble wrote:

  

Has there been a consideration by anyone to do a class-action lawsuit
for false advertising on this?  I know they now have to include the 1GB
= 1,000,000,000 bytes thing in their specs and somewhere on the box,
but just because I say 1 L = 0.9 metric liters somewhere on the box,
it shouldn't mean that I should be able to advertise in huge letters 2 L
bottle of Coke on the outside of the package...



I think giga is formally defined as a prefix meaning 10^9; that is, the
definition the disk manufacturers are using is the standard metric one and
very probably the one most people expect.  There are international
standards for these things.

I'm well aware of the history of power-of-two block and disk sizes in
computers (the first computers I worked with pre-dated that period); but I
think we need to recognize that this is our own weird local usage of
terminology, and that we can't expect the rest of the world to change to
our way of doing things.
  


That's RetConn-ing.  The only reason the stupid GiB / GB thing came 
around in the past couple of years is that the disk drive manufacturers 
pushed SI to do it. 

Up until 5 years ago (or so), GigaByte meant a power of 2 to EVERYONE, 
not just us techies.   I would hardly call 40+ years of using the 
various giga/mega/kilo  prefixes as a power of 2 in computer science as 
non-authoritative.  In fact, I would argue that the HD manufacturers 
don't have a leg to stand on - it's not like they were outside the 
field and used to the standard SI notation of powers of 10.  Nope. 
They're inside the industry, used the powers-of-2 for decades, then 
suddenly decided to modify that meaning, as it served their marketing 
purposes.


Note that NOBODY else in the computer industry does this in their 
marketing materials - if it's such a standard, why on earth don't the 
DRAM chip makers support (and market) it that way?   The various MHz/GHz 
notations are powers-of-10, but they've always been that way, and more 
importantly, are defined by the OSes and other software as being that 
way.  HD capacities are an anomaly,  and it's purely marketing smooze. 


They should get smacked hard again on this.

It would be one thing if it was never seen (or only by super-nerds like 
us), but for the average consumer, when they buy that nice shiny Dell 
with an advertised 1TB disk, then boot to Windows 7, why does Windows 
then say that their C drive is only 900GB in size?   How is that /not/ 
deceptive marketing?



--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-16 Thread Erik Trimble

Tonmaus wrote:

Has there been a consideration by anyone to do a
class-action lawsuit 
for false advertising on this?  I know they now have
to include the 1GB 
= 1,000,000,000 bytes thing in their specs and
somewhere on the box, 
but just because I say 1 L = 0.9 metric liters
somewhere on the box, 
it shouldn't mean that I should be able to advertise
in huge letters 2 L 
bottle of Coke on the outside of the package...



If I am not completely mistaken, 1,000^n/1,024^n converges to 0 as n goes to 
infinity. That is certainly an unwarranted facilitation of Kryder's law for 
very large storage devices.

Regards,

Tonmaus
  
well, that's true, even if it is lim (n -> infinity) of [1000^n / 
1024^n]; it's still 0.  :-)


But seriously, you lose about 2.3% per prefix.  Right now, we're up to almost 
a 10% difference at TB.  In 10 years, for petabyte, we're at over 11% 
loss.  In 20 years, when exabyte drives (or whatever storage is by then) are 
common, that's over 13% loss.
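
A minimal Python sketch of that per-prefix loss (plain arithmetic; the exact
percentages are not from any ZFS source):

# Minimal sketch: how far decimal prefixes fall behind binary ones,
# one prefix step (factor 1000 vs 1024) at a time.
for n, name in enumerate(["kilo", "mega", "giga", "tera", "peta", "exa"], 1):
    loss = 1 - (1000 / 1024) ** n
    print(f"{name:>4}: {loss:.1%} smaller than the binary unit")
# kilo ~2.3%, tera ~9.1%, peta ~11.2%, exa ~13.3%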


Frankly, I'm starting to see an analogy with nautical miles vs. statute miles.

--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-16 Thread Erik Trimble

Erik Trimble wrote:

Tonmaus wrote:

Has there been a consideration by anyone to do a
class-action lawsuit for false advertising on this?  I know they now 
have

to include the 1GB = 1,000,000,000 bytes thing in their specs and
somewhere on the box, but just because I say 1 L = 0.9 metric liters
somewhere on the box, it shouldn't mean that I should be able to 
advertise

in huge letters 2 L bottle of Coke on the outside of the package...



If I am not completely mistaken, 1,000^n/1,024^n converges to 0 
as n goes to infinity. That is certainly an unwarranted facilitation of 
Kryder's law for very large storage devices.


Regards,

Tonmaus
  
well, that's true, even if it is lim (n -> infinity) of [1000^n / 
1024^n]; it's still 0.  :-)



Actually, my old Calculus teacher would be disappointed in me.

It's  lim (n -> infinity) of ( 1000^n / (2^10)^n )

or:
  lim (n -> infinity) of ( 1000^n / 2^(10n) )


As Tonmaus pointed out, it all still trends to 0. 
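
Purely to see the limit numerically, a minimal Python sketch (the sample
exponents are arbitrary):

# Minimal sketch: (1000/1024)^n shrinks toward 0 as n grows.
for n in (1, 4, 10, 100, 1000):
    print(n, (1000 / 1024) ** n)
# 1 -> 0.977, 4 -> 0.909, 10 -> 0.789, 100 -> 0.094, 1000 -> ~5e-11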



Now that little bit of pedantic anal calculus-izing is over, back to our 
regularly scheduled madness.



--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-16 Thread Edho P Arief
On Wed, Mar 17, 2010 at 5:45 AM, Erik Trimble erik.trim...@sun.com wrote:
 Up until 5 years ago (or so), GigaByte meant a power of 2 to EVERYONE, not
 just us techies.   I would hardly call 40+ years of using the various
 giga/mega/kilo  prefixes as a power of 2 in computer science as
 non-authoritative.  In fact, I would argue that the HD manufacturers don't
 have a leg to stand on - it's not like they were outside the field and
 used to the standard SI notation of powers of 10.  Nope. They're inside
 the industry, used the powers-of-2 for decades, then suddenly decided to
 modify that meaning, as it served their marketing purposes.


it's probably just me, but I always raged when calculating anything
using imperial units, binary bytes and time.

-- 
O ascii ribbon campaign - stop html mail - www.asciiribbon.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-16 Thread Tonmaus
The reason why there is not more uproar is that cost per data unit is dwindling 
while the gap resulting from this marketing trick is increasing. I remember a 
case a German broadcaster filed against a system integrator in the age of the 4 
GB SCSI drive. This was in the mid-90s.

Regards,

Tonmaus
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-16 Thread Roland Rambau

Eric,

careful:

Am 16.03.2010 23:45, schrieb Erik Trimble:


Up until 5 years ago (or so), GigaByte meant a power of 2 to EVERYONE,
not just us techies. I would hardly call 40+ years of using the various
giga/mega/kilo prefixes as a power of 2 in computer science as
non-authoritative.


How long does it take to transmit 1 TiB over a 1 GB/sec transmission
link, assuming no overhead?

See ?

  hth

  -- Roland



--


Roland Rambau Server and Solution Architects
Principal Field Technologist  Global Systems Engineering
Phone: +49-89-46008-2520  Mobile:+49-172-84 58 129
Fax:   +49-89-46008-  mailto:roland.ram...@sun.com

Sitz der Gesellschaft: Sun Microsystems GmbH,
Sonnenallee 1, D-85551 Kirchheim-Heimstetten
Amtsgericht München: HRB 161028;
Geschäftsführer: Thomas Schröder
*** UNIX ** /bin/sh * FORTRAN **
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-16 Thread David Dyer-Bennet

On 3/16/2010 17:45, Erik Trimble wrote:

David Dyer-Bennet wrote:

On Tue, March 16, 2010 14:59, Erik Trimble wrote:


Has there been a consideration by anyone to do a class-action lawsuit
for false advertising on this?  I know they now have to include the 
1GB

= 1,000,000,000 bytes thing in their specs and somewhere on the box,
but just because I say 1 L = 0.9 metric liters somewhere on the box,
it shouldn't mean that I should be able to advertise in huge letters 
2 L

bottle of Coke on the outside of the package...


I think giga is formally defined as a prefix meaning 10^9; that is, 
the
definition the disk manufacturers are using is the standard metric 
one and

very probably the one most people expect.  There are international
standards for these things.

I'm well aware of the history of power-of-two block and disk sizes in
computers (the first computers I worked with pre-dated that period); 
but I

think we need to recognize that this is our own weird local usage of
terminology, and that we can't expect the rest of the world to change to
our way of doing things.


That's RetConn-ing.  The only reason the stupid GiB / GB thing came 
around in the past couple of years is that the disk drive 
manufacturers pushed SI to do it.
Up until 5 years ago (or so), GigaByte meant a power of 2 to EVERYONE, 
not just us techies.   I would hardly call 40+ years of using the 
various giga/mega/kilo  prefixes as a power of 2 in computer science 
as non-authoritative.  In fact, I would argue that the HD 
manufacturers don't have a leg to stand on - it's not like they were 
outside the field and used to the standard SI notation of powers 
of 10.  Nope. They're inside the industry, used the powers-of-2 for 
decades, then suddenly decided to modify that meaning, as it served 
their marketing purposes.


The SI meaning was first proposed in the 1920s, so far as I can tell.  
Our entire history of special usage took place while the SI definition 
was in place.  We simply mis-used it.  There was at the time no prefix 
for what we actually wanted (not giga then, but mega), so we borrowed 
and repurposed mega.


I know what you mean about the disk manufacturers changing.  And I'm 
sure they did it because it made their disks sound bigger for free, and 
that's clearly a marketing choice, and yes, it creates the problem that 
when the software reports the size it often doesn't agree with the 
manufacturer size.  I just can't get up a good head of steam about this 
when they're using the prefix correctly and we're not, though.


--

David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-16 Thread Erik Trimble

On 3/16/2010 4:23 PM, Roland Rambau wrote:

Eric,

careful:

Am 16.03.2010 23:45, schrieb Erik Trimble:


Up until 5 years ago (or so), GigaByte meant a power of 2 to EVERYONE,
not just us techies. I would hardly call 40+ years of using the various
giga/mega/kilo prefixes as a power of 2 in computer science as
non-authoritative.


How long does it take to transmit 1 TiB over a 1 GB/sec transmission
link, assuming no overhead?

See ?

  hth

  -- Roland



I guess folks have gotten lazy all over.

Actually, for networking, it's all GigaBIT, but I get your meaning. 
Which is why it's all properly labeled 1Gb Ethernet, not 1GB ethernet.


That said, I'm still under the impression that Giga = 1024^3 for 
networking, just like Mega = 1024^2.  After all, it's 100Mbit Ethernet, 
which doesn't mean it runs at 100 MHz.


That is, on Fast Ethernet, I should be sending a max 100 x 1024^2 BITS 
per second.



Data amounts are (so far as I know universally) employing powers-of-2, 
while frequencies are done in powers-of-10. Thus, baud (for modems) is 
in powers-of-10, as are CPU/memory speeds.  Memory (*RAM of all sorts), 
bus THROUGHPUT (e.g. PCI-E), networking throughput, 
and even graphics throughput are in powers-of-2.


If they want to use powers-of-10, then use the actual normal names, 
like graphics performance ratings have done (i.e. 10 billion texels, not 
10 Gigatexels).   Take a look at Nvidia's product literature:


http://www.nvidia.com/object/IO_11761.html


It's just the storage vendors using the broken measurements. Bastards!



--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-16 Thread Erik Trimble

On 3/16/2010 8:29 PM, David Dyer-Bennet wrote:

On 3/16/2010 17:45, Erik Trimble wrote:

David Dyer-Bennet wrote:

On Tue, March 16, 2010 14:59, Erik Trimble wrote:


Has there been a consideration by anyone to do a class-action lawsuit
for false advertising on this?  I know they now have to include the 
1GB

= 1,000,000,000 bytes thing in their specs and somewhere on the box,
but just because I say 1 L = 0.9 metric liters somewhere on the box,
it shouldn't mean that I should be able to advertise in huge letters 
2 L

bottle of Coke on the outside of the package...


I think giga is formally defined as a prefix meaning 10^9; that 
is, the
definition the disk manufacturers are using is the standard metric 
one and

very probably the one most people expect.  There are international
standards for these things.

I'm well aware of the history of power-of-two block and disk sizes in
computers (the first computers I worked with pre-dated that period); 
but I

think we need to recognize that this is our own weird local usage of
terminology, and that we can't expect the rest of the world to 
change to

our way of doing things.


That's RetConn-ing.  The only reason the stupid GiB / GB thing came 
around in the past couple of years is that the disk drive 
manufacturers pushed SI to do it.
Up until 5 years ago (or so), GigaByte meant a power of 2 to 
EVERYONE, not just us techies.   I would hardly call 40+ years of 
using the various giga/mega/kilo  prefixes as a power of 2 in 
computer science as non-authoritative.  In fact, I would argue that 
the HD manufacturers don't have a leg to stand on - it's not like 
they were outside the field and used to the standard SI notation 
of powers of 10.  Nope. They're inside the industry, used the 
powers-of-2 for decades, then suddenly decided to modify that 
meaning, as it served their marketing purposes.


The SI meaning was first proposed in the 1920s, so far as I can tell.  
Our entire history of special usage took place while the SI definition 
was in place.  We simply mis-used it.  There was at the time no prefix 
for what we actually wanted (not giga then, but mega), so we borrowed 
and repurposed mega.


Doesn't matter whether the original meaning of K/M/G was a 
power-of-10.  What matters is internal usage in the industry.  And that 
has been consistent with powers-of-2 for 40+ years.  There has been NO 
outside understanding that GB = 1 billion bytes until the Storage 
Industry decided it wanted it that way.  That's pretty much the 
definition of distorted advertising.


The issue here is getting what you paid for.  Changing the meaning of a 
well-understood term to be something that NO ONE else has used in that 
context is pretty much the definition of false advertising.


Put it another way:  for all those folks in the UK, how would you like 
to buy a Hundredweight (cwt) of something, but only get 100 lbs actually 
delivered?  The UK (Imperial) cwt = 112 lbs, while the US cwt = 100 
lbs.   Having some fine print on the package that said cwt=100lbs isn't 
going to fly with the British Advertising Board.  So why should we allow 
the fine print of 1 GB = 1 billion bytes?  It's the same redefinition of a common term to 
confuse and distort.



I know what you mean about the disk manufacturers changing.  And I'm 
sure they did it because it made their disks sound bigger for free, 
and that's clearly a marketing choice, and yes, it creates the problem 
that when the software reports the size it often doesn't agree with 
the manufacturer size.  I just can't get up a good head of steam about 
this when they're using the prefix correctly and we're not, though.


Problem is, they're NOT using it correctly.  Language is domain-specific 
- that is, terms have context.  A word can mean completely different 
things in different contexts, and it's not correct to say "X = the true 
meaning" for a given word.  In this case, historical usage PLUS /actual/ 
implementation usage indicates that K/M/G/T are powers of 2.  In our 
context of computing, they've meant powers-of-2.  It's also disingenuous 
for them to argue that consumers (i.e. non-technical people) didn't 
understand the usage of powers-of-2.  To effectively argue that, they'd 
have to have made the switch around the time that mass-consumer 
usage/retailing of computing was happening, which was (at best) 1990.  
Oops. 15 years later it isn't rational to argue that consumers don't 
understand the technical usage of the term.


Bottom line is that it's a advertising scam. Promising one thing, and 
delivering another. That's what the truth-in-advertising laws are for.


--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-15 Thread Michael Hassey
Sorry if this is too basic -

So I have a single zpool in addition to the rpool, called xpool.

NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
rpool   136G   109G  27.5G  79%  ONLINE  -
xpool   408G   171G   237G  42%  ONLINE  -

I have 408 GB in the pool and am using 171 GB, leaving me 237 GB. 

The pool is built up as;

  pool: xpool
 state: ONLINE
 scrub: none requested
config:

NAME      STATE READ WRITE CKSUM
xpool     ONLINE   0 0 0
  raidz2  ONLINE   0 0 0
c8t1d0  ONLINE   0 0 0
c8t2d0  ONLINE   0 0 0
c8t3d0  ONLINE   0 0 0

errors: No known data errors


But - and here is the question -

Creating file systems on it, the file systems in play report only 76GB of 
space free

SNIP FROM ZFS LIST

xpool/zones/logserver/ROOT/zbe 975M  76.4G   975M  legacy
xpool/zones/openxsrvr 2.22G  76.4G  21.9K  /export/zones/openxsrvr
xpool/zones/openxsrvr/ROOT2.22G  76.4G  18.9K  legacy
xpool/zones/openxsrvr/ROOT/zbe2.22G  76.4G  2.22G  legacy
xpool/zones/puggles241M  76.4G  21.9K  /export/zones/puggles
xpool/zones/puggles/ROOT   241M  76.4G  18.9K  legacy
xpool/zones/puggles/ROOT/zbe   241M  76.4G   241M  legacy
xpool/zones/reposerver 299M  76.4G  21.9K  /export/zones/reposerver


So my question is: where is the space from xpool being used? Or is it?


Thanks for reading.

Mike.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-15 Thread Cindy Swearingen

Hi Michael,

For a RAIDZ pool, the zpool list command identifies the inflated space
for the storage pool, which is the physical available space without an
accounting for redundancy overhead.

The zfs list command identifies how much actual pool space is available
to the file systems.

See the example of a RAIDZ-2 pool created below with three 44 GB disks.
The total pool capacity reported by zpool list is 134 GB. The amount of
pool space that is available to the file systems is 43.8 GB due to
RAIDZ-2 redundancy overhead.

See this FAQ section for more information.

http://hub.opensolaris.org/bin/view/Community+Group+zfs/faq#HZFSAdministrationQuestions

Why doesn't the space that is reported by the zpool list command and the 
zfs list command match?


Although this site is dog-slow for me today...

Thanks,

Cindy

# zpool create xpool raidz2 c3t40d0 c3t40d1 c3t40d2
# zpool list xpool
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
xpool   134G   234K   134G   0%  ONLINE  -
# zfs list xpool
NAME    USED   AVAIL  REFER  MOUNTPOINT
xpool  73.2K  43.8G  20.9K  /xpool
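
As a rough cross-check, a minimal Python sketch of the raidz2 arithmetic (the
44.7 GB per-disk size and the small formatting overhead are assumptions for
illustration, not exact ZFS numbers):

# Minimal sketch: rough expected sizes for a raidz2 vdev.
# zpool list reports roughly n_disks * disk_size ("inflated" space);
# zfs list reports roughly (n_disks - 2) * disk_size, minus some overhead.
n_disks, disk_gb = 3, 44.7
pool_gb = n_disks * disk_gb            # ~134, matches zpool list above
usable_gb = (n_disks - 2) * disk_gb    # ~45, close to the 43.8G from zfs list
print(f"zpool list ~ {pool_gb:.0f} GB, zfs list ~ {usable_gb:.0f} GB")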


On 03/15/10 08:38, Michael Hassey wrote:

Sorry if this is too basic -

So I have a single zpool in addition to the rpool, called xpool.

NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
rpool   136G   109G  27.5G  79%  ONLINE  -
xpool   408G   171G   237G  42%  ONLINE  -

I have 408 GB in the pool and am using 171 GB, leaving me 237 GB. 


The pool is built up as;

  pool: xpool
 state: ONLINE
 scrub: none requested
config:

NAME      STATE READ WRITE CKSUM
xpool     ONLINE   0 0 0
  raidz2  ONLINE   0 0 0
c8t1d0  ONLINE   0 0 0
c8t2d0  ONLINE   0 0 0
c8t3d0  ONLINE   0 0 0

errors: No known data errors


But - and here is the question -

Creating file systems on it, the file systems in play report only 76GB of 
space free

SNIP FROM ZFS LIST

xpool/zones/logserver/ROOT/zbe 975M  76.4G   975M  legacy
xpool/zones/openxsrvr 2.22G  76.4G  21.9K  /export/zones/openxsrvr
xpool/zones/openxsrvr/ROOT2.22G  76.4G  18.9K  legacy
xpool/zones/openxsrvr/ROOT/zbe2.22G  76.4G  2.22G  legacy
xpool/zones/puggles241M  76.4G  21.9K  /export/zones/puggles
xpool/zones/puggles/ROOT   241M  76.4G  18.9K  legacy
xpool/zones/puggles/ROOT/zbe   241M  76.4G   241M  legacy
xpool/zones/reposerver 299M  76.4G  21.9K  /export/zones/reposerver


So my question is, where is the space from xpool being used? or is it?


Thanks for reading.

Mike.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-15 Thread Tonmaus
Hi Cindy,
trying to reproduce this 

 For a RAIDZ pool, the zpool list command identifies
 the inflated space
 for the storage pool, which is the physical available
 space without an
 accounting for redundancy overhead.
 
 The zfs list command identifies how much actual pool
 space is available
 to the file systems.

I am lacking 1 TB on my pool:

u...@filemeister:~$ zpool list daten
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
daten    10T  3,71T  6,29T  37%  1.00x  ONLINE  -
u...@filemeister:~$ zpool status daten
  pool: daten
 state: ONLINE
 scrub: none requested
config:

NAME  STATE READ WRITE CKSUM
daten ONLINE   0 0 0
  raidz2-0  ONLINE   0 0 0
c10t2d0   ONLINE   0 0 0
c10t3d0   ONLINE   0 0 0
c10t4d0   ONLINE   0 0 0
c10t5d0   ONLINE   0 0 0
c10t6d0   ONLINE   0 0 0
c10t7d0   ONLINE   0 0 0
c10t8d0   ONLINE   0 0 0
c10t9d0   ONLINE   0 0 0
c11t18d0  ONLINE   0 0 0
c11t19d0  ONLINE   0 0 0
c11t20d0  ONLINE   0 0 0
spares
  c11t21d0  AVAIL

errors: No known data errors
u...@filemeister:~$ zfs list daten
NAME    USED  AVAIL  REFER  MOUNTPOINT
daten  3,01T  4,98T   110M  /daten

I am counting 11 disks 1 TB each in a raidz2 pool. This is 11 TB gross 
capacity, and 9 TB net. Zpool is however stating 10 TB and zfs is stating 8TB. 
The difference between net and gross is correct, but where is the capacity from 
the 11th disk going?

Regards,

Tonmaus
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-15 Thread Carson Gaspar

Tonmaus wrote:


I am lacking 1 TB on my pool:

u...@filemeister:~$ zpool list daten
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
daten    10T  3,71T  6,29T  37%  1.00x  ONLINE  -
u...@filemeister:~$ zpool status daten
  pool: daten
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        daten         ONLINE       0     0     0
          raidz2-0    ONLINE       0     0     0
            c10t2d0   ONLINE       0     0     0
            c10t3d0   ONLINE       0     0     0
            c10t4d0   ONLINE       0     0     0
            c10t5d0   ONLINE       0     0     0
            c10t6d0   ONLINE       0     0     0
            c10t7d0   ONLINE       0     0     0
            c10t8d0   ONLINE       0     0     0
            c10t9d0   ONLINE       0     0     0
            c11t18d0  ONLINE       0     0     0
            c11t19d0  ONLINE       0     0     0
            c11t20d0  ONLINE       0     0     0
        spares
          c11t21d0    AVAIL

errors: No known data errors
u...@filemeister:~$ zfs list daten
NAME    USED  AVAIL  REFER  MOUNTPOINT
daten  3,01T  4,98T   110M  /daten

I am counting 11 disks 1 TB each in a raidz2 pool. This is 11 TB
gross capacity, and 9 TB net. Zpool is however stating 10 TB and zfs
is stating 8TB. The difference between net and gross is correct, but
where is the capacity from the 11th disk going?


My guess is unit conversion and rounding. Your pool has 11 base 10 TB, 
which is 10.2445 base 2 TiB.


Likewise your fs has 9 base 10 TB, which is 8.3819 base 2 TiB.

--
Carson
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-15 Thread Erik Trimble
On Mon, 2010-03-15 at 15:03 -0700, Tonmaus wrote:
 Hi Cindy,
 trying to reproduce this 
 
  For a RAIDZ pool, the zpool list command identifies
  the inflated space
  for the storage pool, which is the physical available
  space without an
  accounting for redundancy overhead.
  
  The zfs list command identifies how much actual pool
  space is available
  to the file systems.
 
 I am lacking 1 TB on my pool:
 
 u...@filemeister:~$ zpool list daten
 NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
 daten    10T  3,71T  6,29T  37%  1.00x  ONLINE  -
 u...@filemeister:~$ zpool status daten
   pool: daten
  state: ONLINE
  scrub: none requested
 config:
 
 NAME  STATE READ WRITE CKSUM
 daten ONLINE   0 0 0
   raidz2-0  ONLINE   0 0 0
 c10t2d0   ONLINE   0 0 0
 c10t3d0   ONLINE   0 0 0
 c10t4d0   ONLINE   0 0 0
 c10t5d0   ONLINE   0 0 0
 c10t6d0   ONLINE   0 0 0
 c10t7d0   ONLINE   0 0 0
 c10t8d0   ONLINE   0 0 0
 c10t9d0   ONLINE   0 0 0
 c11t18d0  ONLINE   0 0 0
 c11t19d0  ONLINE   0 0 0
 c11t20d0  ONLINE   0 0 0
 spares
   c11t21d0  AVAIL
 
 errors: No known data errors
 u...@filemeister:~$ zfs list daten
 NAME    USED  AVAIL  REFER  MOUNTPOINT
 daten  3,01T  4,98T   110M  /daten
 
 I am counting 11 disks 1 TB each in a raidz2 pool. This is 11 TB gross 
 capacity, and 9 TB net. Zpool is however stating 10 TB and zfs is stating 
 8TB. The difference between net and gross is correct, but where is the 
 capacity from the 11th disk going?
 
 Regards,
 
 Tonmaus

1TB disks aren't a terabyte.

Remember, the storage industry uses powers of 10, not 2.  It's
annoying.

For each GB, you lose about 7% in the actual space computation. For each TB, it's
about 9%. So, your 1TB disk is actually about 931 GB. 

'zfs list' is going to report in actual powers-of-2, just like df. 


In my case, I have a 12 x 1TB configuration, and zpool list / zfs list show:


# zpool list
NAME        SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
array2540  10.9T  5.46T  5.41T   50%  ONLINE  -

Likewise:

# zfs list
NAME        USED   AVAIL  REFER  MOUNTPOINT
array2540  4.53T  4.34T  80.4M  /data


So, here's the math:

1 storage TB = 1e12 / (1024^3) = 931 actual GB

931 GB x 12 = 11,172 GB
but, 1TB = 1024 GB
so:  931 GB x 12 / (1024) = 10.9TB.


Quick Math: 1 TB of advertised space = 0.91 TB of real space
1 GB of advertised space = 0.93 GB of real space
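
The same arithmetic as a minimal Python sketch (plain unit conversion; nothing
ZFS-specific is assumed):

# Minimal sketch of the advertised-vs-real conversion above.
GIB, TIB = 2 ** 30, 2 ** 40
per_disk_gib = 1e12 / GIB             # ~931 "GB" per advertised-1TB disk
twelve_disks_tib = 12 * 1e12 / TIB    # ~10.9 "TB" for the 12-disk pool
print(f"{per_disk_gib:.0f} GiB per disk, {twelve_disks_tib:.1f} TiB total")
print(f"advertised->real factors: TB {1e12 / TIB:.2f}, GB {1e9 / GIB:.2f}")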





-- 
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-15 Thread Erik Trimble
On Mon, 2010-03-15 at 15:40 -0700, Carson Gaspar wrote:
 Tonmaus wrote:
 
  I am lacking 1 TB on my pool:
  
  u...@filemeister:~$ zpool list daten
  NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
  daten    10T  3,71T  6,29T  37%  1.00x  ONLINE  -
  u...@filemeister:~$ zpool status daten
    pool: daten
   state: ONLINE
   scrub: none requested
  config:

          NAME          STATE     READ WRITE CKSUM
          daten         ONLINE       0     0     0
            raidz2-0    ONLINE       0     0     0
              c10t2d0   ONLINE       0     0     0
              c10t3d0   ONLINE       0     0     0
              c10t4d0   ONLINE       0     0     0
              c10t5d0   ONLINE       0     0     0
              c10t6d0   ONLINE       0     0     0
              c10t7d0   ONLINE       0     0     0
              c10t8d0   ONLINE       0     0     0
              c10t9d0   ONLINE       0     0     0
              c11t18d0  ONLINE       0     0     0
              c11t19d0  ONLINE       0     0     0
              c11t20d0  ONLINE       0     0     0
          spares
            c11t21d0    AVAIL

  errors: No known data errors
  u...@filemeister:~$ zfs list daten
  NAME    USED  AVAIL  REFER  MOUNTPOINT
  daten  3,01T  4,98T   110M  /daten
  
  I am counting 11 disks 1 TB each in a raidz2 pool. This is 11 TB
  gross capacity, and 9 TB net. Zpool is however stating 10 TB and zfs
  is stating 8TB. The difference between net and gross is correct, but
  where is the capacity from the 11th disk going?
 
 My guess is unit conversion and rounding. Your pool has 11 base 10 TB, 
 which is 10.2445 base 2 TiB.
 
 Likewise your fs has 9 base 10 TB, which is 8.3819 base 2 TiB.

Not quite.  

11 x 10^12 =~ 10.004 x (1024^4).

So, the 'zpool list' is right on, at 10T available.

For the 'zfs list', remember there is a slight overhead for filesystem
formatting. 

So, instead of 

9 x 10^12 =~ 8.185 x (1024^4)

it shows 7.99TB usable. The roughly 200GB is the overhead. (or, about
3%).
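
A minimal Python sketch of that check (the roughly 2-3% formatting overhead
here is read off the numbers above, not from any ZFS formula):

# Minimal sketch: verify the numbers quoted above.
TIB = 2 ** 40
pool_tib = 11 * 10 ** 12 / TIB     # ~10.004 -> zpool list shows "10T"
ideal_tib = 9 * 10 ** 12 / TIB     # ~8.185  -> usable space before overhead
shown_tib = 3.01 + 4.98            # USED + AVAIL from zfs list = 7.99
print(f"overhead: {ideal_tib - shown_tib:.2f} TiB "
      f"({(ideal_tib - shown_tib) / ideal_tib:.1%})")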




-- 
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-15 Thread Tonmaus
 My guess is unit conversion and rounding. Your pool
  has 11 base 10 TB, 
  which is 10.2445 base 2 TiB.
  
 Likewise your fs has 9 base 10 TB, which is 8.3819
  base 2 TiB.
 Not quite.  
 
 11 x 10^12 =~ 10.004 x (1024^4).
 
 So, the 'zpool list' is right on, at 10T available.

Duh! I completely forgot about this. Thanks for the heads-up.

Tonmaus
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-15 Thread Carson Gaspar

Someone wrote (I haven't seen the mail, only the unattributed quote):

My guess is unit conversion and rounding. Your pool
 has 11 base 10 TB, 
 which is 10.2445 base 2 TiB.
 
Likewise your fs has 9 base 10 TB, which is 8.3819

 base 2 TiB.


Not quite.  


11 x 10^12 =~ 10.004 x (1024^4).

So, the 'zpool list' is right on, at 10T available.


Duh, I was doing GiB math (y = x * 10^9 / 2^30), not TiB math (y = x * 
10^12 / 2^40).


Thanks for the correction.

--
Carson
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss