Re: [zfs-discuss] Disk usage

2009-03-17 Thread Michael Ramchand

Grant Lowe wrote:

Hey all,

I have a question/puzzle with zfs.  See the following:

bash-3.00# df -h | grep d25 ; zfs list | grep d25

FILESYSTEM            SIZE  USED  AVAIL  CAPACITY  MOUNTED ON
r12_data/d25          659G   40G    63G       39%  /opt/d25/oakwc12
df -h says the d25 file system is 659GB, with 40GB used and 63GB available?
r12_data/d24           42G   40G   2.1G       95%  /opt/d24/oakwcr12

NAME           USED  AVAIL  REFER  MOUNTPOINT
r12_data/d25   760K  62.7G  39.9G  /opt/d25/oakwc12
zfs list says the d25 file system has 63GB available?
r12_data/d24  39.9G  2.14G  39.9G  /opt/d24/oakwcr12


Shouldn't the new filesystem (d25) size be what the clone was allocated?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  

Hi Grant,

We'd need more info than that to figure out what's actually going on.

Is d25 a clone of something? If so, what? Can we see the specs of that as
well?

Does d25 have any reservations or a quota?

What does zfs list of r12_data show?

Do you have snapshots? zfs list -t all will show you them.

Finally, the clone will only be the size of its delta from the source.
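For instance, with purely illustrative names, a clone starts out being charged
almost nothing:

# zfs snapshot pool/source@snap
# zfs clone pool/source@snap pool/clone
# zfs list -o name,used,refer pool/clone

Right after creation, USED on the clone is just a few KB of metadata even
though REFER matches the origin snapshot; USED only grows as the clone
diverges from the source.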

HTH


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disk usage

2009-03-17 Thread Grant Lowe

Hi Mike,

Yes, d25 is a clone of d24. Here are some data points about it:

bash-3.00# zfs get reservation r12_data/d25
NAME  PROPERTY VALUE SOURCE
r12_data/d25  reservation  none  default
bash-3.00# zfs get quota r12_data/d25
NAME  PROPERTY  VALUE SOURCE
r12_data/d25  quota none  default
bash-3.00#
bash-3.00# zfs list r12_data
NAME   USED  AVAIL  REFER  MOUNTPOINT
r12_data   596G  62.7G  24.5K  none
bash-3.00#
bash-3.00# zfs list -t snapshot r12_data/d...@a
NAME USED  AVAIL  REFER  MOUNTPOINT
r12_data/d...@a   904K  -  39.9G  -
bash-3.00#

Thanks for the response.  Did you need any more data points from me?



- Original Message 
From: Michael Ramchand mich...@ramchand.net
To: Grant Lowe gl...@sbcglobal.net
Cc: zfs-discuss@opensolaris.org
Sent: Tuesday, March 17, 2009 12:40:53 AM
Subject: Re: [zfs-discuss] Disk usage

Grant Lowe wrote:
 Hey all,

 I have a question/puzzle with zfs.  See the following:

 bash-3.00# df -h | grep d25 ; zfs list | grep d25

 FILESYSTEM            SIZE  USED  AVAIL  CAPACITY  MOUNTED ON
 r12_data/d25          659G   40G    63G       39%  /opt/d25/oakwc12
 df -h says the d25 file system is 659GB, with 40GB used and 63GB available?
 r12_data/d24           42G   40G   2.1G       95%  /opt/d24/oakwcr12

 NAME           USED  AVAIL  REFER  MOUNTPOINT
 r12_data/d25   760K  62.7G  39.9G  /opt/d25/oakwc12
 zfs list says the d25 file system has 63GB available?
 r12_data/d24  39.9G  2.14G  39.9G  /opt/d24/oakwcr12


 Shouldn't the new filesystem (d25) size be what the clone was allocated?
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  
Hi Grant,

We'd need more info than that to figure out what's actually going on.

Is d25 a clone of something? If so, what? Can we see the specs of that as
well?

Does d25 have any reservations or a quota?

What does zfs list of r12_data show?

Do you have snapshots? zfs list -t all will show you them.

Finally, the clone will only be the size of its delta from the source.

HTH

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disk usage

2009-03-17 Thread Michael Ramchand

Well, it is kinda confusing...

In short, df -h will always return the size of the WHOLE pool for size 
(unless you've set a quota on the dataset in which case it says that), 
the amount of space that particular dataset is using for used, and the 
total amount of free space on the WHOLE pool for avail (unless you've 
got a quota or reservation set).


So in your original mail, 659G is the size of the DATA on the r12_data
pool.  zfs list r12_data says you are using 596G. I think this is the
RAW capacity used, and I reckon you are using compression. (zfs get
compressratio r12_data will give you something like 1.1).


However, df -h of r12_data/d24 should show the same 1st and 3rd fields
as d25, but it doesn't. (Could you re-run?)


Same goes for the zfs list commands.

Could you try doing zfs list -o space to get a fuller breakdown of how 
the space is being used.
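On builds that have it, the output looks something like this (the values
below are only placeholders):

# zfs list -o space r12_data/d25
NAME          AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
r12_data/d25      -     -         -       -              -          -

which splits USED into what the dataset itself, its snapshots, its
refreservation and its children are each charging.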


Mike



Grant Lowe wrote:

Hi Mike,

Yes, d25 is a clone of d24. Here are some data points about it:

bash-3.00# zfs get reservation r12_data/d25
NAME  PROPERTY VALUE SOURCE
r12_data/d25  reservation  none  default
bash-3.00# zfs get quota r12_data/d25
NAME  PROPERTY  VALUE SOURCE
r12_data/d25  quota none  default
bash-3.00#
bash-3.00# zfs list r12_data
NAME   USED  AVAIL  REFER  MOUNTPOINT
r12_data   596G  62.7G  24.5K  none
bash-3.00#
bash-3.00# zfs list -t snapshot r12_data/d...@a
NAME USED  AVAIL  REFER  MOUNTPOINT
r12_data/d...@a   904K  -  39.9G  -
bash-3.00#

Thanks for the response.  Did you need any more data points from me?



- Original Message 
From: Michael Ramchand mich...@ramchand.net
To: Grant Lowe gl...@sbcglobal.net
Cc: zfs-discuss@opensolaris.org
Sent: Tuesday, March 17, 2009 12:40:53 AM
Subject: Re: [zfs-discuss] Disk usage

Grant Lowe wrote:
  

Hey all,

I have a question/puzzle with zfs.  See the following:

bash-3.00# df -h | grep d25 ; zfs list | grep d25

 FILESYSTEM            SIZE  USED  AVAIL  CAPACITY  MOUNTED ON
 r12_data/d25          659G   40G    63G       39%  /opt/d25/oakwc12
 df -h says the d25 file system is 659GB, with 40GB used and 63GB available?
 r12_data/d24           42G   40G   2.1G       95%  /opt/d24/oakwcr12

 NAME           USED  AVAIL  REFER  MOUNTPOINT
 r12_data/d25   760K  62.7G  39.9G  /opt/d25/oakwc12
 zfs list says the d25 file system has 63GB available?
 r12_data/d24  39.9G  2.14G  39.9G  /opt/d24/oakwcr12


Shouldn't the new filesystem (d25) size be what the clone was allocated?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 


Hi Grant,

We'd need more info than that to figure out what's actually going on.

Is d25 a clone of something? If so, what? Can we see the specs of that as
well?

Does d25 have any reservations or a quota?

What does zfs list of r12_data show?

Do you have snapshots? zfs list -t all will show you them.

Finally, the clone will only be the size of its delta from the source.

HTH

  




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disk usage

2009-03-17 Thread Grant Lowe

Hi Mike,

Yes, that does help things.  Thanks.

bash-3.00# zfs get compression r12_data/d25
NAME  PROPERTY VALUE SOURCE
r12_data/d25  compression  off   default
bash-3.00# zfs get compression r12_data/d24
NAME  PROPERTY VALUE SOURCE
r12_data/d24  compression  onlocal
bash-3.00#

bash-3.00# df -h | grep d24
r12_data/d24           42G   40G   2.1G       95%  /opt/d24/oakwcr12
bash-3.00# df -h | grep d25
r12_data/d25          659G   40G    63G       39%  /opt/d25/oakwc12
bash-3.00#

When you asked me to do zfs list -o space, what option did you mean?  space 
isn't an option.




- Original Message 
From: Michael Ramchand mich...@ramchand.net
To: Grant Lowe gl...@sbcglobal.net
Cc: zfs-discuss@opensolaris.org
Sent: Tuesday, March 17, 2009 8:32:49 AM
Subject: Re: [zfs-discuss] Disk usage

Well, it is kinda confusing...

In short, df -h will always return the size of the WHOLE pool for size 
(unless you've set a quota on the dataset in which case it says that), 
the amount of space that particular dataset is using for used, and the 
total amount of free space on the WHOLE pool for avail (unless you've 
got a quota or reservation set).

So in your original mail, 659G is the size of the DATA on the r12_data
pool.  zfs list r12_data says you are using 596G. I think this is the
RAW capacity used, and I reckon you are using compression. (zfs get
compressratio r12_data will give you something like 1.1).

However, df -h of r12_data/d24 should show the same 1st and 3rd fields
as d25, but it doesn't. (Could you re-run?)

Same goes for the zfs list commands.

Could you try doing zfs list -o space to get a fuller breakdown of how 
the space is being used.

Mike



Grant Lowe wrote:
 Hi Mike,

 Yes, d25 is a clone of d24. Here are some data points about it:

 bash-3.00# zfs get reservation r12_data/d25
 NAME  PROPERTY VALUE SOURCE
 r12_data/d25  reservation  none  default
 bash-3.00# zfs get quota r12_data/d25
 NAME  PROPERTY  VALUE SOURCE
 r12_data/d25  quota none  default
 bash-3.00#
 bash-3.00# zfs list r12_data
 NAME   USED  AVAIL  REFER  MOUNTPOINT
 r12_data   596G  62.7G  24.5K  none
 bash-3.00#
 bash-3.00# zfs list -t snapshot r12_data/d...@a
 NAME USED  AVAIL  REFER  MOUNTPOINT
 r12_data/d...@a   904K  -  39.9G  -
 bash-3.00#

 Thanks for the response.  Did you need any more data points from me?



 - Original Message 
 From: Michael Ramchand mich...@ramchand.net
 To: Grant Lowe gl...@sbcglobal.net
 Cc: zfs-discuss@opensolaris.org
 Sent: Tuesday, March 17, 2009 12:40:53 AM
 Subject: Re: [zfs-discuss] Disk usage

 Grant Lowe wrote:
  
 Hey all,

 I have a question/puzzle with zfs.  See the following:

 bash-3.00# df -h | grep d25 ; zfs list | grep d25

 FILESYSTEM            SIZE  USED  AVAIL  CAPACITY  MOUNTED ON
 r12_data/d25          659G   40G    63G       39%  /opt/d25/oakwc12
 df -h says the d25 file system is 659GB, with 40GB used and 63GB available?
 r12_data/d24           42G   40G   2.1G       95%  /opt/d24/oakwcr12

 NAME           USED  AVAIL  REFER  MOUNTPOINT
 r12_data/d25   760K  62.7G  39.9G  /opt/d25/oakwc12
 zfs list says the d25 file system has 63GB available?
 r12_data/d24  39.9G  2.14G  39.9G  /opt/d24/oakwcr12


 Shouldn't the new filesystem (d25) size be what the clone was allocated?
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  

 Hi Grant,

 We'd need more info than that to figure out what's actually going on.

 Is d25 a clone of something? If so, what? Can we see the specs of that as
 well?

 Does d25 have any reservations or a quota?

 What does zfs list of r12_data show?

 Do you have snapshots? zfs list -t all will show you them.

 Finally, the clone will only be the size of its delta from the source.

 HTH

  
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disk usage

2009-03-17 Thread Grant Lowe

If you meant available, here's the output of that:

bash-3.00# zfs list -o available r12_data
AVAIL
62.7G
bash-3.00# zfs list -o available r12_data/d24
AVAIL
2.14G
bash-3.00# zfs list -o available r12_data/d25
AVAIL
62.7G
bash-3.00#




- Original Message 
From: Michael Ramchand mich...@ramchand.net
To: Grant Lowe gl...@sbcglobal.net
Cc: zfs-discuss@opensolaris.org
Sent: Tuesday, March 17, 2009 8:32:49 AM
Subject: Re: [zfs-discuss] Disk usage

Well, it is kinda confusing...

In short, df -h will always return the size of the WHOLE pool for size 
(unless you've set a quota on the dataset in which case it says that), 
the amount of space that particular dataset is using for used, and the 
total amount of free space on the WHOLE pool for avail (unless you've 
got a quota or reservation set).

So in your original mail, 659G is the size of the DATA on the r12_data
pool.  zfs list r12_data says you are using 596G. I think this is the
RAW capacity used, and I reckon you are using compression. (zfs get
compressratio r12_data will give you something like 1.1).

However, df -h of r12_data/d24 should show the same 1st and 3rd fields
as d25, but it doesn't. (Could you re-run?)

Same goes for the zfs list commands.

Could you try doing zfs list -o space to get a fuller breakdown of how 
the space is being used.

Mike



Grant Lowe wrote:
 Hi Mike,

 Yes, d25 is a clone of d24. Here are some data points about it:

 bash-3.00# zfs get reservation r12_data/d25
 NAME  PROPERTY VALUE SOURCE
 r12_data/d25  reservation  none  default
 bash-3.00# zfs get quota r12_data/d25
 NAME  PROPERTY  VALUE SOURCE
 r12_data/d25  quota none  default
 bash-3.00#
 bash-3.00# zfs list r12_data
 NAME   USED  AVAIL  REFER  MOUNTPOINT
 r12_data   596G  62.7G  24.5K  none
 bash-3.00#
 bash-3.00# zfs list -t snapshot r12_data/d...@a
 NAME USED  AVAIL  REFER  MOUNTPOINT
 r12_data/d...@a   904K  -  39.9G  -
 bash-3.00#

 Thanks for the response.  Did you need any more data points from me?



 - Original Message 
 From: Michael Ramchand mich...@ramchand.net
 To: Grant Lowe gl...@sbcglobal.net
 Cc: zfs-discuss@opensolaris.org
 Sent: Tuesday, March 17, 2009 12:40:53 AM
 Subject: Re: [zfs-discuss] Disk usage

 Grant Lowe wrote:
  
 Hey all,

 I have a question/puzzle with zfs.  See the following:

 bash-3.00# df -h | grep d25 ; zfs list | grep d25

 FILESYSTEM            SIZE  USED  AVAIL  CAPACITY  MOUNTED ON
 r12_data/d25          659G   40G    63G       39%  /opt/d25/oakwc12
 df -h says the d25 file system is 659GB, with 40GB used and 63GB available?
 r12_data/d24           42G   40G   2.1G       95%  /opt/d24/oakwcr12

 NAME           USED  AVAIL  REFER  MOUNTPOINT
 r12_data/d25   760K  62.7G  39.9G  /opt/d25/oakwc12
 zfs list says the d25 file system has 63GB available?
 r12_data/d24  39.9G  2.14G  39.9G  /opt/d24/oakwcr12


 Shouldn't the new filesystem (d25) size be what the clone was allocated?
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  

 Hi Grant,

 We'd need more info than that to figure out what's actually going on.

 Is d25 a clone of something? If so, what? Can we see the specs of that as
 well?

 Does d25 have any reservations or a quota?

 What does zfs list of r12_data show?

 Do you have snapshots? zfs list -t all will show you them.

 Finally, the clone will only be the size of its delta from the source.

 HTH

  
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disk usage

2009-03-17 Thread Michael Ramchand

Sorry, no, I assume you are on Sol 10.

...or the value "space" to display space usage properties on file
systems and volumes.  This is a shortcut for
-o name,avail,used,usedsnap,usedds,usedrefreserv,usedchild
-t filesystem,volume.


Grant Lowe wrote:

If you meant available, here's the output of that:

bash-3.00# zfs list -o available r12_data
AVAIL
62.7G
bash-3.00# zfs list -o available r12_data/d24
AVAIL
2.14G
bash-3.00# zfs list -o available r12_data/d25
AVAIL
62.7G
bash-3.00#




- Original Message 
From: Michael Ramchand mich...@ramchand.net
To: Grant Lowe gl...@sbcglobal.net
Cc: zfs-discuss@opensolaris.org
Sent: Tuesday, March 17, 2009 8:32:49 AM
Subject: Re: [zfs-discuss] Disk usage

Well, it is kinda confusing...

In short, df -h will always return the size of the WHOLE pool for size 
(unless you've set a quota on the dataset in which case it says that), 
the amount of space that particular dataset is using for used, and the 
total amount of free space on the WHOLE pool for avail (unless you've 
got a quota or reservation set).


So in your original mail, 659G is the size of the DATA on the r12_data
pool.  zfs list r12_data says you are using 596G. I think this is the
RAW capacity used, and I reckon you are using compression. (zfs get
compressratio r12_data will give you something like 1.1).


However, df -h of r12_data/d24 should show the same 1st and 3rd fields
as d25, but it doesn't. (Could you re-run?)


Same goes for the zfs list commands.

Could you try doing zfs list -o space to get a fuller breakdown of how 
the space is being used.


Mike



Grant Lowe wrote:
  

Hi Mike,

Yes, d25 is a clone of d24. Here are some data points about it:

bash-3.00# zfs get reservation r12_data/d25
NAME  PROPERTY VALUE SOURCE
r12_data/d25  reservation  none  default
bash-3.00# zfs get quota r12_data/d25
NAME  PROPERTY  VALUE SOURCE
r12_data/d25  quota none  default
bash-3.00#
bash-3.00# zfs list r12_data
NAME   USED  AVAIL  REFER  MOUNTPOINT
r12_data   596G  62.7G  24.5K  none
bash-3.00#
bash-3.00# zfs list -t snapshot r12_data/d...@a
NAME USED  AVAIL  REFER  MOUNTPOINT
r12_data/d...@a   904K  -  39.9G  -
bash-3.00#

Thanks for the response.  Did you need any more data points from me?



- Original Message 
From: Michael Ramchand mich...@ramchand.net
To: Grant Lowe gl...@sbcglobal.net
Cc: zfs-discuss@opensolaris.org
Sent: Tuesday, March 17, 2009 12:40:53 AM
Subject: Re: [zfs-discuss] Disk usage

Grant Lowe wrote:
 


Hey all,

I have a question/puzzle with zfs.  See the following:

bash-3.00# df -h | grep d25 ; zfs list | grep d25

FILESYSTEM            SIZE  USED  AVAIL  CAPACITY  MOUNTED ON
r12_data/d25          659G   40G    63G       39%  /opt/d25/oakwc12
df -h says the d25 file system is 659GB, with 40GB used and 63GB available?
r12_data/d24           42G   40G   2.1G       95%  /opt/d24/oakwcr12

NAME           USED  AVAIL  REFER  MOUNTPOINT
r12_data/d25   760K  62.7G  39.9G  /opt/d25/oakwc12
zfs list says the d25 file system has 63GB available?
r12_data/d24  39.9G  2.14G  39.9G  /opt/d24/oakwcr12


Shouldn't the new filesystem (d25) size be what the clone was allocated?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 
   
  

Hi Grant,

We'd need more info than that to figure out what's actually going on.

Is d25 a clone of something? If so, what? Can we see the specs of that as
well?

Does d25 have any reservations or a quota?

What does zfs list of r12_data show?

Do you have snapshots? zfs list -t all will show you them.

Finally, the clone will only be the size of its delta from the source.

HTH

 





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [cifs-discuss] CIFS accessing ZFS, workgroup mode, ACL problems

2009-03-17 Thread David Dyer-Bennet

On Mon, March 16, 2009 06:10, Tobs wrote:

 There's a share with this A=everybody@:full_set:fd:allow folder_name
 permission set, but it seems that people didn't get identified the right
 way.

 For example, it's not possible to start Portable Thunderbird from this CIFS
 share.

 Did you use the idmap command to configure Windows/UNIX mappings
 manually?

I believe that applies only to domain membership, not workgroups.  In any
case, I didn't do anything with it.

-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [cifs-discuss] CIFS accessing ZFS, workgroup mode, ACL problems

2009-03-17 Thread David Dyer-Bennet

On Sun, March 15, 2009 15:37, Ross wrote:
 Not sure if this is what you mean, but I always start CIFS shares by
 granting everybody full permissions, and then set the rest from Windows.
 I find that otherwise deny permissions cause all kinds of problems, since
 they're implemented differently on Windows.

Oh, is THAT what they were saying?  I saw many suggestions that setting
everything to full permissions would fix the problem, but no indication
that that was just an interim measure until they set the real permissions
from the Windows side.  Since it's obviously horrible for security, I was
ignoring it.
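(If I read those suggestions right, the starting point is an everyone-full-control
ACL with inheritance, something along the lines of

chmod A=everyone@:full_set:fd:allow /tank/myshare

with the path being whatever the share points at, and the per-user tightening
is then supposed to happen from the Windows security tab.)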

Hmmm; but then I read a post or email from somebody last night telling me
that setting ACLs from Windows wasn't currently supported?  And indeed my
experiments, while they claimed to succeed, didn't actually change the
permissions.

-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] AVS and ZFS demos - link broken?

2009-03-17 Thread Erast Benson
James,

there is also this demo:

http://www.nexenta.com/demos/auto-cdp.html

showing how AVS/ZFS is integrated in NexentaStor.

On Tue, 2009-03-17 at 10:25 -0600, James D. Rogers wrote:
 The links to the Part 1 and Part 2 demos on this page
 (http://www.opensolaris.org/os/project/avs/Demos/) appear to be
 broken.
 
  
 
 http://www.opensolaris.org/os/project/avs/Demos/AVS-ZFS-Demo-V1/ 
 
 http://www.opensolaris.org/os/project/avs/Demos/AVS-ZFS-Demo-V2/ 
 
  
 
 James D. Rogers
 
 NRA, GOA, DAD -- and I VOTE!
 
 2207 Meadowgreen Circle
 
 Franktown, CO 80116
 
  
 
 coyote_hunt...@msn.com
 
 303-688-0480
 
 303-885-7410 Cell (Working hours and when coyote huntin'!)
 
  
 
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] AVS and ZFS demos - link broken?

2009-03-17 Thread James D. Rogers
The links to the Part 1 and Part 2 demos on this page
(http://www.opensolaris.org/os/project/avs/Demos/) appear to be broken.

 

http://www.opensolaris.org/os/project/avs/Demos/AVS-ZFS-Demo-V1/ 

http://www.opensolaris.org/os/project/avs/Demos/AVS-ZFS-Demo-V2/ 

 

James D. Rogers

NRA, GOA, DAD -- and I VOTE!

2207 Meadowgreen Circle

Franktown, CO 80116

 

 mailto:coyote_hunt...@msn.com coyote_hunt...@msn.com

303-688-0480

303-885-7410 Cell (Working hours and when coyote huntin'!)

 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] How do I mirror zfs rpool, x4500?

2009-03-17 Thread Neal Pollack

I'm setting up a new X4500 Thumper, and noticed suggestions/blogs
for setting up two boot disks as a zfs rpool mirror during installation.
But I can't seem to find instructions/examples for how to do this using
google, the blogs, or the Sun docs for X4500.

Can anyone share some instructions for setting up the rpool mirror
of the boot disks during the Solaris Nevada (SXCE) install?

Thanks,

Neal

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How do I mirror zfs rpool, x4500?

2009-03-17 Thread Mark J Musante

On Tue, 17 Mar 2009, Neal Pollack wrote:

Can anyone share some instructions for setting up the rpool mirror of 
the boot disks during the Solaris Nevada (SXCE) install?


You'll need to use the text-based installer, and in there you choose the 
two bootable disks instead of just one.  They're automatically 
mirrored.



Regards,
markm
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How do I mirror zfs rpool, x4500?

2009-03-17 Thread Cindy . Swearingen

Neal,

You'll need to use the text-based initial install option.
The steps for configuring a ZFS root pool during an initial
install are covered here:

http://opensolaris.org/os/community/zfs/docs/

Page 114:

Example 4–1 Initial Installation of a Bootable ZFS Root File System

Step 3, you'll be presented with the disks to be selected as in previous 
releases. So, for example, to select the boot disks on the Thumper,

select both of them:

[x] c5t0d0
[x] c4t0d0
.
.
.


On our lab Thumper, they are c5t0 and c4t0.

Cindy

Neal Pollack wrote:

I'm setting up a new X4500 Thumper, and noticed suggestions/blogs
for setting up two boot disks as a zfs rpool mirror during installation.
But I can't seem to find instructions/examples for how to do this using
google, the blogs, or the Sun docs for X4500.

Can anyone share some instructions for setting up the rpool mirror
of the boot disks during the Solaris Nevada (SXCE) install?

Thanks,

Neal

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How do I mirror zfs rpool, x4500?

2009-03-17 Thread Toby Thain


On 17-Mar-09, at 3:32 PM, cindy.swearin...@sun.com wrote:


Neal,

You'll need to use the text-based initial install option.
The steps for configuring a ZFS root pool during an initial
install are covered here:

http://opensolaris.org/os/community/zfs/docs/

Page 114:

Example 4–1 Initial Installation of a Bootable ZFS Root File System

Step 3, you'll be presented with the disks to be selected as in  
previous releases. So, for example, to select the boot disks on the  
Thumper,

select both of them:


Right, but what if you didn't realise on that screen that you needed  
to select both to make a mirror? The wording isn't very explicit, in  
my opinion. Yesterday I did my first Solaris 10 ZFS root install and  
didn't interpret this screen correctly. I chose one disk, so I'm in the  
OP's situation and want to set up the mirror retrospectively.


I'm using an X2100. Unfortunately when I try to zpool attach, I get a  
Device busy error on the 2nd drive. But probably I'm making a n00b  
error.


--Toby



[x] c5t0d0
[x] c4t0d0
.
.
.


On our lab Thumper, they are c5t0 and c4t0.

Cindy

Neal Pollack wrote:

I'm setting up a new X4500 Thumper, and noticed suggestions/blogs
for setting up two boot disks as a zfs rpool mirror during  
installation.
But I can't seem to find instructions/examples for how to do this  
using

google, the blogs, or the Sun docs for X4500.
Can anyone share some instructions for setting up the rpool mirror
of the boot disks during the Solaris Nevada (SXCE) install?
Thanks,
Neal
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How do I mirror zfs rpool, x4500?

2009-03-17 Thread Bryan Allen
+--
| On 2009-03-17 16:13:27, Toby Thain wrote:
| 
| Right, but what if you didn't realise on that screen that you needed  
| to select both to make a mirror? The wording isn't very explicit, in  
| my opinion. Yesterday I did my first Solaris 10 ZFS root install and  
| didn't interpret this screen correctly. I chose one disk, so I'm in the  
| OP's situation and want to set up the mirror retrospectively.
| 
| I'm using an X2100. Unfortunately when I try to zpool attach, I get a  
| Device busy error on the 2nd drive. But probably I'm making a n00b  
| error.

Use format(1M) to ensure the second disk (c1t1d0) is formatted as
100% Solaris.

Then mirror the VTOC from the first (zfsroot) disk to the second:

# prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2
# zpool attach -f rpool c1t0d0s0 c1t1d0s0
# zpool status -v
-- 
bda
Cyberpunk is dead.  Long live cyberpunk.
http://mirrorshades.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How do I mirror zfs rpool, x4500?

2009-03-17 Thread Bryan Allen
+--
| On 2009-03-17 16:37:25, Mark J Musante wrote:
| 
| Then mirror the VTOC from the first (zfsroot) disk to the second:
| 
| # prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2
| # zpool attach -f rpool c1t0d0s0 c1t1d0s0
| # zpool status -v
| 
| And then you'll still need to run installgrub to put grub on the  
| mirror.  That's not yet automatically done.

Ah, yes, thanks.

installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0

Knew I forgot something. Got distracted by local boom.
-- 
bda
Cyberpunk is dead.  Long live cyberpunk.
http://mirrorshades.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Mounting zfs file systems

2009-03-17 Thread Grant Lowe

Another newbie question:

I have a new system with zfs. I create a directory:

bash-3.00# mkdir -p /opt/mis/oracle/data/db1

I do my zpool:

bash-3.00# zpool create -f oracle c2t5006016B306005AAd0 c2t5006016B306005AAd1 
c2t5006016B306005AAd3 c2t5006016B306005AAd4 c2t5006016B306005AAd5 
c2t5006016B306005AAd6 c2t5006016B306005AAd7 c2t5006016B306005AAd8 
c2t5006016B306005AAd9 c2t5006016B306005AAd10 c2t5006016B306005AAd11 
c2t5006016B306005AAd12 c2t5006016B306005AAd13 c2t5006016B306005AAd14 
c2t5006016B306005AAd15 c2t5006016B306005AAd16 c2t5006016B306005AAd17 
c2t5006016B306005AAd18 c2t5006016B306005AAd19
bash-3.00# zfs create oracle/prd_data
bash-3.00# zfs create -b 8192 -V 44Gb oracle/prd_data/db1

I'm trying to set a mountpoint.  But trying to mount it doesn't work.

bash-3.00# zfs list
NAME  USED  AVAIL  REFER  MOUNTPOINT
oracle   44.0G   653G  25.5K  /oracle
oracle/prd_data  44.0G   653G  24.5K  /oracle/prd_data
oracle/prd_data/db1  22.5K   697G  22.5K  -
bash-3.00# zfs set mountpoint=/opt/mis/oracle/data/db1 oracle/prd_data/db1
cannot set property for 'oracle/prd_data/db1': 'mountpoint' does not apply to 
datasets of this type
bash-3.00#

What's the correct syntax?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mounting zfs file systems

2009-03-17 Thread Grant Lowe

Ok, Cindy.  Thanks. I would like to have one big pool and divide it into 
separate file systems for an Oracle database.  What I had before was a separate 
pool for each file system.  So does it look like I have to go back to what I had 
before?



- Original Message 
From: cindy.swearin...@sun.com cindy.swearin...@sun.com
To: Grant Lowe gl...@sbcglobal.net
Cc: zfs-discuss@opensolaris.org
Sent: Tuesday, March 17, 2009 2:20:18 PM
Subject: Re: [zfs-discuss] Mounting zfs file systems

Grant,

If I'm following correctly, you can't mount a ZFS resource
outside of the pool in which the resource resides.

Is this a UFS directory, here:

# mkdir -p /opt/mis/oracle/data/db1

What are you trying to do?

Cindy

Grant Lowe wrote:
 Another newbie question:
 
 I have a new system with zfs. I create a directory:
 
 bash-3.00# mkdir -p /opt/mis/oracle/data/db1
 
 I do my zpool:
 
 bash-3.00# zpool create -f oracle c2t5006016B306005AAd0 c2t5006016B306005AAd1 
 c2t5006016B306005AAd3 c2t5006016B306005AAd4 c2t5006016B306005AAd5 
 c2t5006016B306005AAd6 c2t5006016B306005AAd7 c2t5006016B306005AAd8 
 c2t5006016B306005AAd9 c2t5006016B306005AAd10 c2t5006016B306005AAd11 
 c2t5006016B306005AAd12 c2t5006016B306005AAd13 c2t5006016B306005AAd14 
 c2t5006016B306005AAd15 c2t5006016B306005AAd16 c2t5006016B306005AAd17 
 c2t5006016B306005AAd18 c2t5006016B306005AAd19
 bash-3.00# zfs create oracle/prd_data
 bash-3.00# zfs create -b 8192 -V 44Gb oracle/prd_data/db1
 
 I'm trying to set a mountpoint.  But trying to mount it doesn't work.
 
 bash-3.00# zfs list
 NAME  USED  AVAIL  REFER  MOUNTPOINT
 oracle   44.0G   653G  25.5K  /oracle
 oracle/prd_data  44.0G   653G  24.5K  /oracle/prd_data
 oracle/prd_data/db1  22.5K   697G  22.5K  -
 bash-3.00# zfs set mountpoint=/opt/mis/oracle/data/db1 oracle/prd_data/db1
 cannot set property for 'oracle/prd_data/db1': 'mountpoint' does not apply to 
 datasets of this type
 bash-3.00#
 
 What's the correct syntax?
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mounting zfs file systems

2009-03-17 Thread Lori Alt

No, this is an incorrect diagnosis.  The problem is that by
using the -V option, you created a volume, not a file system.
That is, you created a raw device.  You could then newfs
a ufs file system within the volume, but that is almost certainly
not what you want.

Don't use -V when you create the oracle/prd_data/db1
dataset.  Then it will be a mountable file system.  You
will need to give it a mount point, however, by setting the
mountpoint property, since the default mountpoint won't
be what you want.
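Roughly, and adjusting names to taste:

# zfs create oracle/prd_data/db1
# zfs set mountpoint=/opt/mis/oracle/data/db1 oracle/prd_data/db1

(You would have to destroy the volume you already created under that name
first, e.g. zfs destroy oracle/prd_data/db1, assuming nothing is using it yet.)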

Lori


On 03/17/09 15:45, Grant Lowe wrote:

Ok, Cindy.  Thanks. I would like to have one big pool and divide it into 
separate file systems for an Oracle database.  What I had before was a separate 
pool for each file system.  So does it look like I have to go back to what I had 
before?



- Original Message 
From: cindy.swearin...@sun.com cindy.swearin...@sun.com
To: Grant Lowe gl...@sbcglobal.net
Cc: zfs-discuss@opensolaris.org
Sent: Tuesday, March 17, 2009 2:20:18 PM
Subject: Re: [zfs-discuss] Mounting zfs file systems

Grant,

If I'm following correctly, you can't mount a ZFS resource
outside of the pool from which the resource resides.

Is this a UFS directory, here:

# mkdir -p /opt/mis/oracle/data/db1

What are you trying to do?

Cindy

Grant Lowe wrote:
  

Another newbie question:

I have a new system with zfs. I create a directory:

bash-3.00# mkdir -p /opt/mis/oracle/data/db1

I do my zpool:

bash-3.00# zpool create -f oracle c2t5006016B306005AAd0 c2t5006016B306005AAd1 
c2t5006016B306005AAd3 c2t5006016B306005AAd4 c2t5006016B306005AAd5 
c2t5006016B306005AAd6 c2t5006016B306005AAd7 c2t5006016B306005AAd8 
c2t5006016B306005AAd9 c2t5006016B306005AAd10 c2t5006016B306005AAd11 
c2t5006016B306005AAd12 c2t5006016B306005AAd13 c2t5006016B306005AAd14 
c2t5006016B306005AAd15 c2t5006016B306005AAd16 c2t5006016B306005AAd17 
c2t5006016B306005AAd18 c2t5006016B306005AAd19
bash-3.00# zfs create oracle/prd_data
bash-3.00# zfs create -b 8192 -V 44Gb oracle/prd_data/db1

I'm trying to set a mountpoint.  But trying to mount it doesn't work.

bash-3.00# zfs list
NAME  USED  AVAIL  REFER  MOUNTPOINT
oracle   44.0G   653G  25.5K  /oracle
oracle/prd_data  44.0G   653G  24.5K  /oracle/prd_data
oracle/prd_data/db1  22.5K   697G  22.5K  -
bash-3.00# zfs set mountpoint=/opt/mis/oracle/data/db1 oracle/prd_data/db1
cannot set property for 'oracle/prd_data/db1': 'mountpoint' does not apply to 
datasets of this type
bash-3.00#

What's the correct syntax?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mounting zfs file systems

2009-03-17 Thread Chris Kirby

On Mar 17, 2009, at 4:45 PM, Grant Lowe wrote:


bash-3.00# zfs create -b 8192 -V 44Gb oracle/prd_data/db1

I'm trying to set a mountpoint.  But trying to mount it doesn't work.

bash-3.00# zfs list
NAME  USED  AVAIL  REFER  MOUNTPOINT
oracle   44.0G   653G  25.5K  /oracle
oracle/prd_data  44.0G   653G  24.5K  /oracle/prd_data
oracle/prd_data/db1  22.5K   697G  22.5K  -
bash-3.00# zfs set mountpoint=/opt/mis/oracle/data/db1 oracle/ 
prd_data/db1
cannot set property for 'oracle/prd_data/db1': 'mountpoint' does  
not apply to datasets of this type


The issue is, you can't set a mountpoint on a zvol; there's no filesystem
on there yet. Once you've created whatever (non-ZFS) filesystem on that
zvol, then you can either mount it manually or set up an entry in /etc/vfstab.
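For example (untested), if you wanted UFS on it:

# newfs /dev/zvol/rdsk/oracle/prd_data/db1
# mount /dev/zvol/dsk/oracle/prd_data/db1 /opt/mis/oracle/data/db1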

-Chris

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How do I mirror zfs rpool, x4500?

2009-03-17 Thread Neal Pollack

On 03/17/09 12:32 PM, cindy.swearin...@sun.com wrote:

Neal,

You'll need to use the text-based initial install option.
The steps for configuring a ZFS root pool during an initial
install are covered here:

http://opensolaris.org/os/community/zfs/docs/

Page 114:

Example 4–1 Initial Installation of a Bootable ZFS Root File System

Step 3, you'll be presented with the disks to be selected as in 
previous releases. So, for example, to select the boot disks on the 
Thumper,

select both of them:

[x] c5t0d0
[x] c4t0d0



Why have the controller numbers/mappings changed between Solaris 10 and
Solaris Nevada?   I just installed Solaris Nevada 110 to see what it 
would do.
Thank you, and I now understand that to find the disk name (like c5t0d0 
above) for physical slot 0 on an X4500, I can use  cfgadm | grep sata3/0

I also now understand that in the installer screens, I can select 2 
disks and they will become a mirrored root zpool.

What I do not understand is that on Solaris Nevada 110, the X4500 
Thumper physical disk slots 0 and 1 are labeled as controller 3 and not 
controller 5.  For example:


# cfgadm | grep sata3/0
sata3/0::dsk/c3t0d0disk connectedconfigured   ok
# cfgadm | grep sata3/4
sata3/4::dsk/c3t4d0disk connectedconfigured   ok
# uname -a
SunOS zcube-1 5.11 snv_110 i86pc i386 i86pc
#


Of course, that means I should stay away from all the X4500 and ZFS docs if
I run Solaris Nevada on an X4500?

Any ideas why the mapping is not matching s10 or the docs?

Cheers,

Neal


.
.
.


On our lab Thumper, they are c5t0 and c4t0.

Cindy

Neal Pollack wrote:

I'm setting up a new X4500 Thumper, and noticed suggestions/blogs
for setting up two boot disks as a zfs rpool mirror during installation.
But I can't seem to find instructions/examples for how to do this using
google, the blogs, or the Sun docs for X4500.

Can anyone share some instructions for setting up the rpool mirror
of the boot disks during the Solaris Nevada (SXCE) install?

Thanks,

Neal

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS usable space calculations (Help!)

2009-03-17 Thread Brent Wagner
Can someone point me to a document describing how available space in a
zfs is calculated or review the data below and tell me what I'm
missing? 

Thanks in advance,
-Brent
===
I have a home project with 3x250 GB+3x300 GB in raidz, so I expect to
lose 1x300 GB to parity.

Total size: 1650 GB
Total size using 1024 to measure: ~1534 GB

Expected raidz zpool size after losing 300 GB to parity: ~1350 GB
Expected raidz zpool size using 1024 to measure: ~1255.5 GB

Actual zpool size: 1.36T

Single zfs on the pool - available size: 1.11T

I realize zfs is going to have some overhead but 250 GB seems a little
excessive...right? I thought maybe the zpool was showing all 6 disks and
the filesystem reflected the remaining space after discounting the
parity disk but that doesn't add up in a way that makes sense either
(see above). Can someone help explain these numbers?

Thanks,
-Brent


-- 
Brent Wagner
Support Engineer - Windows/Linux/VMware
Sun Microsystems, Inc.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How do I mirror zfs rpool, x4500?

2009-03-17 Thread Mark J Musante


On 17 Mar, 2009, at 16.21, Bryan Allen wrote:


Then mirror the VTOC from the first (zfsroot) disk to the second:

# prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2
# zpool attach -f rpool c1t0d0s0 c1t1d0s0
# zpool status -v


And then you'll still need to run installgrub to put grub on the  
mirror.  That's not yet automatically done.



Regards,
markm


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mounting zfs file systems

2009-03-17 Thread Grant Lowe
Great explanation.  Thanks, Lori.





From: Lori Alt lori@sun.com
To: Grant Lowe gl...@sbcglobal.net
Cc: cindy.swearin...@sun.com; zfs-discuss@opensolaris.org
Sent: Tuesday, March 17, 2009 2:52:04 PM
Subject: Re: [zfs-discuss] Mounting zfs file systems

no, this is an incorrect diagnosis.  The problem is that by
using the -V option, you created a volume, not a file system.  
That is, you created a raw device.  You could then newfs
a ufs file system within the volume, but that is almost certainly
not what you want.

Don't use -V when you create the oracle/prd_data/db1
dataset.  Then it will be a mountable  file system.  You
will need to give it a mount point however by setting the
mountpoint property, since the default mountpoint won't
be what you want.

Lori


On 03/17/09 15:45, Grant Lowe wrote: 
Ok, Cindy.  Thanks. I would like to have one big pool and divide it into 
separate file systems for an Oracle database.  What I had before was a separate 
pool for each file system.  So does it look like I have to go back to what I had 
before?



- Original Message 
From: cindy.swearin...@sun.com cindy.swearin...@sun.com
To: Grant Lowe gl...@sbcglobal.net
Cc: zfs-discuss@opensolaris.org
Sent: Tuesday, March 17, 2009 2:20:18 PM
Subject: Re: [zfs-discuss] Mounting zfs file systems

Grant,

If I'm following correctly, you can't mount a ZFS resource
outside of the pool from which the resource resides.

Is this a UFS directory, here:

# mkdir -p /opt/mis/oracle/data/db1

What are you trying to do?

Cindy

Grant Lowe wrote:
  
Another newbie question:

I have a new system with zfs. I create a directory:

bash-3.00# mkdir -p /opt/mis/oracle/data/db1

I do my zpool:

bash-3.00# zpool create -f oracle c2t5006016B306005AAd0 c2t5006016B306005AAd1 
c2t5006016B306005AAd3 c2t5006016B306005AAd4 c2t5006016B306005AAd5 
c2t5006016B306005AAd6 c2t5006016B306005AAd7 c2t5006016B306005AAd8 
c2t5006016B306005AAd9 c2t5006016B306005AAd10 c2t5006016B306005AAd11 
c2t5006016B306005AAd12 c2t5006016B306005AAd13 c2t5006016B306005AAd14 
c2t5006016B306005AAd15 c2t5006016B306005AAd16 c2t5006016B306005AAd17 
c2t5006016B306005AAd18 c2t5006016B306005AAd19
bash-3.00# zfs create oracle/prd_data
bash-3.00# zfs create -b 8192 -V 44Gb oracle/prd_data/db1

I'm trying to set a mountpoint.  But trying to mount it doesn't work.

bash-3.00# zfs list
NAME  USED  AVAIL  REFER  MOUNTPOINT
oracle   44.0G   653G  25.5K  /oracle
oracle/prd_data  44.0G   653G  24.5K  /oracle/prd_data
oracle/prd_data/db1  22.5K   697G  22.5K  -
bash-3.00# zfs set mountpoint=/opt/mis/oracle/data/db1 oracle/prd_data/db1
cannot set property for 'oracle/prd_data/db1': 'mountpoint' does not apply to 
datasets of this type
bash-3.00#

What's the correct syntax?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org 
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org 
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS usable space calculations (Help!)

2009-03-17 Thread Craig Cory
Brent,


Brent Wagner wrote:
 Can someone point me to a document describing how available space in a
 zfs is calculated or review the data below and tell me what I'm
 missing?

 Thanks in advance,
 -Brent
 ===
 I have a home project with 3x250 GB+3x300 GB in raidz, so I expect to
 lose 1x300 GB to parity.

 Total size:1650GB
 Total size using 1024 to measure: ~1534 GB

 Expected raidz zpool size after losing 300 GB to parity: ~1350 GB
 Expected raidz zpool size using 1024 to measure: ~1255.5 GB

 Actual zpool size: 1.36T

 Single zfs on the pool - available size: 1.11T

 I realize zfs is going to have some overhead but 250 GB seems a little
 excessive...right? I thought maybe the zpool was showing all 6 disks and
 the filesystem reflected the remaining space after discounting the
 parity disk but that doesn't add up in a way that makes sense either
 (see above). Can someone help explain these numbers?

 Thanks,
 -Brent



When you say 3x250 GB+3x300 GB in raidz do you mean:

1) # zpool create mypool raidz 250gb-1 250gb-2 250gb-3 300gb-1 \
 300gb-2 300gb-3

or

2) # zpool create mypool raidz 250gb-1 250gb-2 250gb-3 \
 raidz 300gb-1 300gb-2 300gb-3

As I understand it, #1 would waste the extra 50gb on each 300gb drive and give
you 1500gb usable space. 250gb of that (1/6th) would be parity, so 1250gb
data space.

#2 would make 2 vdevs of 750gb and 900gb totaling 1650gb space. Parity would
use 250gb from the 1st vdev and 300gb from the second; so 1100gb of data
space is available.

Either way, when you list raidz* pools with
 # zpool list
you see the total physical space. When you list the filesystems with
 # zfs list
you get the usable filesystem space, which is where the parity is implemented.

Here's an example with 250MB files and 300MB files:

For #1 scenario:

# zpool create -f mypool1 raidz /250d1 /250d2 /250d3 /300d1 /300d2 /300d3

# zpool list
NAME      SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
mypool1  1.44G   145K  1.44G   0%  ONLINE  -

# zfs list
NAME  USED  AVAIL  REFER  MOUNTPOINT
mypool1   115K  1.16G  40.7K  /mypool1

# zpool status
  pool: mypool1
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
mypool1 ONLINE   0 0 0
  raidz1ONLINE   0 0 0
/250d1  ONLINE   0 0 0
/250d2  ONLINE   0 0 0
/250d3  ONLINE   0 0 0
/300d1  ONLINE   0 0 0
/300d2  ONLINE   0 0 0
/300d3  ONLINE   0 0 0

--
And for #2:
# zpool create -f mypool2 raidz /250d1 /250d2 /250d3 raidz /300d1 /300d2 /300d3

# zpool list
NAME      SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
mypool2  1.58G   157K  1.58G   0%  ONLINE  -

# zfs list
NAME  USED  AVAIL  REFER  MOUNTPOINT
mypool2   101K  1.02G  32.6K  /mypool2

# zpool status
  pool: mypool2
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
mypool2 ONLINE   0 0 0
  raidz1ONLINE   0 0 0
/250d1  ONLINE   0 0 0
/250d2  ONLINE   0 0 0
/250d3  ONLINE   0 0 0
  raidz1ONLINE   0 0 0
/300d1  ONLINE   0 0 0
/300d2  ONLINE   0 0 0
/300d3  ONLINE   0 0 0

errors: No known data errors
---

Does this describe what you're seeing?


Craig



-- 
Craig Cory
 Senior Instructor :: ExitCertified
 : Sun Certified System Administrator
 : Sun Certified Network Administrator
 : Sun Certified Security Administrator
 : Veritas Certified Instructor

 8950 Cal Center Drive
 Bldg 1, Suite 110
 Sacramento, California  95826
 [e] craig.c...@exitcertified.com
 [p] 916.669.3970
 [f] 916.669.3977
 [w] WWW.EXITCERTIFIED.COM
+-+
   OTTAWA | SACRAMENTO | MONTREAL | LAS VEGAS | QUEBEC CITY | CALGARY
SAN FRANCISCO | VANCOUVER | REGINA | WINNIPEG | TORONTO

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS usable space calculations (Help!)

2009-03-17 Thread Brent Wagner
Wow Craig - thank you so much for that thorough response.

I am only using 1 vdev and I didn't realize two things:

1) that 50 GB on each of the 300s is essentially wasted. I thought it
would spread 300 GB of parity across all 6 disks, leaving me with 1350
GB of data space. Instead, you're saying 250 GB of parity is spread
across all 6 disks and an additional 150 GB is ignored, leaving me with,
as you said, 1250 GB data space.

2) I wasn't sure if zpool list described the total before or after
parity was subtracted (and extra space ignored). Thanks for clearing
that up.

However, I'm still a little confused how this adds up to 1.11T and 1.36T
for zfs list and zpool list, respectively (my box is under construction
atm so I can't capture the exact output).

To minimize the wasted space, can I create 1x250 and 1x50 GB partitions
on each of the 300 GB drives, then make them a new raidz vdev?

1st raidz 6x250 GB partitions: 1250 GB data space
2nd raidz 3x50 GB partitions: 100 GB data space
total: 1350 GB data space

Can I pool these together into one large pool? I'm trying to think if it
would be possible to lose data based on any one disk failure.
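Something like this is what I have in mind, with made-up device names, where
the s0 slices are the 250 GB partitions and the s1 slices are the 50 GB
leftovers on the 300 GB drives:

# zpool create tank \
    raidz c0t0d0 c0t1d0 c0t2d0 c1t0d0s0 c1t1d0s0 c1t2d0s0 \
    raidz c1t0d0s1 c1t1d0s1 c1t2d0s1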

As my signature says, despite working for Sun I work on everything *but*
Solaris every day, so I really appreciate the guidance.

Just an FYI - I started with opensolaris but I needed to add a combo
IDE/SATA card to handle all my spare disks (mostly varying sizes) -
which wasn't recognized by opensolaris. I moved to Linux (which detects
the drives on the I/O card) with zfs-fuse (filesystem in userspace -
steps around CDDL/GPL incompatibility) but I found the features limited
and where it compiled on one distribution that had its own set of
unrelated issues, it failed to compile on another. Currently, I'm
installing OpenSolaris in a VirtualBox VM on a Linux host using raw disk
passthrough so I can use zfs with this I/O card. We'll see how it
goes :)
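For the raw-disk part I'm doing roughly this for each physical disk (file and
device names made up):

VBoxManage internalcommands createrawvmdk -filename ~/vdisks/disk1.vmdk -rawdisk /dev/sdb

and then attaching the resulting vmdk files to the VM.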

Thanks again,
-Brent

On Tue, 2009-03-17 at 18:02 -0700, Craig Cory wrote:
 Brent,
 
 
 Brent Wagner wrote:
  Can someone point me to a document describing how available space in a
  zfs is calculated or review the data below and tell me what I'm
  missing?
 
  Thanks in advance,
  -Brent
  ===
  I have a home project with 3x250 GB+3x300 GB in raidz, so I expect to
  lose 1x300 GB to parity.
 
  Total size:1650GB
  Total size using 1024 to measure: ~1534 GB
 
  Expected raidz zpool size after losing 300 GB to parity: ~1350 GB
  Expected raidz zpool size using 1024 to measure: ~1255.5 GB
 
  Actual zpool size: 1.36T
 
  Single zfs on the pool - available size: 1.11T
 
  I realize zfs is going to have some overhead but 250 GB seems a little
  excessive...right? I thought maybe the zpool was showing all 6 disks and
  the filesystem reflected the remaining space after discounting the
  parity disk but that doesn't add up in a way that makes sense either
  (see above). Can someone help explain these numbers?
 
  Thanks,
  -Brent
 
 
 
 When you say 3x250 GB+3x300 GB in raidz do you mean:
 
 1) # zpool create mypool raidz 250gb-1 250gb-2 250gb-3 300gb-1 \
  300gb-2 300gb-3
 
 or
 
 2) # zpool create mypool raidz 250gb-1 250gb-2 250gb-3 \
  raidz 300gb-1 300gb-2 300gb-3
 
 As I understand it, #1 would waste the extra 50gb on each 300gb drive and give
 you 1500gb usable space. 250gb of that (1/6th) would be parity, so 1250gb
 data space.
 
 #2 would make 2 vdevs of 750gb and 900gb totaling 1650gb space. Parity would
 use 250gb from the 1st vdev and 300gb from the second; so 1100gb of data
 space is available.
 
 Either way, when you list raidz* pools with
  # zpool list
 you see the total physical space. When you list the filesystems with
  # zfs list
 you get the usable filesystem space, which is where the parity is implemented.
 
 Here's an example with 250MB files and 300MB files:
 
 For #1 scenario:
 
 # zpool create -f mypool1 raidz /250d1 /250d2 /250d3 /300d1 /300d2 /300d3
 
 # zpool list
 NAME      SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
 mypool1  1.44G   145K  1.44G   0%  ONLINE  -
 
 # zfs list
 NAME  USED  AVAIL  REFER  MOUNTPOINT
 mypool1   115K  1.16G  40.7K  /mypool1
 
 # zpool status
   pool: mypool1
  state: ONLINE
  scrub: none requested
 config:
 
 NAMESTATE READ WRITE CKSUM
 mypool1 ONLINE   0 0 0
   raidz1ONLINE   0 0 0
 /250d1  ONLINE   0 0 0
 /250d2  ONLINE   0 0 0
 /250d3  ONLINE   0 0 0
 /300d1  ONLINE   0 0 0
 /300d2  ONLINE   0 0 0
 /300d3  ONLINE   0 0 0
 
 --
 And for #2:
 # zpool create -f mypool2 raidz /250d1 /250d2 /250d3 raidz /300d1 /300d2 
 /300d3
 
 # zpool list
 NAME      SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
 mypool2 

[zfs-discuss] rename(2), atomicity, crashes and fsync()

2009-03-17 Thread James Andrewartha
Hi all,

Recently there's been discussion [1] in the Linux community about how
filesystems should deal with rename(2), particularly in the case of a crash.
ext4 was found to truncate, after a crash, files that had been written with
open(foo.tmp), write(), close() and then rename(foo.tmp, foo). This is
because ext4 uses delayed allocation and may not write the contents to disk
immediately, but commits metadata changes quite frequently. So when
rename(foo.tmp, foo) is committed to disk, foo has a length of zero, which
is later updated when the data is written to disk. This means after a crash,
foo is zero-length, and both the new and the old data has been lost, which
is undesirable. This doesn't happen when using ext3's default settings
because ext3 writes data to disk before metadata (which has performance
problems, see Firefox 3 and fsync[2])

Ted Ts'o's (the main author of ext3 and ext4) response is that applications
which perform open(),write(),close(),rename() in the expectation that they
will either get the old data or the new data, but not no data at all, are
broken, and instead should call open(),write(),fsync(),close(),rename().
Most other people are arguing that POSIX says rename(2) is atomic, and while
POSIX doesn't specify crash recovery, returning no data at all after a crash
is clearly wrong, and excessive use of fsync is overkill and
counter-productive (Ted later proposes a yes-I-really-mean-it flag for
fsync). I've omitted a lot of detail, but I think this is the core of the
argument.

Now the question I have is: how does ZFS deal with
open(),write(),close(),rename() in the case of a crash? Will it always
return the new data or the old data, or will it sometimes return no data? Is
 returning no data defensible, either under POSIX or common sense? Comments
about other filesystems, eg UFS are also welcome. As a counter-point, XFS
(written by SGI) is notorious for data-loss after a crash, but its authors
defend the behaviour as POSIX-compliant.

Note this is purely a technical discussion - I'm not interested in replies
saying ?FS is a better filesystem in general, or on GPL vs CDDL licensing.

[1] https://bugs.launchpad.net/ubuntu/+source/linux/+bug/317781?comments=all
http://thunk.org/tytso/blog/2009/03/12/delayed-allocation-and-the-zero-length-file-problem/
http://lwn.net/Articles/323169/
http://mjg59.livejournal.com/108257.html http://lwn.net/Articles/323464/
http://thunk.org/tytso/blog/2009/03/15/dont-fear-the-fsync/
http://lwn.net/Articles/323752/ *
http://lwn.net/Articles/322823/ *
* are currently subscriber-only, email me for a free link if you'd like to
read them
[2] http://lwn.net/Articles/283745/

-- 
James Andrewartha | Sysadmin
Data Analysis Australia Pty Ltd | STRATEGIC INFORMATION CONSULTANTS
97 Broadway, Nedlands, Western Australia, 6009
PO Box 3258, Broadway Nedlands, WA, 6009
T: +61 8 9386 3304 | F: +61 8 9386 3202 | I: http://www.daa.com.au
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss