Re: [zfs-discuss] future of OpenSolaris

2010-02-22 Thread Michael Ramchand
I think Oracle have been quite clear about their plans for OpenSolaris. 
They have publicly said they plan to continue to support it and the 
community.


They're just a little distracted right now because they are in the 
process of on-boarding many thousands of Sun employees, and trying to get 
them feeling happy, comfortable and at home in their new surroundings so 
that they can start making money again.


The silence means that you're in a queue and they forgot to turn the 
hold music on. Have patience. :-)


On 02/22/10 09:22, Eugen Leitl wrote:

Oracle's silence is starting to become a bit ominous. What are
the future options for zfs, should OpenSolaris be left dead
in the water by Suracle? I have no insight into who the core
zfs developers are (have any been fired by Sun even prior to
the merger?), or who's paying them. Assuming a worst-case
scenario, what would be the best candidate for a fork? Nexenta?
Debian has already included FreeBSD as a kernel flavor in its
fold; it seems Nexenta could also be a good candidate.

Maybe anyone in the know could provide a short blurb on what
the state is, and what the options are.

   







Re: [zfs-discuss] Size discrepancy (beyond expected amount?)

2009-03-20 Thread Michael Ramchand
You also DON'T want to give just a single disk to your rpool. ZFS really 
needs to be able to fix errors when it finds them, and on a single disk it 
can detect them but not repair them.
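
For example, a second disk can be attached to turn an existing single-disk 
rpool into a mirror. A minimal sketch, assuming hypothetical device names 
(on a root pool you also need to install boot blocks on the new disk, e.g. 
with installgrub on x86):

  # attach c1t1d0s0 as a mirror of the existing rpool disk c1t0d0s0
  zpool attach rpool c1t0d0s0 c1t1d0s0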


Suggest you read the ZFS Best Practices Guide (again). 
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#Storage_Pools


Mike

Tomas Ögren wrote:

On 19 March, 2009 - Harry Putnam sent me these 1,4K bytes:

  

I'm finally getting close to the setup I wanted, after quite a bit of
experimentation and bugging these lists endlessly.

So first, thanks for your tolerance and patience.

My setup consists of 4 disks.  One holds the OS (rpool) and 3 more are all
the same model and brand, all 500GB.

I've created a zpool in raidz1 configuration with:

  zpool create zbk raidz1 c3d0 c4d0 c4d1

No errors showed up and zpool status shows no problems with those
three:
  pool: zbk
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zbk         ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c3d0    ONLINE       0     0     0
            c4d0    ONLINE       0     0     0
            c4d1    ONLINE       0     0     0


However, I appear to have lost an awful lot of space... even above
what I expected.

  df -h
[...]
  zbk   913G   26K  913G   1% /zbk

It appears something like 1 entire disk is gobbled up by raidz1.

The same disks configured in a pool with no raidz1 show 1.4TB with df.

I was under the impression raidz1 would take something like 20%, but
this is more like 33.33%.

So, is this to be expected or is something wrong here?



Not a percentage at all: raidz1 takes 1 disk, raidz2 takes 2 disks.
This is to be able to handle 1 vs 2 any-disk failures.

Then there's the 1000 vs 1024 factor as well. Your HD manufacturer says
500GB; the rest of the computer industry says ~465GiB.
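
A quick back-of-the-envelope check of those two effects together (a sketch,
assuming three 500 GB drives, i.e. 500 * 10^9 bytes each, with one drive's
worth of space going to parity):

  # usable data space of a 3-disk raidz1, expressed in GiB (1024^3 bytes)
  echo $(( 2 * 500 * 1000**3 / 1024**3 ))    # prints 931

which is close to the 913G that df reports; the remainder goes to pool
metadata and internal reservation.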

/Tomas
  







Re: [zfs-discuss] Disk usage

2009-03-17 Thread Michael Ramchand

Grant Lowe wrote:

Hey all,

I have a question/puzzle with zfs.  See the following:

bash-3.00# df -h | grep d25 ; zfs list | grep d25

FILESYSTEM      SIZE   USED  AVAIL  CAPACITY  MOUNTED ON
r12_data/d25    659G    40G    63G     39%    /opt/d25/oakwc12
(df -h says the d25 file system is 659GB, with 40GB used and 63GB available?)
r12_data/d24     42G    40G   2.1G     95%    /opt/d24/oakwcr12

NAME           USED  AVAIL  REFER  MOUNTPOINT
r12_data/d25   760K  62.7G  39.9G  /opt/d25/oakwc12
(zfs list says the d25 file system has 63GB available?)
r12_data/d24  39.9G  2.14G  39.9G  /opt/d24/oakwcr12


Shouldn't the size of the new filesystem (d25) be what the clone was allocated?

Hi Grant,

We'd need more info than that to figure out what's actually going on.

Is d25 a clone of something? If so, what? Can we see the specs of that as
well?

Does d25 have any reservations or a quota?

What does zfs list r12_data show?

Do you have snapshots? zfs list -t all will show them.

Finally, the clone will only be the size of its delta from the source.
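
A minimal sketch of why that is, with hypothetical dataset and snapshot
names (not the masked ones from this thread):

  # snapshot the source, then clone it
  zfs snapshot tank/src@snap1
  zfs clone tank/src@snap1 tank/src-clone
  # right after creation the clone's USED is near zero: it shares the
  # snapshot's blocks, and USED only grows as the clone diverges from it
  zfs list -o name,used,referenced tank/src tank/src-clone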

HTH




Re: [zfs-discuss] Disk usage

2009-03-17 Thread Michael Ramchand

Well, it is kinda confusing...

In short, for a ZFS dataset df -h reports:

 - size:  the size of the WHOLE pool (unless you've set a quota on the
          dataset, in which case it reports the quota),
 - used:  the amount of space that particular dataset is using,
 - avail: the total amount of free space on the WHOLE pool (again, unless
          you've got a quota or reservation set).


So in your original mail, 659G is the size of the DATA on the r12_data 
pool.  zfs list r12_data says you are using 596G. I think this is the 
RAW capacity used, and I reckon you are using compression. (zfs get 
compressratio r12_data will give you something like 1.1).
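
As a rough reconciliation of the numbers in this thread (my assumption here
being that df's size for a dataset works out to pool used + pool avail):

  596G  (r12_data USED in zfs list) + 62.7G (AVAIL)  ~=  659G  (df size for d25)
  39.9G (REFER of r12_data/d25)                      ~=   40G  (df used for d25)
  62.7G (pool AVAIL)                                 ~=   63G  (df avail for d25)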


However, the df -h output for r12_data/d24 should show identical 1st and
3rd fields, but it doesn't. (Could you re-run it?)


Same goes for the zfs list commands.

Could you try running zfs list -o space to get a fuller breakdown of how 
the space is being used?


Mike



Grant Lowe wrote:

Hi Mike,

Yes, d25 is a clone of d24. Here are some data points about it:

bash-3.00# zfs get reservation r12_data/d25
NAME  PROPERTY VALUE SOURCE
r12_data/d25  reservation  none  default
bash-3.00# zfs get quota r12_data/d25
NAME  PROPERTY  VALUE SOURCE
r12_data/d25  quota none  default
bash-3.00#
bash-3.00# zfs list r12_data
NAME   USED  AVAIL  REFER  MOUNTPOINT
r12_data   596G  62.7G  24.5K  none
bash-3.00#
bash-3.00# zfs list -t snapshot r12_data/d...@a
NAME USED  AVAIL  REFER  MOUNTPOINT
r12_data/d...@a   904K  -  39.9G  -
bash-3.00#

Thanks for the response.  Did you need any more data points from me?









Re: [zfs-discuss] Disk usage

2009-03-17 Thread Michael Ramchand

Sorry, no, I meant the literal value space. I assume you are on Sol 10.
From the zfs(1M) man page:

     o   The value space to display space usage properties
         on file systems and volumes. This is a shortcut for
         -o name,avail,used,usedsnap,usedds,usedrefreserv,usedchild
         -t filesystem,volume.
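
If -o space isn't available on your build, the long form it abbreviates
should give the same breakdown. A sketch using the pool from this thread,
with -r added (my addition) to include the child datasets:

  zfs list -o name,avail,used,usedsnap,usedds,usedrefreserv,usedchild \
      -t filesystem,volume -r r12_data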


Grant Lowe wrote:

If you meant available, here's the output of that:

bash-3.00# zfs list -o available r12_data
AVAIL
62.7G
bash-3.00# zfs list -o available r12_data/d24
AVAIL
2.14G
bash-3.00# zfs list -o available r12_data/d25
AVAIL
62.7G
bash-3.00#



