Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-02-28 Thread Chris Ridd


On 28 Feb 2009, at 07:26, C. Bergström wrote:


Blake wrote:

Gnome GUI for desktop ZFS administration

With the libzfs Java bindings I am plotting a web-based interface..
I'm not sure if that would meet this GNOME requirement though..
Knowing specifically what you'd want to do in that interface would
be good.. I planned to compare it to Fishworks and the Nexenta
appliance as a base..


Recent builds of OpenSolaris come with SWT from the Eclipse project,  
which makes it possible for Java apps to use real GNOME/GTK native  
UIs. So your libzfs bindings may well be useful with that.


Cheers,

Chris


Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-02-28 Thread Bryan Allen
I for one would like an interactive attribute for zpools and
filesystems, specifically for destroy.

The existing behavior (no prompt) could be the default, but all
filesystems would inherit from the zpool's attribute, so I'd only
need to set interactive=on for the pool itself, not for each
filesystem.
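
A sketch of how that might look (interactive is a hypothetical
property, not one that exists today):

  # zpool set interactive=on tank
  # zfs destroy tank/worthmorethanyourjob
  destroy 'tank/worthmorethanyourjob'? (y/n) n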

I have yet (in almost two years of using ZFS) to bone myself by
accidentally destroying tank/worthmorethanyourjob, but it's only
a matter of time, regardless of how careful I am.

The rm vs. zfs destroy argument doesn't hold much water for me. I
don't use rm -i, but destroying a single file or a hierarchy of
directories is somewhat different from destroying a filesystem or an
entire pool. At least to my mind.

As such, consider it a peace-of-mind feature.
-- 
bda
Cyberpunk is dead.  Long live cyberpunk.
http://mirrorshades.org


Re: [zfs-discuss] Details on raidz boot + zfs patents?

2009-02-28 Thread Mike Gerdts
On Sat, Feb 28, 2009 at 4:53 AM, C. Bergström
cbergst...@netsyncro.com wrote:
 The other question, that I am less worried about, is would this violate any
 patents.. I mean.. Sun added the initial zfs support to grub and this is
 essentially extending that, but I'm not aware of any patent provisions on
 that code or some royalty-free statement about ZFS-related patents from
 Sun.. (Frankly.. I look at Sun as /similar/ to Canonical in that I assume
 they only sue to protect themselves and don't go after any well-intentioned
 FOSS project..)

See http://opensolaris.org/os/about/faq/licensing_faq/#patents.

-- 
Mike Gerdts
http://mgerdts.blogspot.com/


Re: [zfs-discuss] Can VirtualBox run a 64 bit guests on 32 bit host

2009-02-28 Thread Blake Irvin

Check out http://www.sun.com/bigadmin/hcl/data/os


Sent from my iPhone

On Feb 28, 2009, at 2:20 AM, Harry Putnam rea...@newsguy.com wrote:


Brian Hechinger wo...@4amlunch.net writes:

[...]


I think it would be better to answer this question than it would be to
attempt to answer the VirtualBox question (I run it on a 64-bit OS,
so I can't really answer that anyway).


Thanks, yes - appreciated here.


The benefit to running ZFS on a 64-bit OS is if you have a large
amount of RAM.  I don't know what the breaking point is, but I can
definitely tell you that a 32-bit kernel and 4 GB of RAM don't mix
well.  If all you are doing is testing ZFS on VMs you probably
aren't all that worried about performance, so it really shouldn't be
an issue for you to run 32-bit.  I'd say keep your RAM allocations
down, and I wish I knew what to tell you to keep it under.
Hopefully someone who has a better grasp of all that can chime in.

Once you put it on real hardware, however, you really want a 64-bit
CPU and as much RAM as you can toss at the machine.
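
(For what it's worth, the usual knob for keeping ZFS memory use down is
capping the ARC in /etc/system; the 512 MB value here is only an
illustration, and a reboot is needed for it to take effect:

  set zfs:zfs_arc_max = 0x20000000
)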



Sounds sensible, thanks for the common-sense input.

Just the little I've tinkered with zfs so far, I'm in love already. zfs
is much more responsive for some kinds of things I'm used to waiting for
on Linux reiserfs.

Commands like du, mv, rm etc. on hefty amounts of data are always slow
as molasses on Linux/reiserfs (and reiserfs is faster than ext3).  I
haven't tried ext4 but have been told it is no faster.

Whereas zfs gets those jobs done in short order... very noticeably
faster, though I am just going by feel, at least on very similar
hardware (CPU-wise; the Linux box is a 3.06 GHz Intel Celeron with 2 GB
of RAM).

I guess there is something called btrfs (nicknamed butter fs) that is
supposed to be Linux's answer to zfs, but it isn't ready for prime time
yet, and I'd say it will have a ways to go to compare to zfs.

My usage and skill level is easily the lowest on this list,
but even I see some real nice features with zfs.  It seems tailor-made
for a semi-ambitious home NAS.

So Brian, if you can bear with my windiness a bit more, one of the
things flopping around in the back of my mind is something already
mentioned here too.. change out the mobo instead of dinking around
with an add-on PCI SATA controller..

I have 64-bit hardware... but am a bit scared of having lots of
trouble getting opensol to run peacefully on it.  It's a (somewhat
old-fashioned now) Athlon64 3400+ at 2.2 GHz on an Aopen AK86-L mobo
(socket 754).

The little Java tool that tests the hardware says my SATA controller
won't work (the testing tool saw it as a VIA RAID controller) and
suggests I turn off RAID in the BIOS.

After a careful look in the BIOS menus I'm not finding any way to
turn it off, so I'm guessing the SATA ports will be useless unless I
install a PCI add-on SATA controller.

So I'm thinking of just changing out the mobo for something with stuff
that is known to work.

The machine came with an Asus mobo that I ruined myself by dicking
around installing RAM... somehow shorted out something, and the mobo
became useless.

But I'm thinking of turning to Asus again and making sure there is
onboard SATA with at least 4 ports and preferably 6.

So cutting to the chase here... would you happen to have a
recommendation from your own experience, or something you've heard
will work and that can take more RAM... my current setup tops out at
3 GB.



Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-02-28 Thread Blake Irvin

Shrinking pools would also solve the right-sizing dilemma.

Sent from my iPhone

On Feb 28, 2009, at 3:37 AM, Joe Esposito j...@j-espo.com wrote:


I'm using opensolaris and zfs at my house for my photography storage
as well as for an offsite backup location for my employer and several
side web projects.

I have an 80 GB drive as my root drive.  I recently took possession of
two 74 GB 10k drives which I'd love to add as a mirror to replace the
80 GB drive.

From what I gather it is only possible if I zfs export my storage
array and reinstall Solaris on the new disks.

So I guess I'm hoping zfs shrink and grow commands show up sooner or
later.
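
(A rough sketch of the workaround that implies today - migrating the
root pool by hand with send/receive.  Pool and device names are
illustrative and the steps are from memory, so treat this as a sketch,
not a recipe:

  # zpool create newpool mirror c1t0d0s0 c1t1d0s0
  # zfs snapshot -r rpool@move
  # zfs send -R rpool@move | zfs receive -F -d newpool
  # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0
  # zpool set bootfs=newpool/ROOT/opensolaris newpool

...then point the BIOS at the new boot disk.)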


Just a data point.

Joe Esposito
www.j-espo.com

On 2/28/09, C. Bergström cbergst...@netsyncro.com wrote:

Blake wrote:

Gnome GUI for desktop ZFS administration



On Fri, Feb 27, 2009 at 9:13 PM, Blake blake.ir...@gmail.com  
wrote:



zfs send is great for moving a filesystem with lots of tiny files,
since it just handles the blocks :)



I'd like to see:

pool-shrinking (and an option to shrink disk A when I want disk B to
become a mirror, but A is a few blocks bigger)

This may be interesting... I'm not sure how often you need to shrink a
pool though?  Could this be classified more as a Home or SME level
feature?

install to mirror from the liveCD gui


I'm not working on OpenSolaris at all, but for when my project's
installer is more ready /we/ can certainly do this..

zfs recovery tools (sometimes bad things happen)

Agreed.. part of what I think keeps zfs so stable though is the complete
lack of dependence on any recovery tools..  It forces customers to bring
up the issue instead of applying a dirty hack nobody knows about.

automated installgrub when mirroring an rpool


This goes back to an installer option?
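
(For reference, the manual procedure today is roughly the two steps
below; device names are illustrative:

  # zpool attach rpool c0t0d0s0 c0t1d0s0
  # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
)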

./C



Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-02-28 Thread Mike Gerdts
On Sat, Feb 28, 2009 at 1:20 AM, Richard Elling
richard.ell...@gmail.com wrote:
 David Magda wrote:
 On Feb 27, 2009, at 20:02, Richard Elling wrote:
 At the risk of repeating the Best Practices Guide (again):
 The zfs send and receive commands do not provide an enterprise-level
 backup solution.

 Yes, in its current state; hopefully that will change some point in the
 future (which is what we're talking about with GSoC--the potential to change
 the status quo).

 I suppose, but considering that enterprise backup solutions exist,
 and some are open source, why reinvent the wheel?
 -- richard

The default mode of operation for every enterprise backup tool that I
have used is file-level backups.  The determination of which files
need to be backed up seems to be done by crawling the file system
looking for files that have an mtime after the previous backup.
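
(In shell terms, the classic crawl looks something like this;
/var/backup/lastrun is an assumed timestamp file:

  # find /export -newer /var/backup/lastrun -type f -print
  # touch /var/backup/lastrun
)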

Areas of strength for such tools include:

- Works with any file system that provides a POSIX interface
- Restore of a full backup is an accurate representation of the data backed up
- Restore can happen to a different file system type
- Restoring an individual file is possible

Areas of weakness include:

- Extremely inefficient for file systems with lots of files and little change
- Restore of full + incremental tends to leave extra (deleted) files, because
support for tracking deletions is spotty or its performance overhead leads
to it being disabled
- Large files that have blocks rewritten get backed up in full each time
- Restores of file systems with lots of small files (especially in one
directory) are extremely slow

There exist features (sometimes expensive add-ons) that deal with some
of these shortcomings via:

- Keeping track of deleted files so that a restore is more
representative of what is on disk during the incremental backup.
Administration manuals typically warn that this has a big performance
and/or size overhead on the database used by the backup software.
- Including add-ons that hook into other components (e.g. VxFS storage
checkpoints, Oracle RMAN) that provide something similar to
block-level incremental backups

Why re-invent the wheel?

- People are more likely to have snapshots available for file-level
restores, and as such a zfs send data stream would only be used in
the event of a complete pool loss.
- It is possible to provide a general block-level backup solution so
that every product doesn't have to invent it.  This gives ZFS another
feature benefit to put it higher in the procurement priority.
- File creation slowness can likely be avoided, allowing restores to
happen at tape speed
- To be competitive with NetApp snapmirror to tape
- Even having a zfs(1M) option that could list the files that changed
between snapshots would be very helpful, to prevent file system crawls
and to avoid being fooled by bogus mtimes (sketched below).
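
(Hypothetical syntax for such an option - nothing like this exists in
zfs(1M) today, and the output format is invented for illustration:

  # zfs diff tank/home@monday tank/home@tuesday
  M       /tank/home/photos/img_0042.jpg
  +       /tank/home/photos/img_0100.jpg
  -       /tank/home/tmp/scratch.dat
)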


-- 
Mike Gerdts
http://mgerdts.blogspot.com/


Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-02-28 Thread Casper . Dik

I'm using opensolaris and zfs at my house for my photography storage
as well as for an offsite backup location for my employer and several
side web projects.

I have an 80 GB drive as my root drive.  I recently took possession of
two 74 GB 10k drives which I'd love to add as a mirror to replace the
80 GB drive.

Why do you want to use a small 10K rpm disk?

A modern 1TB disk at 5400/7200 rpm (at $100) will put it to shame.

Casper



Re: [zfs-discuss] Details on raidz boot + zfs patents?

2009-02-28 Thread C. Bergström

Mike Gerdts wrote:

On Sat, Feb 28, 2009 at 4:53 AM, C. Bergström
cbergst...@netsyncro.com wrote:
  

The other question, that I am less worried about, is would this violate any
patents.. I mean.. Sun added the initial zfs support to grub and this is
essentially extending that, but I'm not aware of any patent provisions on
that code or some royalty-free statement about ZFS-related patents from
Sun.. (Frankly.. I look at Sun as /similar/ to Canonical in that I assume
they only sue to protect themselves and don't go after any well-intentioned
FOSS project..)



See http://opensolaris.org/os/about/faq/licensing_faq/#patents.
  
Sun has contributed zfs code to their grub fork, but it's not under the 
CDDL.  So this doesn't apply.




[zfs-discuss] At Wits End for ZFS Permission Settings

2009-02-28 Thread Steven Sim

All;

I do apologize for making this query in this list. But I am at my wits end.

I have a directory like so

$ ls -l
total 47
drwxr-xr-x  19 admin    admin     23 Feb 27 17:52 Named
drw-r-----  74 admin    admin    556 Feb 25 03:46 Not Sorted   <--- Directory in Question


$ ls -dv "Not Sorted"
drw-r-----  74 admin    admin    556 Feb 25 03:46 Not Sorted
0:owner@:execute:deny
1:owner@:list_directory/read_data/add_file/write_data/add_subdirectory
/append_data/write_xattr/write_attributes/write_acl/write_owner
:allow
2:group@:add_file/write_data/add_subdirectory/append_data/execute:deny
3:group@:list_directory/read_data:allow
4:everyone@:list_directory/read_data/add_file/write_data
/add_subdirectory/append_data/write_xattr/execute/write_attributes
/write_acl/write_owner:deny
5:everyone@:read_xattr/read_attributes/read_acl/synchronize:allow

But I cannot access the directory "Not Sorted" as user "admin" AT ALL.

I changed my root path to ensure that chmod points to the chmod in
/usr/bin as opposed to /usr/gnu/bin.


(sorry, but I really think that placing the GNU chmod first in the
default root path is a really dumb idea)


I then did (as root)

# chmod -R A- "Not Sorted"

in an attempt to remove all ACLs.

Didn't work.

I tried setting the entire ACL manually via (again as root)

# chmod -R A=owner@:read_data/write_data:allow,group@:read_data:allow "Not Sorted"


drw-r-----  74 admin    admin    556 Feb 25 03:46 Not Sorted   <--- Directory in Question


Didn't work either. User admin is still unable to enter.

Again as root

# chmod -R A=owner@:read_data/write_data:allow,group@:read_data:allow "Not Sorted"


# ls -dv "Not Sorted"
drw-r-----+ 74 admin    admin    556 Feb 25 03:46 Not Sorted
0:user:admin:list_directory/read_data/add_file/write_data:allow
1:group@:list_directory/read_data:allow
2:owner@:execute:deny
3:owner@:list_directory/read_data/add_file/write_data/add_subdirectory
/append_data/write_xattr/write_attributes/write_acl/write_owner
:allow
4:group@:add_file/write_data/add_subdirectory/append_data/execute:deny
5:group@:list_directory/read_data:allow
6:everyone@:list_directory/read_data/add_file/write_data
/add_subdirectory/append_data/write_xattr/execute/write_attributes
/write_acl/write_owner:deny
7:everyone@:read_xattr/read_attributes/read_acl/synchronize:allow

User admin STILL cannot go in!

What gives?

Warmest Regards
Steven Sim





Re: [zfs-discuss] Details on raidz boot + zfs patents?

2009-02-28 Thread Joerg Schilling
C. Bergström cbergst...@netsyncro.com wrote:

  See http://opensolaris.org/os/about/faq/licensing_faq/#patents.

 Sun has contributed zfs code to their grub fork, but it's not under the 
 CDDL.  So this doesn't apply.

Under GPLv2 you may only contribute code to which your patents apply if
you grant royalty-free usage. BTW: this is why the FreeDB (CDDB) code is
still free.


Jörg

-- 
 EMail: jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
        j...@cs.tu-berlin.de (uni)
        joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL:   http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily


Re: [zfs-discuss] At Wits End for ZFS Permission Settings

2009-02-28 Thread Cindy Swearingen
Hi Steven,

I don't have access to my usual resources to test the ACL syntax, but
I think the root cause is that you don't have execute permission
on the "Not Sorted" directory.

Try the chmod syntax again, but this time include execute:allow for
admin on "Not Sorted", or add it like this:

# chmod A+user:admin:execute:allow "Not Sorted"

See chmod(1) for more info.
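
(Untested from memory - a full reset that includes execute might look
like this:

# chmod -R A=owner@:read_data/write_data/execute:allow,group@:read_data/execute:allow "Not Sorted"
)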

Cindy



Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-02-28 Thread Joe Esposito
On Sat, Feb 28, 2009 at 8:31 AM, casper@sun.com wrote:


 I'm using opensolaris and zfs at my house for my photography storage
 as well as for an offsite backup location for my employer and several
 side web projects.
 
 I have an 80 GB drive as my root drive.  I recently took possession of
 two 74 GB 10k drives which I'd love to add as a mirror to replace the
 80 GB drive.

 Why do you want to use a small 10K rpm disk?

 A modern 1TB disk at 5400/7200 rpm (at $100) will put it to shame.

 Casper


Fair enough.  I just have a pair of these sitting here from a pull at work.
The data array is currently 4x1TB, with another hot-swap bay ready for 4x???
when the need arises.


Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-02-28 Thread Tim
On Sat, Feb 28, 2009 at 8:28 AM, Joe Esposito j...@j-espo.com wrote:



 On Sat, Feb 28, 2009 at 8:31 AM, casper@sun.com wrote:


 I'm using opensolaris and zfs at my house for my photography storage
 as well as for an offsite backup location for my employer and several
 side web projects.
 
  I have an 80 GB drive as my root drive.  I recently took possession of
  two 74 GB 10k drives which I'd love to add as a mirror to replace the
  80 GB drive.

 Why do you want to use a small 10K rpm disk?

 A modern 1TB disk at 5400/7200 rpm (at $100) will put it to shame.

 Casper


 fair enough.  I just have a pair of these sitting here from a pull at work.
  The data array is currently 4x1TB with another hot swap bay ready for 4x???
 when the need arises.


That's not entirely true.  Maybe it will put it to shame at streaming
sequential I/O.  The 10k drives will still wipe the floor with any modern
7200 RPM drive for random I/O and seek times.

--Tim


Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-02-28 Thread Bob Friesenhahn

On Sat, 28 Feb 2009, Tim wrote:


That's not entirely true.  Maybe it will put it to shame at streaming
sequential I/O.  The 10k drives will still wipe the floor with any modern
7200rpm drive for random IO and seek times.


Or perhaps streaming sequential I/O will have similar performance,
with much better performance for random I/O and seek times.  It is
always best to consult the vendor spec sheet.


Regardless, it is much easier to update with the same size or a
larger drive.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/


Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-02-28 Thread Thomas Wagner
  pool-shrinking (and an option to shrink disk A when I want disk B to
  become a mirror, but A is a few blocks bigger)
  This may be interesting... I'm not sure how often you need to shrink a pool
  though?  Could this be classified more as a Home or SME level feature?

Enterprise environments, especially SAN environments, need this.

Projects own their own pools and constantly grow and *shrink* space.
And they have no downtime available for that.

give a +1 if you agree

Thomas



Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-02-28 Thread Thomas Wagner
I would really add: make an insane zfs destroy -r poolname as harmless as
zpool destroy poolname (which is recoverable)

  zfs destroy -r poolname/filesystem

  this should behave like this:

  o snapshot the filesystem(s) to be deleted (each, named
@deletedby_operatorname_date)

  o hide the snapshot as long as the pool has enough space and the
property snapshotbeforedelete=on (default off) is set to 'on'

  o free space by removing those snapshots no earlier than configured
in an inheritable pool/filesystem property snapshotbeforedeleteremoval=3days
(=0 preserve forever, 30min preserve for 30 minutes, ...)

  o prevent deletion of a pool or filesystem if at least one
snapshot from the above save actions exists down the tree

  o purging of snapshots would be done by
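
(All hypothetical - none of these properties exist today; the intended
workflow would look roughly like:

  # zfs set snapshotbeforedelete=on tank
  # zfs set snapshotbeforedeleteremoval=3days tank
  # zfs destroy -r tank/projects
  ... the data survives as a hidden tank/projects@deletedby_tw_20090228
      and is reclaimed automatically after 3 days
)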


To be honest, I don't want a discussion like the rm -rf one.
In front of the keyboard or inside scripts we are all humans, with
all their mistakes.  Unlike rm -rf, the ZFS design
should take this extension without major changes.  It should be
a generic rule of thumb to implement safety if it is possible
at reasonably low cost.

I think the full range of users, Enterprise to Home, will appreciate
that their multi-million-$$-business/home_data does not go down
accidentally, with interactive=on (Bryan) or the idea
written here.  This way, if someone makes an error, all the
data could still be there (!)... ZFS should protect the user as well,
and not only look at hardware redundancy.

Thomas

PS: think of the day when simple operator $NAME makes a typo
zfs destroy -r poolname and all the data still sits on the
disk.  But no one is able to bring that valuable data back,
except restoration from tape with hours of downtime.
Sorry for repeating that, it hurts so much to not have
this feature.

On Sat, Feb 28, 2009 at 04:35:05AM -0500, Bryan Allen wrote:
 I for one would like an interactive attribute for zpools and
 filesystems, specifically for destroy.
 
 The existing behavior (no prompt) could be the default, but all
 filesystems would inherit from the zpool's attribute, so I'd only
 need to set interactive=on for the pool itself, not for each
 filesystem.
 
 I have yet (in almost two years of using ZFS) to bone myself by
 accidentally destroying tank/worthmorethanyourjob, but it's only
 a matter of time, regardless of how careful I am.
 
 The rm vs. zfs destroy argument doesn't hold much water for me. I
 don't use rm -i, but destroying a single file or a hierarchy of
 directories is somewhat different from destroying a filesystem or an
 entire pool. At least to my mind.
 
 As such, consider it a peace-of-mind feature.
 -- 
 bda
 Cyberpunk is dead.  Long live cyberpunk.
 http://mirrorshades.org
 

-- 
Thomas Wagner
+49-171-6135989  http://www.wagner-net.net


Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-02-28 Thread Nicolas Williams
On Sat, Feb 28, 2009 at 10:44:59PM +0100, Thomas Wagner wrote:
   pool-shrinking (and an option to shrink disk A when I want disk B to
   become a mirror, but A is a few blocks bigger)
   This may be interesting... I'm not sure how often you need to shrink a
   pool though?  Could this be classified more as a Home or SME level feature?
 
 Enterprise environments, especially SAN environments, need this.
 
 Projects own their own pools and constantly grow and *shrink* space.
 And they have no downtime available for that.

Multiple pools on one server only makes sense if you are going to have
different RAS for each pool for business reasons.  It's a lot easier to
have a single pool though.  I recommend it.

Nico
-- 


Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-02-28 Thread Mike Gerdts
On Sat, Feb 28, 2009 at 4:33 PM, Nicolas Williams
nicolas.willi...@sun.com wrote:
 On Sat, Feb 28, 2009 at 10:44:59PM +0100, Thomas Wagner wrote:
    pool-shrinking (and an option to shrink disk A when I want disk B to
    become a mirror, but A is a few blocks bigger)
    This may be interesting... I'm not sure how often you need to shrink a
    pool though?  Could this be classified more as a Home or SME level feature?
 
  Enterprise environments, especially SAN environments, need this.
 
  Projects own their own pools and constantly grow and *shrink* space.
  And they have no downtime available for that.

 Multiple pools on one server only makes sense if you are going to have
 different RAS for each pool for business reasons.  It's a lot easier to
 have a single pool though.  I recommend it.

Other scenarios for multiple pools include:

- Need independent portability of data between servers.  For example,
in a HA cluster environment, various workloads will be mapped to
various pools.  Since ZFS does not do active-active clustering, a
single pool for anything other than a simple active-standby cluster is
not useful.

- Array based copies are needed.  There are times when copies of data
are performed at a storage array level to allow testing and support
operations to happen on different spindles.  For example, in a
consolidated database environment, each database may be constrained to
a set of spindles so that each database can be replicated or copied
independent of the various others.
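
(In shell terms, the portability point is that each zone's pool can move
independently between heads; the pool name here is hypothetical:

  node1# zpool export zonepool-web
  node2# zpool import zonepool-web
)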

-- 
Mike Gerdts
http://mgerdts.blogspot.com/


Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-02-28 Thread Aaron Blew
Absolutely agree. I'd love to be able to free up some LUNs that I
don't need in the pool any more.

Also, concatenation of devices in a zpool would be great for devices
that have LUN limits.  It also seems like it may be an easy thing to
implement.

-Aaron

On 2/28/09, Thomas Wagner thomas.wag...@gmx.net wrote:
  pool-shrinking (and an option to shrink disk A when I want disk B to
  become a mirror, but A is a few blocks bigger)
  This may be interesting... I'm not sure how often you need to shrink a
  pool though?  Could this be classified more as a Home or SME level feature?

 Enterprise environments, especially SAN environments, need this.

 Projects own their own pools and constantly grow and *shrink* space.
 And they have no downtime available for that.

 give a +1 if you agree

 Thomas



-- 
Sent from my mobile device


Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-02-28 Thread Bob Netherton

 Multiple pools on one server only makes sense if you are going to have
 different RAS for each pool for business reasons.  It's a lot easier to
 have a single pool though.  I recommend it.

A couple of other things to consider to go with that recommendation.

- never build a pool larger than you are willing to restore.   Bad
things can still happen that would require you to restore the entire
pool.  Convenience and SLAs aren't always in agreement :-)   The
advances in ZFS availability might make me look at my worst-case
restore scenario a little differently - but there will still
be a restore case that worries me.

- as I look at the recent lifecycle improvements with zones (in the
Solaris 10 context of zones), I really like upgrade on attach.   That
means I will be slinging zones more freely.   So I need to design my
pools to match that philosophy.

- if you are using clustering technologies, pools will go hand in
hand with failover boundaries.   So if I have multiple failover
zones, I will have multiple pools.


Bob




Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-02-28 Thread Nicolas Williams
On Sat, Feb 28, 2009 at 05:19:26PM -0600, Mike Gerdts wrote:
 On Sat, Feb 28, 2009 at 4:33 PM, Nicolas Williams
 nicolas.willi...@sun.com wrote:
  On Sat, Feb 28, 2009 at 10:44:59PM +0100, Thomas Wagner wrote:
    pool-shrinking (and an option to shrink disk A when I want disk B to
    become a mirror, but A is a few blocks bigger)
     This may be interesting... I'm not sure how often you need to shrink a
     pool though?  Could this be classified more as a Home or SME level feature?
  
   Enterprise environments, especially SAN environments, need this.
  
   Projects own their own pools and constantly grow and *shrink* space.
   And they have no downtime available for that.
 
  Multiple pools on one server only makes sense if you are going to have
  different RAS for each pool for business reasons.  It's a lot easier to
  have a single pool though.  I recommend it.
 
 Other scenarios for multiple pools include:
 
 - Need independent portability of data between servers.  For example,
 in a HA cluster environment, various workloads will be mapped to
 various pools.  Since ZFS does not do active-active clustering, a
 single pool for anything other than a simple active-standby cluster is
 not useful.

Right, but normally each head in a cluster will have only one pool
imported.

The Sun Storage 7xxx do this.  One pool per-head, two pools altogether
in a cluster.

 - Array based copies are needed.  There are times when copies of data
 are performed at a storage array level to allow testing and support
 operations to happen on different spindles.  For example, in a
 consolidated database environment, each database may be constrained to
 a set of spindles so that each database can be replicated or copied
 independent of the various others.

This gets you back into managing physical space allocation.  Do you
really want that?  If you're using zvols you can do array-based copies
of your zvols.  If you're using filesystems then you should just use
normal backup tools.

Nico
-- 


Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-02-28 Thread Mike Gerdts
On Sat, Feb 28, 2009 at 8:34 PM, Nicolas Williams
nicolas.willi...@sun.com wrote:
 On Sat, Feb 28, 2009 at 05:19:26PM -0600, Mike Gerdts wrote:
 On Sat, Feb 28, 2009 at 4:33 PM, Nicolas Williams
 nicolas.willi...@sun.com wrote:
  On Sat, Feb 28, 2009 at 10:44:59PM +0100, Thomas Wagner wrote:
    pool-shrinking (and an option to shrink disk A when I want disk B to
    become a mirror, but A is a few blocks bigger)
     This may be interesting... I'm not sure how often you need to shrink a
     pool though?  Could this be classified more as a Home or SME level feature?
 
  Enterprise environments, especially SAN environments, need this.
 
  Projects own their own pools and constantly grow and *shrink* space.
  And they have no downtime available for that.
 
  Multiple pools on one server only makes sense if you are going to have
  different RAS for each pool for business reasons.  It's a lot easier to
  have a single pool though.  I recommend it.

 Other scenarios for multiple pools include:

 - Need independent portability of data between servers.  For example,
 in a HA cluster environment, various workloads will be mapped to
 various pools.  Since ZFS does not do active-active clustering, a
 single pool for anything other than a simple active-standby cluster is
 not useful.

 Right, but normally each head in a cluster will have only one pool
 imported.

Not necessarily.  Suppose I have a group of servers with a bunch of
zones.  Each zone represents a service group that needs to
independently fail over between servers.  In that case, I may have a
zpool per zone.  It seems this is how it is done in the real world.[1]

1. Upton, Tom. "A Conversation with Jason Hoffman."  ACM Queue,
January/February 2008, p. 9.

 The Sun Storage 7xxx do this.  One pool per-head, two pools altogether
 in a cluster.

Makes sense for your use case.  If you are looking at a zpool per
zone, it is likely a zpool created on a LUN provided by a Sun Storage
7xxx that is presented to multiple hosts.  That is, ZFS on top of ZFS.

 - Array based copies are needed.  There are times when copies of data
 are performed at a storage array level to allow testing and support
 operations to happen on different spindles.  For example, in a
 consolidated database environment, each database may be constrained to
 a set of spindles so that each database can be replicated or copied
 independent of the various others.

 This gets you back into managing physical space allocation.  Do you
 really want that?  If you're using zvols you can do array based copies
 of you zvols.  If you're using filesystems then you should just use
 normal backup tools.

There are times when you have no real choice.  If a regulation or a
lawyer's interpretation of a regulation says that you need to have
physically separate components, you need to have physically separate
components.  If your disaster recovery requirements mean that you need
to have a copy of data at a different site, and array-based copies have
historically been used, it is unlikely that "while true; do zfs send
| ssh | zfs receive; done" will be adopted in the first round of
implementation.  Given this, zvols don't do it today.

When you have a smoking hole, the gap in transactions left by normal
backup tools is not always good enough - especially if some of that
smoke is coming from the tape library.  Array based replication tends
to allow you to keep much tighter tolerances on just how many
committed transactions you are willing to lose.

-- 
Mike Gerdts
http://mgerdts.blogspot.com/