How I do recursive, selective snapshot destroys:
http://blog.clockworm.com/2008/03/remove-old-zfs-snapshots.html
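The approach in the linked post can be sketched as a small shell filter: list snapshots, keep only the ones older than a cutoff, and feed those to zfs destroy. This is a minimal sketch; the dataset and snapshot names are hypothetical examples, and the actual zfs calls are left commented so only the date filter runs:

```shell
#!/bin/sh
# Select snapshots whose @YYYY-MM-DD suffix is older than a cutoff date.
# Lexical comparison works because the date format sorts chronologically.
cutoff="2009-02-01"

prune_candidates() {
  # Reads "dataset@YYYY-MM-DD" lines on stdin; prints those older than $1.
  awk -F'@' -v cutoff="$1" '$2 < cutoff { print }'
}

# In real use the list would come from zfs itself, roughly:
#   zfs list -H -t snapshot -o name | prune_candidates "$cutoff" \
#     | xargs -n1 zfs destroy
# Demo with hypothetical snapshot names:
printf '%s\n' \
  'tank/home@2009-01-15' \
  'tank/home@2009-02-10' \
  'tank/home@2008-12-31' | prune_candidates "$cutoff"
```

Anything the filter prints would be destroyed, so it is worth eyeballing the candidate list before adding the xargs step.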
Saturday, February 28, 2009, 10:14:20 PM, you wrote:
TW I would really add: make insane zfs destroy -r poolname as
TW harmless as zpool destroy poolname (recoverable)
Hello George,
Tuesday, March 3, 2009, 3:01:43 PM, you wrote:
GW Matthew Ahrens wrote:
pool-shrinking (and an option to shrink disk A when i want disk B to
become a mirror, but A is a few blocks bigger)
I'm working on it.
automated installgrub when mirroring an rpool
GW I'm working on
Hello Thomas,
Saturday, February 28, 2009, 10:14:20 PM, you wrote:
TW I would really add: make insane zfs destroy -r poolname as
TW harmless as zpool destroy poolname (recoverable)
TW zfs destroy -r poolname/filesystem
TW this should behave like that:
TW o snapshot the filesystem
Matthew Ahrens wrote:
Blake wrote:
zfs send is great for moving a filesystem with lots of tiny files,
since it just handles the blocks :)
I'd like to see:
pool-shrinking (and an option to shrink disk A when i want disk B to
become a mirror, but A is a few blocks bigger)
I'm working on it.
Richard Elling wrote:
David Magda wrote:
On Feb 27, 2009, at 18:23, C. Bergström wrote:
Blake wrote:
Care to share any of those in advance? It might be cool to see input
from listees and generally get some wheels turning...
raidz boot support in grub 2 is pretty high on my list to be honest..
George Wilson wrote:
Along these lines you can envision a restore tool that is capable of
reading multiple 'zfs send' streams to construct the various versions
of files which are available. In addition, it would be nice if the
tool could read in the streams and then make it easy to traverse
Just my $0.02, but would pool shrinking be the same as vdev evacuation?
I'm quite interested in vdev evacuation as an upgrade path for
multi-disk pools. This would be yet another reason for folks to use
ZFS at home (you only have to buy cheap disks), but it would also be a
good thing to have
Greg Mason wrote:
Just my $0.02, but would pool shrinking be the same as vdev evacuation?
Yes.
basically, what I'm thinking is:
zpool remove mypool list of devices/vdevs
Allow time for ZFS to vacate the vdev(s), and then light up the OK to
remove light on each evacuated disk.
That's the
excellent! i wasn't sure if that was the case, though i had heard rumors.
On Mon, Mar 2, 2009 at 12:36 PM, Matthew Ahrens matthew.ahr...@sun.com wrote:
Blake wrote:
zfs send is great for moving a filesystem with lots of tiny files,
since it just handles the blocks :)
I'd like to see:
David Magda wrote:
Given the threads that have appeared on this list lately, how about
codifying / standardizing the output of zfs send so that it can be
backed up to tape? :)
We will soon be changing the manpage to indicate that the zfs send stream
will be receivable on all future versions of ZFS.
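Given that compatibility promise, archiving a send stream to a file (or tape device) for a later receive is straightforward. Below is a minimal sketch of the command lines involved; the dataset, snapshot, and file names are hypothetical, and the commands are composed as strings here so the sketch itself is runnable without a pool:

```shell
#!/bin/sh
# Compose the zfs send command lines for a full and an incremental backup
# stream. Dataset/snapshot/file names are hypothetical examples.

full_send_cmd() {
  # $1 = snapshot, $2 = output file
  printf 'zfs send %s > %s\n' "$1" "$2"
}

incr_send_cmd() {
  # $1 = base snapshot, $2 = newer snapshot, $3 = output file
  printf 'zfs send -i %s %s > %s\n' "$1" "$2" "$3"
}

full_send_cmd tank/data@backup-20090302 /backup/tank-data.zsend
incr_send_cmd tank/data@backup-20090302 tank/data@backup-20090309 \
  /backup/tank-data-incr.zsend
# Restoring later (possibly on a newer ZFS version) would be:
#   zfs receive restorepool/data < /backup/tank-data.zsend
```

Note the caveat raised elsewhere in this thread: a stored stream is all-or-nothing on receive, which is part of why send/receive is not an enterprise backup solution on its own.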
Blake wrote:
zfs send is great for moving a filesystem with lots of tiny files,
since it just handles the blocks :)
I'd like to see:
pool-shrinking (and an option to shrink disk A when i want disk B to
become a mirror, but A is a few blocks bigger)
I'm working on it.
install to mirror
On Sat, Feb 28, 2009 at 09:45:12PM -0600, Mike Gerdts wrote:
On Sat, Feb 28, 2009 at 8:34 PM, Nicolas Williams
Right, but normally each head in a cluster will have only one pool
imported.
Not necessarily. Suppose I have a group of servers with a bunch of
zones. Each zone represents a
dm == David Magda dma...@ee.ryerson.ca writes:
dm Yes, in its current state; hopefully that will change at some
dm point in the future
I don't think it will or should. A replication tool and a backup tool
seem similar, but they're not similar enough.
With replication, you want an exact
ma == Matthew Ahrens matthew.ahr...@sun.com writes:
ma We will soon be changing the manpage to indicate that the zfs
ma send stream will be receivable on all future versions of ZFS.
still not a strong enough statement for this case:
old system new system
1. zfs send
cb == C Bergström cbergst...@netsyncro.com writes:
cb ideas for good zfs GSoC projects, but wanted to stir some
cb interest.
Read-only vdev support.
1. possibility to import a zpool on DVD. All filesystems within would
be read-only. DVD should be scrubbable: result would be a list
So nobody is interested in Raidz grow support? i.e. you have 4 disks in a
raidz and you only have room for a 5th disk (physically), so you add the 5th
disk to the raidz. It would be a great feature for a home server and it's the
only thing stopping solaris going on my home file server.
On Tue,
On Mar 2, 2009, at 19:31, David wrote:
So nobody is interested in Raidz grow support? i.e. you have 4 disks in a
raidz and you only have room for a 5th disk (physically), so you add the 5th
disk to the raidz. It would be a great feature for a home server and it's the
only thing stopping
On Mar 2, 2009, at 18:37, Miles Nordin wrote:
And I'm getting frustrated pointing out these issues for the 10th
time [...]
http://www.xkcd.com/386/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
On Sat, 28 Feb 2009 21:45:12 -0600, Mike Gerdts
mger...@gmail.com wrote:
On Sat, Feb 28, 2009 at 8:34 PM, Nicolas Williams
nicolas.willi...@sun.com wrote:
[snip]
Right, but normally each head in a cluster will have only one pool
imported.
Not necessarily. Suppose I have a group of servers
On 28 Feb 2009, at 07:26, C. Bergström wrote:
Blake wrote:
Gnome GUI for desktop ZFS administration
With the libzfs java bindings I am plotting a web based interface..
I'm not sure if that would meet this gnome requirement though..
Knowing specifically what you'd want to do in that
I for one would like an interactive attribute for zpools and
filesystems, specifically for destroy.
The existing behavior (no prompt) could be the default, but all
filesystems would inherit from the zpool's attrib. so I'd only
need to set interactive=on for the pool itself, not for each
Shrinking pools would also solve the right-sizing dilemma.
Sent from my iPhone
On Feb 28, 2009, at 3:37 AM, Joe Esposito j...@j-espo.com wrote:
I'm using opensolaris and zfs at my house for my photography storage
as well as for an offsite backup location for my employer and several
side web
On Sat, Feb 28, 2009 at 1:20 AM, Richard Elling
richard.ell...@gmail.com wrote:
David Magda wrote:
On Feb 27, 2009, at 20:02, Richard Elling wrote:
At the risk of repeating the Best Practices Guide (again):
The zfs send and receive commands do not provide an enterprise-level
backup solution.
I'm using opensolaris and zfs at my house for my photography storage
as well as for an offsite backup location for my employer and several
side web projects.
I have an 80g drive as my root drive. I recently took possession of 2
74g 10k drives which I'd love to add as a mirror to replace the 80 g
On Sat, Feb 28, 2009 at 8:31 AM, casper@sun.com wrote:
I'm using opensolaris and zfs at my house for my photography storage
as well as for an offsite backup location for my employer and several
side web projects.
I have an 80g drive as my root drive. I recently took possession of 2
74g
On Sat, Feb 28, 2009 at 8:28 AM, Joe Esposito j...@j-espo.com wrote:
On Sat, Feb 28, 2009 at 8:31 AM, casper@sun.com wrote:
I'm using opensolaris and zfs at my house for my photography storage
as well as for an offsite backup location for my employer and several
side web projects.
On Sat, 28 Feb 2009, Tim wrote:
That's not entirely true. Maybe it will put it to shame at streaming
sequential I/O. The 10k drives will still wipe the floor with any modern
7200rpm drive for random IO and seek times.
Or perhaps streaming sequential I/O will have similar performance,
with
pool-shrinking (and an option to shrink disk A when i want disk B to
become a mirror, but A is a few blocks bigger)
This may be interesting... I'm not sure how often you need to shrink a pool
though? Could this be classified more as a Home or SME level feature?
Enterprise level
I would really add: make insane zfs destroy -r poolname as harmless as
zpool destroy poolname (recoverable)
zfs destroy -r poolname/filesystem
this should behave like that:
o snapshot the filesystem to be deleted (each, name it
@deletedby_operatorname_date)
o hide the snapshot
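The proposed behavior can be sketched as a wrapper around the destroy. This is only an illustration of the idea, not an existing zfs feature: the wrapper name and the @deletedby_operator_date naming scheme are hypothetical, and the destructive calls are echoed rather than executed:

```shell
#!/bin/sh
# Sketch of the proposed "recoverable destroy": take and keep a safety
# snapshot before destroying, so an operator mistake can be undone.
# safe_destroy and the naming scheme are illustrative only.

safety_snapname() {
  # Builds a dataset@deletedby_<operator>_<date> snapshot name.
  printf '%s@deletedby_%s_%s\n' "$1" "$2" "$3"
}

safe_destroy() {
  fs="$1"
  snap=$(safety_snapname "$fs" "${USER:-unknown}" "$(date +%Y%m%d)")
  # Dry run: echo instead of executing the destructive commands.
  echo "would run: zfs snapshot $snap"
  echo "would run: zfs destroy -r $fs"
}

# Demo of the naming scheme with fixed, hypothetical arguments:
safety_snapname tank/projects alice 20090228
```

Real recoverability would also need the "hide the snapshot" step from the proposal, since a plain snapshot of a destroyed filesystem cannot survive the destroy; that part has no existing command to sketch against.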
On Sat, Feb 28, 2009 at 10:44:59PM +0100, Thomas Wagner wrote:
pool-shrinking (and an option to shrink disk A when i want disk B to
become a mirror, but A is a few blocks bigger)
This may be interesting... I'm not sure how often you need to shrink a
pool though? Could this be
On Sat, Feb 28, 2009 at 4:33 PM, Nicolas Williams
nicolas.willi...@sun.com wrote:
On Sat, Feb 28, 2009 at 10:44:59PM +0100, Thomas Wagner wrote:
pool-shrinking (and an option to shrink disk A when i want disk B to
become a mirror, but A is a few blocks bigger)
This may be interesting...
Absolutely agree. I'd love to be able to free up some LUNs that I
don't need in the pool any more.
Also, concatenation of devices in a zpool would be great for devices
that have LUN limits. It also seems like it may be an easy thing to
implement.
-Aaron
On 2/28/09, Thomas Wagner
Multiple pools on one server only makes sense if you are going to have
different RAS for each pool for business reasons. It's a lot easier to
have a single pool though. I recommend it.
A couple of other things to consider to go with that recommendation.
- never build a pool larger than you
On Sat, Feb 28, 2009 at 05:19:26PM -0600, Mike Gerdts wrote:
On Sat, Feb 28, 2009 at 4:33 PM, Nicolas Williams
nicolas.willi...@sun.com wrote:
On Sat, Feb 28, 2009 at 10:44:59PM +0100, Thomas Wagner wrote:
pool-shrinking (and an option to shrink disk A when i want disk B to
become a
On Sat, Feb 28, 2009 at 8:34 PM, Nicolas Williams
nicolas.willi...@sun.com wrote:
On Sat, Feb 28, 2009 at 05:19:26PM -0600, Mike Gerdts wrote:
On Sat, Feb 28, 2009 at 4:33 PM, Nicolas Williams
nicolas.willi...@sun.com wrote:
On Sat, Feb 28, 2009 at 10:44:59PM +0100, Thomas Wagner wrote:
Blake wrote:
Care to share any of those in advance? It might be cool to see input
from listees and generally get some wheels turning...
raidz boot support in grub 2 is pretty high on my list to be honest..
Which brings up another question of where is the raidz stuff mostly?
RaidZ grow support
On Fri, Feb 27, 2009 at 11:23 PM, C. Bergström
cbergst...@netsyncro.comwrote:
Blake wrote:
Care to share any of those in advance? It might be cool to see input
from listees and generally get some wheels turning...
raidz boot support in grub 2 is pretty high on my
On Feb 27, 2009, at 18:23, C. Bergström wrote:
Blake wrote:
Care to share any of those in advance? It might be cool to see input
from listees and generally get some wheels turning...
raidz boot support in grub 2 is pretty high on my list to be honest..
Which brings up another question of
David Magda wrote:
On Feb 27, 2009, at 18:23, C. Bergström wrote:
Blake wrote:
Care to share any of those in advance? It might be cool to see input
from listees and generally get some wheels turning...
raidz boot support in grub 2 is pretty high on my list to be honest..
Which brings up
zfs send is great for moving a filesystem with lots of tiny files,
since it just handles the blocks :)
I'd like to see:
pool-shrinking (and an option to shrink disk A when i want disk B to
become a mirror, but A is a few blocks bigger)
install to mirror from the liveCD gui
zfs recovery tools
Gnome GUI for desktop ZFS administration
On Fri, Feb 27, 2009 at 9:13 PM, Blake blake.ir...@gmail.com wrote:
zfs send is great for moving a filesystem with lots of tiny files,
since it just handles the blocks :)
I'd like to see:
pool-shrinking (and an option to shrink disk A when i want
On Feb 27, 2009, at 20:02, Richard Elling wrote:
It wouldn't help. zfs send is a data stream which contains parts of files,
not files (in the usual sense), so there is no real way to take a send
stream and extract a file, other than by doing a receive.
If you create a non-incremental
David Magda wrote:
On Feb 27, 2009, at 20:02, Richard Elling wrote:
It wouldn't help. zfs send is a data stream which contains parts of files,
not files (in the usual sense), so there is no real way to take a send
stream and extract a file, other than by doing a receive.
If you create a
Blake wrote:
Gnome GUI for desktop ZFS administration
On Fri, Feb 27, 2009 at 9:13 PM, Blake blake.ir...@gmail.com wrote:
zfs send is great for moving a filesystem with lots of tiny files,
since it just handles the blocks :)
I'd like to see:
pool-shrinking (and an option to shrink
Blake wrote:
Gnome GUI for desktop ZFS administration
With the libzfs java bindings I am plotting a web based interface.. I'm
not sure if that would meet this gnome requirement though.. Knowing
specifically what you'd want to do in that interface would be good.. I
planned to compare it to
Care to share any of those in advance? It might be cool to see input
from listees and generally get some wheels turning...
On Wed, Feb 25, 2009 at 4:39 AM, C. Bergström
cbergst...@netsyncro.com wrote:
Hi everyone.
I've got a couple ideas for good zfs GSoC projects, but wanted to stir some
Hi everyone.
I've got a couple ideas for good zfs GSoC projects, but wanted to stir
some interest. Anyone interested to help mentor? The deadline is
around the corner so if planning hasn't happened yet it should start
soon. If there is interest who would the org administrator be?
Thanks
C. Bergström wrote:
Hi everyone.
I've got a couple ideas for good zfs GSoC projects, but wanted to stir
some interest. Anyone interested to help mentor? The deadline is
around the corner so if planning hasn't happened yet it should start
soon. If there is interest who would the org