Re: [zfs-discuss] shrinking a zpool - roadmap

2010-02-23 Thread Robert Milkowski

On 23/02/2010 17:20, Richard Elling wrote:
> On Feb 23, 2010, at 5:10 AM, Robert Milkowski wrote:
>> On 23/02/2010 02:52, Richard Elling wrote:
>>> On Feb 22, 2010, at 6:42 PM, Charles Hedrick wrote:
>>>> I talked with our enterprise systems people recently. I don't believe they'd
>>>> consider ZFS until it's more flexible. Shrink is a big one, as is removing a
>>>> slog. We also need to be able to expand a raidz, possibly by striping it with
>>>> a second one and then rebalancing the sizes.
>>> So what file system do they use that has all of these features? :-P
>> VxVM + VxFS?
> I did know they still cost $$$, but I didn't know they implemented a slog :-P

you got me! :)
I missed the reference to a slog.

--
Robert Milkowski
http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] shrinking a zpool - roadmap

2010-02-23 Thread Richard Elling
On Feb 23, 2010, at 5:10 AM, Robert Milkowski wrote:
> On 23/02/2010 02:52, Richard Elling wrote:
>> On Feb 22, 2010, at 6:42 PM, Charles Hedrick wrote:
>>> I talked with our enterprise systems people recently. I don't believe
>>> they'd consider ZFS until it's more flexible. Shrink is a big one, as is
>>> removing a slog. We also need to be able to expand a raidz, possibly by
>>> striping it with a second one and then rebalancing the sizes.
>> So what file system do they use that has all of these features? :-P
> VxVM + VxFS?

I did know they still cost $$$, but I didn't know they implemented a slog :-P
 -- richard

ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
http://nexenta-atlanta.eventbrite.com (March 16-18, 2010)






Re: [zfs-discuss] shrinking a zpool - roadmap

2010-02-23 Thread Robert Milkowski

On 23/02/2010 02:52, Richard Elling wrote:
> On Feb 22, 2010, at 6:42 PM, Charles Hedrick wrote:
>> I talked with our enterprise systems people recently. I don't believe they'd
>> consider ZFS until it's more flexible. Shrink is a big one, as is removing a
>> slog. We also need to be able to expand a raidz, possibly by striping it with
>> a second one and then rebalancing the sizes.
> So what file system do they use that has all of these features? :-P

VxVM + VxFS?

--
Robert Milkowski
http://milek.blogspot.com



Re: [zfs-discuss] shrinking a zpool - roadmap

2010-02-22 Thread Richard Elling
On Feb 22, 2010, at 6:42 PM, Charles Hedrick wrote:
> I talked with our enterprise systems people recently. I don't believe they'd 
> consider ZFS until it's more flexible. Shrink is a big one, as is removing an 
> slog. We also need to be able to expand a raidz, possibly by striping it with 
> a second one and then rebalancing the sizes.

So what file system do they use that has all of these features? :-P
 -- richard







Re: [zfs-discuss] shrinking a zpool - roadmap

2010-02-22 Thread Charles Hedrick
I talked with our enterprise systems people recently. I don't believe they'd 
consider ZFS until it's more flexible. Shrink is a big one, as is removing a 
slog. We also need to be able to expand a raidz, possibly by striping it with a 
second one and then rebalancing the sizes.
-- 
This message posted from opensolaris.org


Re: [zfs-discuss] shrinking a zpool - roadmap

2010-02-21 Thread Ralf Gans
Hello out there,

is there any progress in shrinking zpools?
i.e. removing vdevs from a pool?

Cheers,

Ralf


Re: [zfs-discuss] Shrinking a zpool?

2009-08-07 Thread Cindy . Swearingen

Hey Richard,

I believe 6844090 would be a candidate for an s10 backport.

The behavior of 6844090 worked nicely when I replaced a disk of the same
physical size even though the disks were not identical.

Another flexible storage feature is George's autoexpand property (Nevada
build 117), where you can attach or replace a disk in a pool with a LUN
that is larger than the existing size of the pool, but you can
keep the LUN size constrained with autoexpand set to off.

Then, if you decide that you want to use the expanded LUN, you can set
autoexpand to on, or you can just detach it to use in another pool where 
you need the expanded size.


(The autoexpand feature description is in the ZFS Admin Guide on the
opensolaris/...zfs/docs site.)
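A minimal sketch of that workflow (the pool name "tank" and the device
names are placeholders, and the exact growth behavior may vary by build):

```shell
# Replace a disk with a larger LUN while keeping the pool size constrained
zpool set autoexpand=off tank
zpool replace tank c0t0d0 c0t1d0   # c0t1d0 is the larger LUN

# Later, if you decide to use the extra capacity:
zpool set autoexpand=on tank       # pool grows to the new LUN size
zpool list tank                    # SIZE now reflects the larger LUN
```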

Contrasting the autoexpand behavior to current Solaris 10 releases, I
noticed recently that you can use zpool attach/detach to attach a larger
disk for eventual replacement purposes and the pool size is expanded
automatically, even on a live root pool, without the autoexpand feature
and no import/export/reboot is needed. (Well, I always reboot to see if
the new disk will boot before detaching the existing disk.)

I did this recently to expand a 16-GB root pool to a 68-GB root pool.
See the example below.

Cindy

# zpool list
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
rpool  16.8G  5.61G  11.1G  33%  ONLINE  -
# zpool status
   pool: rpool
  state: ONLINE
  scrub: none requested
config:

 NAME         STATE     READ WRITE CKSUM
 rpool        ONLINE       0     0     0
   c1t18d0s0  ONLINE       0     0     0

errors: No known data errors
# zpool attach rpool c1t18d0s0 c1t1d0s0
# zpool status rpool
   pool: rpool
  state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
 continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scrub: resilver in progress for 0h3m, 51.35% done, 0h3m to go
config:

 NAME           STATE     READ WRITE CKSUM
 rpool          ONLINE       0     0     0
   mirror       ONLINE       0     0     0
     c1t18d0s0  ONLINE       0     0     0
     c1t1d0s0   ONLINE       0     0     0
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0

# init 0
# zpool list
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
rpool  16.8G  5.62G  11.1G  33%  ONLINE  -
# zpool status
   pool: rpool
  state: ONLINE
  scrub: none requested
config:

 NAME           STATE     READ WRITE CKSUM
 rpool          ONLINE       0     0     0
   mirror       ONLINE       0     0     0
     c1t18d0s0  ONLINE       0     0     0
     c1t1d0s0   ONLINE       0     0     0

errors: No known data errors
# zpool detach rpool c1t18d0s0
# zpool list
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
rpool  68.2G  5.62G  62.6G   8%  ONLINE  -
# cat /etc/release
Solaris 10 5/09 s10s_u7wos_08 SPARC
Copyright 2009 Sun Microsystems, Inc.  All Rights Reserved.
 Use is subject to license terms.
  Assembled 30 March 2009



On 08/05/09 17:20, Richard Elling wrote:

On Aug 5, 2009, at 4:06 PM, cindy.swearin...@sun.com wrote:


Brian,

CR 4852783 was updated again this week so you might add yourself or
your customer to continue to be updated.

In the meantime, a reminder is that a mirrored ZFS configuration
is flexible in that devices can be detached (as long as the redundancy
is not compromised) or replaced as long as the replacement disk is an
equivalent size or larger. So, you can move storage around if you
need to in a mirrored ZFS config until 4852783 integrates.



Thanks Cindy,
This is another way to skin the cat. It works for simple volumes, too.
But there are some restrictions, which could impact the operation when a
large change in vdev size is needed. Is this planned to be backported
to Solaris 10?

CR 6844090 has more details.
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6844090
  -- richard






Re: [zfs-discuss] Shrinking a zpool?

2009-08-06 Thread Bob Friesenhahn

On Fri, 7 Aug 2009, Henrik Johansson wrote:

> "We're already looking forward to the next release due in 2010. Look out for
> great new features like an interactive installation for SPARC, the ability to
> install packages directly from the repository during the install, offline IPS
> support, a new version of the GNOME desktop, ZFS deduplication and user
> quotas, cloud integration and plenty more! As always, you can follow active
> development by adding the dev/ repository."

Clearly I was wrong and the ZFS deduplication announcement *is* as 
concrete as Apple's announcement of zfs support in Snow Leopard 
Server.

Sorry about that.

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


Re: [zfs-discuss] Shrinking a zpool?

2009-08-06 Thread Henrik Johansson


On 6 aug 2009, at 23.52, Bob Friesenhahn wrote:
> I still have not seen any formal announcement from Sun regarding
> deduplication.  Everything has been based on remarks from code
> developers.




To be fair, the official "what's new" document for 2009.06 states that  
dedup will be part of the next OSOL release in 2010. Or at least that  
we should "look out" for it ;)
"We're already looking forward to the next release due in 2010. Look  
out for great new features like an interactive installation for SPARC,  
the ability to install packages directly from the repository during  
the install, offline IPS support, a new version of the GNOME desktop,  
ZFS deduplication and user quotas, cloud integration and plenty more!  
As always, you can follow active development by adding the dev/  
repository."



Henrik
http://sparcv9.blogspot.com



Re: [zfs-discuss] Shrinking a zpool?

2009-08-06 Thread Mattias Pantzare
On Thu, Aug 6, 2009 at 16:59, Ross wrote:
> But why do you have to attach to a pool?  Surely you're just attaching to
> the root filesystem anyway?  And as Richard says, since filesystems can be
> shrunk easily and it's just as easy to detach a filesystem from one machine
> and attach to it from another, why the emphasis on pools?

What filesystems are you talking about?
A zfs pool can be "attached" to one and only one computer at any given time.
All file systems in that pool are "attached" to the same computer.

>
> For once I'm beginning to side with Richard, I just don't understand why
> data has to be in separate pools to do this.

All accounting for data and free blocks is done at the pool level.
That is why you can share space between file systems. You could write
code that made ZFS a cluster file system, maybe just for the pool, but
that is a lot of work and would require all attached computers to talk
to each other.


Re: [zfs-discuss] Shrinking a zpool?

2009-08-06 Thread Bob Friesenhahn

On Thu, 6 Aug 2009, Nigel Smith wrote:

> I guess it depends on the rate of progress of ZFS compared to say btrfs.

Btrfs is still an infant whereas zfs is now into adolescence.

> I would say that maybe Sun should have held back on
> announcing the work on deduplication, as it just seems to

I still have not seen any formal announcement from Sun regarding 
deduplication.  Everything has been based on remarks from code 
developers.

It is not as concrete and definite as Apple's announcement of zfs 
inclusion in Snow Leopard Server.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


Re: [zfs-discuss] Shrinking a zpool?

2009-08-06 Thread Nigel Smith
Hi Darren,

Darren J Moffat wrote:
> That is no different to the vast majority of Open Source projects
> either. Open Source and Open Development usually don't give you access
> to individuals work in progress.

Yes, that's true. But there are more 'open' models for running
an open source project.

For instance, Sun's project for the Comstar iscsi target:

  http://www.opensolaris.org/os/project/iser/

...where there was an open mailing list, where you
could see the developers making progress:

  http://mail.opensolaris.org/pipermail/iser-dev/

Best Regards
Nigel Smith


Re: [zfs-discuss] Shrinking a zpool?

2009-08-06 Thread Nigel Smith
Bob Friesenhahn wrote:
> Sun has placed themselves in the interesting predicament that being 
> open about progress on certain high-profile "enterprise" features 
> (such as shrink and de-duplication) could cause them to lose sales to 
> a competitor.  Perhaps this is a reason why Sun is not nearly as open 
> as we would like them to be.

I agree that it is difficult for Sun, at this time, to 
be more 'open', especially for ZFS, as we still await the resolution
of Oracle purchasing Sun, the court case with NetApp over patents,
and now the GreenBytes issue!

But I would say they are more likely to avoid losing sales
by confirming which enhancements they are prioritising.
I think people will wait if they know work is being done,
and progress being made, although not indefinitely.

I guess it depends on the rate of progress of ZFS compared to say btrfs.

I would say that maybe Sun should have held back on
announcing the work on deduplication, as it just seems to
have ramped up frustration now that no more news is
forthcoming. It's easy to be wise after the event,
and time will tell.

Thanks
Nigel Smith


Re: [zfs-discuss] Shrinking a zpool?

2009-08-06 Thread Ian Collins

Greg Mason wrote:



What is the downtime for doing a send/receive? What is the downtime
for zpool export, reconfigure LUN, zpool import?
  
We have a similar situation. Our home directory storage is based on 
many X4540s. Currently, we use rsync to migrate volumes between 
systems, but our process could very easily be switched over to zfs 
send/receive (and very well may be in the near future).


What this looks like, if using zfs send/receive, is we perform an 
initial send (to get the bulk of the data over), and then at a planned 
downtime, do an incremental send to "catch up" the destination. This 
"catch up" phase is usually a very small fraction of the overall size 
of the volume. The only downtime required runs from just before the 
final snapshot you send (the last incremental) until the send finishes 
and you turn up whatever service(s) on the destination system. If the 
filesystem has a lot of write activity, you can run multiple 
incrementals to decrease the size of that last snapshot. As far as 
backing out goes, you can simply destroy the destination filesystem 
and continue running on the original system if all hell breaks loose 
(of course that never happens, right? :)


That is how I migrate services (zones) and their data between hosts with 
one of my clients.  The big advantage of zfs send/receive over rsync is 
the final replication is very fast.  Run a send/receive just before the 
migration, then top up after the service shuts down.  The last one we 
moved was a mail server with 1TB of small files and the downtime was 
under 2 minutes.  The biggest delay was sending the "start" and "done" 
text messages!


When everything checks out (which you can safely assume when the recv 
finishes, thanks to how ZFS send/recv works), you then just have to 
destroy the original filesystem. It is correct that this doesn't 
shrink the pool, but it's at least a workaround to be able to swing 
filesystems around to different systems. If you had only one 
filesystem in the pool, you could then safely destroy the original 
pool. This does mean you'd need 2x the size of the LUN during the 
transfer though.


For replication of ZFS filesystems, we use a similar process, with just 
a lot of incremental sends.

Same here.

--
Ian.



Re: [zfs-discuss] Shrinking a zpool?

2009-08-06 Thread Richard Elling


On Aug 6, 2009, at 7:59 AM, Ross wrote:

> But why do you have to attach to a pool?  Surely you're just attaching to
> the root filesystem anyway?  And as Richard says, since filesystems can be
> shrunk easily and it's just as easy to detach a filesystem from one machine
> and attach to it from another, why the emphasis on pools?
>
> For once I'm beginning to side with Richard, I just don't understand why
> data has to be in separate pools to do this.


welcome to the dark side... bwahahahaa :-)

The way I've always done such migrations in the past is to get everything 
ready in parallel, then restart the service pointing to the new data.  The 
cost is a tiny bit of downtime and a restart, which isn't a big deal for 
most modern system architectures.  If you have a high availability cluster, 
just add it to the list of things to do when you do a 
weekly/monthly/quarterly failover.

Now, if I was to work in a shrink, I would do the same, because shrinking 
moves data and moving data is risky. Perhaps someone could explain how 
they do a rollback from a shrink? Snapshots?

I think the problem at the example company is that they make storage so 
expensive that the (internal) customers spend way too much time and money 
trying to figure out how to optimally use it. The storage market is 
working against this model by reducing the capital cost of storage. ZFS is 
tackling many of the costs related to managing storage. Clearly, there is 
still work to be done, but the tide is going out and will leave expensive 
storage solutions high and dry.


Consider how different the process would be as the total cost of storage 
approaches zero. Would shrink need to exist? The answer is probably no. 
But the way shrink is being solved in ZFS has another application. 
Operators can still make mistakes with "add" vs "attach", so the ability 
to remove a top-level vdev is needed. Once this is solved, shrink is also 
solved.
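The "add" vs "attach" slip is easy to reproduce; a hedged sketch (pool and
device names are made up for illustration):

```shell
# Intent: mirror c0t0d0 with c0t1d0.
zpool attach tank c0t0d0 c0t1d0   # correct: turns c0t0d0 into a mirror

# Common mistake: "add" instead of "attach" creates a new top-level
# vdev, striping the pool across both disks instead of mirroring it:
zpool add tank c0t1d0             # cannot currently be undone -- removing
                                  # a top-level vdev is exactly the
                                  # device-removal work discussed here
```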
 -- richard



Re: [zfs-discuss] Shrinking a zpool?

2009-08-06 Thread Ross
But why do you have to attach to a pool?  Surely you're just attaching to the 
root filesystem anyway?  And as Richard says, since filesystems can be shrunk 
easily and it's just as easy to detach a filesystem from one machine and attach 
to it from another, why the emphasis on pools?

For once I'm beginning to side with Richard, I just don't understand why data 
has to be in separate pools to do this.

The only argument I can think of is for performance since pools use completely 
separate sets of disks.  I don't know if zfs offers a way to throttle 
filesystems, but surely that could be managed at the network interconnect level?

I have to say that I have no experience of enterprise class systems; these 
questions are purely me playing devil's advocate as I learn :)


Re: [zfs-discuss] Shrinking a zpool?

2009-08-06 Thread Bob Friesenhahn

On Thu, 6 Aug 2009, Cyril Plisko wrote:

> May I suggest using this forum (zfs-discuss) to periodically report
> the progress?  Chances are that most of the people waiting for this
> feature are reading this list.

Sun has placed themselves in the interesting predicament that being 
open about progress on certain high-profile "enterprise" features 
(such as shrink and de-duplication) could cause them to lose sales to 
a competitor.  Perhaps this is a reason why Sun is not nearly as open 
as we would like them to be.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


Re: [zfs-discuss] Shrinking a zpool?

2009-08-06 Thread Greg Mason



> What is the downtime for doing a send/receive? What is the downtime
> for zpool export, reconfigure LUN, zpool import?
We have a similar situation. Our home directory storage is based on many 
X4540s. Currently, we use rsync to migrate volumes between systems, but 
our process could very easily be switched over to zfs send/receive (and 
very well may be in the near future).


What this looks like, if using zfs send/receive, is we perform an 
initial send (to get the bulk of the data over), and then at a planned 
downtime, do an incremental send to "catch up" the destination. This 
"catch up" phase is usually a very small fraction of the overall size of 
the volume. The only downtime required runs from just before the final 
snapshot you send (the last incremental) until the send finishes and you 
turn up whatever service(s) on the destination system. If the filesystem 
has a lot of write activity, you can run multiple incrementals to 
decrease the size of that last snapshot. As far as backing out goes, you 
can simply destroy the destination filesystem and continue running on 
the original system if all hell breaks loose (of course that never 
happens, right? :)
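That process can be sketched as commands (dataset names, snapshot names, and
the ssh transport are placeholders, not our actual setup):

```shell
# 1. Initial bulk send while the service is still running
zfs snapshot tank/home@mig1
zfs send tank/home@mig1 | ssh desthost zfs receive pool2/home

# 2. At the planned downtime: stop the service, take a final snapshot,
#    and send only the small increment accumulated since mig1
zfs snapshot tank/home@mig2
zfs send -i tank/home@mig1 tank/home@mig2 | ssh desthost zfs receive pool2/home

# 3. Bring the service up on desthost. To back out, destroy pool2/home
#    and keep running on the original system.
```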


When everything checks out (which you can safely assume when the recv 
finishes, thanks to how ZFS send/recv works), you then just have to 
destroy the original filesystem. It is correct that this doesn't 
shrink the pool, but it's at least a workaround to be able to swing 
filesystems around to different systems. If you had only one filesystem 
in the pool, you could then safely destroy the original pool. This does 
mean you'd need 2x the size of the LUN during the transfer though.


For replication of ZFS filesystems, we use a similar process, with just 
a lot of incremental sends.


Greg Mason
System Administrator
High Performance Computing Center
Michigan State University


Re: [zfs-discuss] Shrinking a zpool?

2009-08-06 Thread Brian Kolaci


On Aug 6, 2009, at 5:36 AM, Ian Collins  wrote:


Brian Kolaci wrote:


They understand the technology very well.  Yes, ZFS is very  
flexible with many features, and most are not needed in an  
enterprise environment where they have high-end SAN storage that is  
shared between Sun, IBM, linux, VMWare ESX and Windows.  Local disk  
is only for the OS image.  There is no need to have an M9000 be a  
file server.  They have NAS for that.  They use SAN across the  
enterprise and it gives them the ability to fail-over to servers in  
other data centers very quickly.


Different business groups cannot share the same pool for many  
reasons.  Each business group pays for their own storage.  There  
are legal issues as well, and in fact cannot have different  
divisions on the same frame let alone shared storage.  But they're  
in a major virtualization push to the point that nobody will be  
allowed to be on their own physical box.  So the big push is to  
move to VMware, and we're trying to salvage as much as we can to  
move them to containers and LDoms.  That being the case, I've  
recommended that each virtual machine on either a container or LDom  
should be allocated their own zpool, and the zonepath or LDom disk  
image be on their own zpool.  This way when (not if) they need to  
migrate to another system, they have one pool to move over.  They  
use fixed sized LUNs, so the granularity is a 33GB LUN, which can  
be migrated.  This is also the case for their clusters as well as  
SRDF to their COB machines.


If they accept virtualisation, why can't they use individual  
filesystems (or zvol) rather than pools?  What advantage do  
individual pools have over filesystems?  I'd have thought the main  
disadvantage of pools is storage flexibility requires pool shrink,  
something ZFS provides at the filesystem (or zvol) level.


--
Ian.



For failover scenarios you need a pool per application, so they can  
move the application between servers which may be in different  
datacenters, and each app on one server can fail over to a different  
server. So the storage needs to be partitioned as such. The failover  
entails moving or rerouting SAN.


Re: [zfs-discuss] Shrinking a zpool?

2009-08-06 Thread Darren J Moffat

Ross wrote:

> But with export / import, are you really saying that you're going to
> physically move 100GB of disks from one system to another?

zpool export/import would not move anything on disk.  It just changes 
which host the pool is attached to.  This is exactly how cluster 
failover works in the SS7000 systems.


--
Darren J Moffat


Re: [zfs-discuss] Shrinking a zpool?

2009-08-06 Thread Ross
But with export / import, are you really saying that you're going to physically 
move 100GB of disks from one system to another?


Re: [zfs-discuss] Shrinking a zpool?

2009-08-06 Thread Darren J Moffat

Nigel Smith wrote:

> Hi Matt
> Thanks for this update, and the confirmation
> to the outside world that this problem is being actively
> worked on with significant resources.
>
> But I would like to support Cyril's comment.
>
> AFAIK, any updates you are making to bug 4852783 are not
> available to the outside world via the normal bug URL.
> It would be useful if we were able to see them.
>
> I think it is frustrating for the outside world that
> it cannot see Sun's internal source code repositories
> for work in progress, and only see the code when it is
> complete and pushed out.

That is no different to the vast majority of Open Source projects 
either.  Open Source and Open Development usually don't give you access 
to individuals' work in progress.


Compare this to Linux kernel development: you usually don't get to see 
partially implemented drivers or changes until they are requesting 
integration into the kernel.


--
Darren J Moffat


Re: [zfs-discuss] Shrinking a zpool?

2009-08-06 Thread Mattias Pantzare
On Thu, Aug 6, 2009 at 12:45, Ian Collins wrote:
> Mattias Pantzare wrote:
>>> If they accept virtualisation, why can't they use individual filesystems
>>> (or zvol) rather than pools?  What advantage do individual pools have
>>> over filesystems?  I'd have thought the main disadvantage of pools is
>>> storage flexibility requires pool shrink, something ZFS provides at the
>>> filesystem (or zvol) level.
>>
>> You can move zpools between computers, you can't move individual file
>> systems.
>
> send/receive?

:-)
What is the downtime for doing a send/receive? What is the downtime
for zpool export, reconfigure LUN, zpool import?

And you still need to shrink the pool.

Move a 100GB application from server A to server B using send/receive
and you will have 100GB stuck on server A that you can't use on server
B, where you really need it.
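For comparison, the export/import path looks roughly like this (pool and
host names are placeholders; the data itself never moves, only ownership):

```shell
# On server A: stop the application, then release the pool
zpool export apppool

# Re-present the pool's LUNs to server B on the SAN, then on server B:
zpool import apppool    # pool and all its filesystems attach to B

# Downtime is roughly service stop/start plus the import itself,
# versus a full (or incremental) copy with send/receive -- and no
# space is left stranded on server A.
```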


Re: [zfs-discuss] Shrinking a zpool?

2009-08-06 Thread Ian Collins

Mattias Pantzare wrote:
>> If they accept virtualisation, why can't they use individual filesystems
>> (or zvol) rather than pools?  What advantage do individual pools have over
>> filesystems?  I'd have thought the main disadvantage of pools is storage
>> flexibility requires pool shrink, something ZFS provides at the filesystem
>> (or zvol) level.
>
> You can move zpools between computers, you can't move individual file
> systems.

send/receive?

--
Ian.



Re: [zfs-discuss] Shrinking a zpool?

2009-08-06 Thread Nigel Smith
Hi Matt
Thanks for this update, and the confirmation
to the outside world that this problem is being actively
worked on with significant resources.

But I would like to support Cyril's comment.

AFAIK, any updates you are making to bug 4852783 are not
available to the outside world via the normal bug URL.
It would be useful if we were able to see them.

I think it is frustrating for the outside world that
it cannot see Sun's internal source code repositories
for work in progress, and only see the code when it is
complete and pushed out.

And so there is no way to judge what progress is being made,
or to actively help with code reviews or testing.

Best Regards
Nigel Smith


Re: [zfs-discuss] Shrinking a zpool?

2009-08-06 Thread Mattias Pantzare
> If they accept virtualisation, why can't they use individual filesystems (or
> zvol) rather than pools?  What advantage do individual pools have over
> filesystems?  I'd have thought the main disadvantage of pools is storage
> flexibility requires pool shrink, something ZFS provides at the filesystem
> (or zvol) level.

You can move zpools between computers, you can't move individual file systems.

Remember that there is a SAN involved. The disk array does not run Solaris.


Re: [zfs-discuss] Shrinking a zpool?

2009-08-06 Thread Ian Collins

Brian Kolaci wrote:


They understand the technology very well.  Yes, ZFS is very flexible 
with many features, and most are not needed in an enterprise 
environment where they have high-end SAN storage that is shared 
between Sun, IBM, linux, VMWare ESX and Windows.  Local disk is only 
for the OS image.  There is no need to have an M9000 be a file 
server.  They have NAS for that.  They use SAN across the enterprise 
and it gives them the ability to fail-over to servers in other data 
centers very quickly.


Different business groups cannot share the same pool for many 
reasons.  Each business group pays for their own storage.  There are 
legal issues as well, and in fact cannot have different divisions on 
the same frame let alone shared storage.  But they're in a major 
virtualization push to the point that nobody will be allowed to be on 
their own physical box.  So the big push is to move to VMware, and 
we're trying to salvage as much as we can to move them to containers 
and LDoms.  That being the case, I've recommended that each virtual 
machine on either a container or LDom should be allocated their own 
zpool, and the zonepath or LDom disk image be on their own zpool.  
This way when (not if) they need to migrate to another system, they 
have one pool to move over.  They use fixed sized LUNs, so the 
granularity is a 33GB LUN, which can be migrated.  This is also the 
case for their clusters as well as SRDF to their COB machines.


If they accept virtualisation, why can't they use individual filesystems 
(or zvol) rather than pools?  What advantage do individual pools have 
over filesystems?  I'd have thought the main disadvantage of pools is 
storage flexibility requires pool shrink, something ZFS provides at the 
filesystem (or zvol) level.


--
Ian.



Re: [zfs-discuss] Shrinking a zpool?

2009-08-06 Thread Cyril Plisko
>
> It is unfortunately a very difficult problem, and will take some time to
> solve even with the application of all possible resources (including the
> majority of my time).  We are updating CR 4852783 at least once a month with
> progress reports.

Matt,

should these progress reports be visible via [1] ?

Right now it doesn't seem to be available. Moreover, it says the last
update was 6-May-2009.

May I suggest using this forum (zfs-discuss) to periodically report
the progress?
Chances are that most of the people waiting for this feature are reading this list.



[1] http://bugs.opensolaris.org/view_bug.do?bug_id=4852783

-- 
Regards,
Cyril


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Ross
And along those lines, why stop at SSDs?  Get ZFS shrink working, and Sun 
could release a set of upgrade kits for x4500s and x4540s.  Kits could range 
from a couple of SSD devices to crazy specs like 40 2TB drives and 8 SSDs.

And zpool shrink would be a key facilitator driving sales of these.  As Jordan 
says, if you can shrink your pool down, you can create space to fit the SSD 
devices.  However, shrinking the pool also allows you to upgrade the drives 
much more quickly.  

If you have a 46 disk zpool, you can't replace many disks at once, and the 
upgrade is high risk if you're running single parity raid.  Provided the pool 
isn't full however, if you can shrink it down to say 40 drives first, you can 
then upgrade in batches of 6 at once.  The zpool replace is then an operation 
between two fully working disks, and doesn't affect pool integrity at all.
-- 
This message posted from opensolaris.org


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Elizabeth Schwartz
A lot of us have run *with* the ability to shrink because we were
using Veritas. Once you have a feature, processes tend to expand to
use it. Moving to ZFS was a good move for many reasons, but I still
missed being able to do something that used to be so easy.


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Martin
Bob wrote:

> Perhaps the problem is one of educating the customer so that they can
> amend their accounting practices.  Different business groups can
> share the same pool if necessary.

Bob, while I don't mean to pick on you, that statement captures a major 
thinking flaw in IT when it comes to sales.

Yes, Brian should do everything possible to shape the customer's expectations; 
that's his job.

At the same time, let's face it.  If the customer thinks he needs X (whether or 
not he really does) and Brian can't get him to move away from it, Brian is 
sunk.  Here Brian sits with a potential multi-million dollar sale which is 
stuck on a missing feature, and probably other obstacles.  The truth is that 
the other obstacles are irrelevant as long as the customer can't get past 
feature X, valid or not.

So millions of dollars to Sun hang in the balance and these discussions revolve 
around whether or not the customer is planning optimally.  Imagine how much 
rapport Brian will gain when he tells this guy, "You know, if you guys just 
planned better, you wouldn't need feature X."  Brian would probably not get his 
phone calls returned after that.

You can rest assured that when the customer meets with IBM the next day, the 
IBM rep won't let the customer get away from feature X that JFS has.  The 
conversation might go like this.

Customer: You know, we are really looking at Sun and ZFS.

IBM: Of course you are, because that's a wise thing to do.  ZFS has a lot of 
exciting potential.

Customer: Huh?

IBM: ZFS has a solid base and Sun is adding features which will make it quite 
effective for your applications.

Customer: So you like ZFS?

IBM: Absolutely.  At some point it will have the features you need.  You 
mentioned you use feature X to provide the flexibility you have to continue to 
outperform your competition during this recession.  I understand Sun is working 
hard to integrate that feature, even as we speak.

Customer: Maybe we don't need feature X.

IBM: You would know more than I.  When did you last use feature X?

Customer: We used X last quarter when we scrambled to add FOO to our product 
mix so that we could beat our competition to market.

IBM: How would it have been different if feature X was unavailable?

Customer (mind racing): We would have found a way.

IBM: Of course, as innovative as your company is, you would have found a way.  
How much of a delay?

Customer (thinking through the scenarios): I don't know.

IBM: It wouldn't have impacted the rollout, would it?

Customer: I don't know.

IBM: Even if it did delay things, the delay wouldn't blow back on you, right?

Customer (sweating): I don't think so.

Imagine the land mine Brian now has to overcome when he tries to convince the 
customer that they don't need feature X, and even if they do, Sun will have it 
"real soon now."

Does anyone really think that Oracle made their money lecturing customers on 
how Table Partitions are stupid and if the customer would have planned their 
schema better, they wouldn't need them anyway?  Of course not.  People wanted 
partitions (valid or not) and Oracle delivered.

Marty


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Jordan Schwartz
> Preface: yes, shrink will be cool.  But we've been running highly
> available, mission critical datacenters for more than 50 years
> without shrink being widely available.

Agreed, and shrink IS cool; I used it to migrate VxVM volumes from direct
attached storage to slightly smaller SAN LUNs on a Solaris SPARC box.  It
sure is nice to add the new storage to the volume and mirror, as opposed to
copying to a new filesystem.

It will be cool when SSDs are released for my fully loaded x4540s, if I can
migrate enough users off and shrink the pool perhaps I can drop a couple of
SATA disks and then add the SSDs, all on the fly.

Perhaps Steve Martin said it best, "Let's get real small!".

Thanks,

Jordan


On Wed, Aug 5, 2009 at 12:47 PM, Richard Elling wrote:

> Preface: yes, shrink will be cool.  But we've been running highly
> available,
> mission critical datacenters for more than 50 years without shrink being
> widely available.
>
> On Aug 5, 2009, at 9:17 AM, Martin wrote:
>
>> You are the 2nd customer I've ever heard of to use shrink.
>>>
>>
>> This attitude seems to be a common theme in ZFS discussions: "No
>> enterprise uses shrink, only grow."
>>
>> Maybe.  The enterprise I work for requires that every change be reversible
>> and repeatable.  Every change requires a backout plan and that plan better
>> be fast and nondisruptive.
>>
>
> Do it exactly the same way you do it for UFS.  You've been using UFS
> for years without shrink, right?  Surely you have procedures in place :-)
>
>  Who are these enterprise admins who can honestly state that they have no
>> requirement to reverse operations?
>>
>
> Backout plans are not always simple reversals.  A well managed site will
> have procedures for rolling upgrades.
>
>  Who runs a 24x7 storage system and will look you in the eye and state,
>> "The storage decisions (parity count, number of devices in a stripe, etc.)
>> that I make today will be valid until the end of time and will NEVER need
>> nondisruptive adjustment.  Every storage decision I made in 1993 when we
>> first installed RAID is still correct and has needed no changes despite
>> changes in our business models."
>>
>> My experience is that this attitude about enterprise storage borders on
>> insane.
>>
>> Something does not compute.
>>
>
> There is more than one way to skin a cat.
>  -- richard
>
>


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Matthew Ahrens

Brian Kolaci wrote:
So Sun would see increased hardware revenue stream if they would just 
listen to the customer...  Without [pool shrink], they look for alternative 
hardware/software vendors.


Just to be clear, Sun and the ZFS team are listening to customers on this 
issue.  Pool shrink has been one of our top priorities for some time now.


It is unfortunately a very difficult problem, and will take some time to 
solve even with the application of all possible resources (including the 
majority of my time).  We are updating CR 4852783 at least once a month with 
progress reports.


--matt


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Bob Friesenhahn

On Wed, 5 Aug 2009, Richard Elling wrote:


Thanks Cindy,
This is another way to skin the cat. It works for simple volumes, too.
But there are some restrictions, which could impact the operation when a
large change in vdev size is needed. Is this planned to be backported
to Solaris 10?

CR 6844090 has more details.
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6844090


A potential partial solution is to have a pool creation option where 
the tail device labels are set to a point much smaller than the device 
size rather than being written to the end of the device.  As zfs 
requires more space, the tail device labels are moved to add 
sufficient free space that storage blocks can again be efficiently 
allocated.  Since no zfs data is written beyond the tail device 
labels, the storage LUN could be truncated down to the point where the 
tail device labels are still left intact.  This seems like minimal 
impact to ZFS and no user data would need to be migrated.


If the user's usage model tends to periodically fill the whole LUN 
rather than to gradually grow, then this approach won't work.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Brian Kolaci

cindy.swearin...@sun.com wrote:

Brian,

CR 4852783 was updated again this week so you might add yourself or
your customer to continue to be updated.


Will do.  I thought I was on it, but didn't see any updates...



In the meantime, a reminder is that a mirrored ZFS configuration
is flexible in that devices can be detached (as long as the redundancy
is not compromised) or replaced as long as the replacement disk is an 
equivalent size or larger. So, you can move storage around if you need 
to in a mirrored ZFS config and until 4852783 integrates.


Yes, we're trying to push that through now (make a ZFS root).  But the case I 
was more concerned about was the back-end storage for LDom guests and 
zonepaths.  All the SAN storage coming in is already RAID on EMC or Hitachi, 
and they just move the storage around through the SAN group.



Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Richard Elling

On Aug 5, 2009, at 4:06 PM, cindy.swearin...@sun.com wrote:


Brian,

CR 4852783 was updated again this week so you might add yourself or
your customer to continue to be updated.

In the meantime, a reminder is that a mirrored ZFS configuration
is flexible in that devices can be detached (as long as the redundancy
is not compromised) or replaced as long as the replacement disk is  
an equivalent size or larger. So, you can move storage around if you  
need to in a mirrored ZFS config and until 4852783 integrates.


Thanks Cindy,
This is another way to skin the cat. It works for simple volumes, too.
But there are some restrictions, which could impact the operation when a
large change in vdev size is needed. Is this planned to be backported
to Solaris 10?

CR 6844090 has more details.
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6844090
 -- richard



Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Brian Kolaci

Bob Friesenhahn wrote:

On Wed, 5 Aug 2009, Brian Kolaci wrote:


I have a customer that is trying to move from VxVM/VxFS to ZFS, 
however they have this same need.  They want to save money and move to 
ZFS.  They are charged by a separate group for their SAN storage 
needs.  The business group storage needs grow and shrink over time, as 
it has done for years.  They've been on E25K's and other high power 
boxes with VxVM/VxFS as their encapsulated root disk for over a 
decade.  They are/were a big Veritas shop. They rarely ever use UFS, 
especially in production.


ZFS is a storage pool and not strictly a filesystem.  One may create 
filesystems or logical volumes out of this storage pool.  The logical 
volumes can be exported via iSCSI or FC (COMSTAR).  Filesystems may be 
exported via NFS or CIFS.  ZFS filesystems support quotas for both 
maximum consumption, and minimum space reservation.


Perhaps the problem is one of educating the customer so that they can 
amend their accounting practices.  Different business groups can share 
the same pool if necessary.


They understand the technology very well.  Yes, ZFS is very flexible with many 
features, and most are not needed in an enterprise environment where they have 
high-end SAN storage that is shared between Sun, IBM, Linux, VMware ESX and 
Windows.  Local disk is only for the OS image.  There is no need to have an 
M9000 be a file server.  They have NAS for that.  They use SAN across the 
enterprise and it gives them the ability to fail-over to servers in other data 
centers very quickly.

Different business groups cannot share the same pool for many reasons.  Each 
business group pays for their own storage.  There are legal issues as well, and 
in fact cannot have different divisions on the same frame let alone shared 
storage.  But they're in a major virtualization push to the point that nobody 
will be allowed to be on their own physical box.  So the big push is to move to 
VMware, and we're trying to salvage as much as we can to move them to 
containers and LDoms.  That being the case, I've recommended that each virtual 
machine on either a container or LDom should be allocated their own zpool, and 
the zonepath or LDom disk image be on their own zpool.  This way when (not if) 
they need to migrate to another system, they have one pool to move over.  They 
use fixed-size LUNs, so the granularity is a 33GB LUN, which can be migrated.  
This is also the case for their clusters as well as SRDF to their COB machines.
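A sketch of the per-guest-pool layout Brian recommends, for the zone case (pool, zone, and device names are all hypothetical):

```shell
# One fixed-size SAN LUN per guest, one pool per guest.
zpool create zone1pool c3t5d0

# Put the zone's root on its own pool so the whole guest moves as a unit.
zonecfg -z zone1 'create; set zonepath=/zone1pool/zone1'

# Migration later is just: export on the old host, remap the LUN on the
# SAN, then import on the new host.
zpool export zone1pool
```

The point of the layout is that migration never touches more than one pool per guest, regardless of how the SAN group shuffles the underlying storage.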





Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Cindy . Swearingen

Brian,

CR 4852783 was updated again this week so you might add yourself or
your customer to continue to be updated.

In the meantime, a reminder is that a mirrored ZFS configuration
is flexible in that devices can be detached (as long as the redundancy
is not compromised) or replaced as long as the replacement disk is an 
equivalent size or larger. So, you can move storage around if you need 
to in a mirrored ZFS config and until 4852783 integrates.
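Cindy's mirror-based workaround can be sketched like this (pool and device names are hypothetical; the replacement device must be the same size or larger):

```shell
# Attach a new LUN as an extra side of the mirror and let it resilver...
zpool attach datapool c1t0d0 c4t0d0
zpool status datapool      # wait until the resilver completes

# ...then detach the old device, leaving the data on the new storage.
zpool detach datapool c1t0d0
```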


cs

On 08/05/09 15:58, Brian Kolaci wrote:
I'm chiming in late, but have a mission critical need of this as well 
and posted as a non-member before.  My customer was wondering when this 
would make it into Solaris 10.  Their complete adoption depends on it.


I have a customer that is trying to move from VxVM/VxFS to ZFS, however 
they have this same need.  They want to save money and move to ZFS.  
They are charged by a separate group for their SAN storage needs.  The 
business group storage needs grow and shrink over time, as it has done 
for years.  They've been on E25K's and other high power boxes with 
VxVM/VxFS as their encapsulated root disk for over a decade.  They 
are/were a big Veritas shop.  They rarely ever use UFS, especially in 
production.


They absolutely require the shrink functionality to completely move off 
VxVM/VxFS to ZFS, and we're talking $$millions.  I think your statements 
below are from a technology standpoint, not a business standpoint.  You 
say it's poor planning, which is way off the mark.  Business needs change 
daily.  It takes several weeks to provision SAN with all the approvals, 
etc., and it takes massive planning.  That goes for increasing as well 
as decreasing their storage needs.


Richard Elling wrote:


On Aug 5, 2009, at 1:06 PM, Martin wrote:


richard wrote:


Preface: yes, shrink will be cool.  But we've been
running highly
available,
mission critical datacenters for more than 50 years
without shrink being
widely available.



I would debate that.  I remember batch windows and downtime delaying 
one's career movement.  Today we are 24x7 where an outage can kill an 
entire business



Agree.


Do it exactly the same way you do it for UFS.  You've
been using UFS
for years without shrink, right?  Surely you have
procedures in
place :-)



While I haven't taken a formal survey, everywhere I look I see JFS on 
AIX and VxFS on Solaris.  I haven't been in a production UFS shop 
this decade.



Then why are you talking on a Solaris forum?  All versions of
Solaris prior to Solaris 10 10/08 only support UFS for boot.


Backout plans are not always simple reversals.  A
well managed site will
have procedures for rolling upgrades.



I agree with everything you wrote.  Today other technologies allow 
live changes to the pool, so companies use those technologies instead 
of ZFS.



... and can continue to do so. If you are looking to replace a
for-fee product with for-free, then you need to consider all
ramifications. For example, a shrink causes previously written
data to be re-written, thus exposing the system to additional
failure modes. OTOH, a model of place once and never disrupt
can provide a more reliable service. You will see the latter
"pattern" repeated often for high assurance systems.




There is more than one way to skin a cat.



Which entirely misses the point.



Many cases where people needed to shrink were due to the
inability to plan for future growth. This is compounded by the
rather simplistic interface between a logical volume and traditional
file system. ZFS allows you to dynamically grow the pool, so you
can implement a process of only adding storage as needs dictate.

Bottom line: shrink will be cool, but it is not the perfect solution for
managing changing data needs in a mission critical environment.
 -- richard



Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Brian Kolaci

Richard Elling wrote:

On Aug 5, 2009, at 2:58 PM, Brian Kolaci wrote:

I'm chiming in late, but have a mission critical need of this as well 
and posted as a non-member before.  My customer was wondering when 
this would make it into Solaris 10.  Their complete adoption depends 
on it.


I have a customer that is trying to move from VxVM/VxFS to ZFS, 
however they have this same need.  They want to save money and move to 
ZFS.  They are charged by a separate group for their SAN storage 
needs.  The business group storage needs grow and shrink over time, as 
it has done for years.  They've been on E25K's and other high power 
boxes with VxVM/VxFS as their encapsulated root disk for over a 
decade.  They are/were a big Veritas shop.  They rarely ever use UFS, 
especially in production.


They absolutely require the shrink functionality to completely move 
off VxVM/VxFS to ZFS, and we're talking $$millions.  I think your 
statements below are from a technology standpoint, not a business 
standpoint.


If you look at it from Sun's business perspective, ZFS is $$ free, so 
Sun gains no $$ millions by replacing VxFS. Indeed, if the customer 
purchases VxFS from Sun, it makes little sense for Sun to eliminate a 
revenue source. OTOH, I'm sure if they are willing to give Sun $$ 
millions, it can help raise the priority of CR 4852783.
http://bugs.opensolaris.org/view_bug.do?bug_id=4852783


They're probably on the list already, but I'll check to make sure.
What I meant by the $$ millions is that currently all Sun hardware purchases 
are on hold.  Deploying on Solaris means not just the hardware but also the 
support and the required certified third-party software such as EMC PowerPath, 
Veritas VxVM & VxFS, BMC monitoring, and more.  Yes, I'm still working on 
MPxIO to replace PowerPath, but there are issues there too.  They will not use 
UFS.  Right now ZFS is OK for limited deployment and no production use.  Their 
position on ZFS is that it's good for dealing with JBOD, but not yet 
"enterprise ready" for SAN use.  Shrinking a volume is just one of a list of 
requirements to move toward "enterprise ready"; however, many issues have been 
fixed.

So Sun would see increased hardware revenue stream if they would just listen to 
the customer...  Without it, they look for alternative hardware/software 
vendors.  While this is stalled, there have been several hundred systems that 
have been flipped to competitors (and this is still going on).  So lack of this 
feature will cause $$ millions to be lost...




You say it's poor planning, which is way off the mark.  Business needs 
change daily.  It takes several weeks to provision SAN with all the 
approvals, etc., and it takes massive planning.  That goes for 
increasing as well as decreasing their storage needs.


I think you've identified the real business problem.  A shrink feature in 
ZFS will do nothing to fix this.  A business whose needs change faster 
than its ability to react has (as we say in business school) an 
unsustainable business model.

 -- richard


Yes, hence a federal bail-out.  However, a shrink feature will help them 
spend more with Sun.






Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Bob Friesenhahn

On Wed, 5 Aug 2009, Brian Kolaci wrote:


I have a customer that is trying to move from VxVM/VxFS to ZFS, however they 
have this same need.  They want to save money and move to ZFS.  They are 
charged by a separate group for their SAN storage needs.  The business group 
storage needs grow and shrink over time, as it has done for years.  They've 
been on E25K's and other high power boxes with VxVM/VxFS as their 
encapsulated root disk for over a decade.  They are/were a big Veritas shop. 
They rarely ever use UFS, especially in production.


ZFS is a storage pool and not strictly a filesystem.  One may create 
filesystems or logical volumes out of this storage pool.  The logical 
volumes can be exported via iSCSI or FC (COMSTAR).  Filesystems may be 
exported via NFS or CIFS.  ZFS filesystems support quotas for both 
maximum consumption, and minimum space reservation.


Perhaps the problem is one of educating the customer so that they can 
amend their accounting practices.  Different business groups can 
share the same pool if necessary.
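The pool/filesystem split Bob describes could look like this in practice (pool, group, and size names are made up for illustration):

```shell
# One shared pool, carved into per-group filesystems with accounting limits.
zpool create bizpool mirror c1t0d0 c2t0d0

zfs create bizpool/groupA
zfs set quota=500g bizpool/groupA         # cap on maximum consumption
zfs set reservation=200g bizpool/groupA   # guaranteed minimum space

# A logical volume (zvol) for block consumers, exportable via iSCSI or FC.
zfs create -V 100g bizpool/groupB_vol
```

Quota plus reservation gives each group a billable ceiling and a guaranteed floor out of the same physical pool, which is the accounting model Bob is suggesting.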


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Richard Elling

On Aug 5, 2009, at 2:58 PM, Brian Kolaci wrote:

I'm chiming in late, but have a mission critical need of this as  
well and posted as a non-member before.  My customer was wondering  
when this would make it into Solaris 10.  Their complete adoption  
depends on it.


I have a customer that is trying to move from VxVM/VxFS to ZFS,  
however they have this same need.  They want to save money and move  
to ZFS.  They are charged by a separate group for their SAN storage  
needs.  The business group storage needs grow and shrink over time,  
as it has done for years.  They've been on E25K's and other high  
power boxes with VxVM/VxFS as their encapsulated root disk for over  
a decade.  They are/were a big Veritas shop.  They rarely ever use  
UFS, especially in production.


They absolutely require the shrink functionality to completely move  
off VxVM/VxFS to ZFS, and we're talking $$millions.  I think your  
statements below are from a technology standpoint, not a business  
standpoint.


If you look at it from Sun's business perspective, ZFS is $$ free, so 
Sun gains no $$ millions by replacing VxFS. Indeed, if the customer 
purchases VxFS from Sun, it makes little sense for Sun to eliminate a 
revenue source. OTOH, I'm sure if they are willing to give Sun $$ 
millions, it can help raise the priority of CR 4852783.
http://bugs.opensolaris.org/view_bug.do?bug_id=4852783

You say it's poor planning, which is way off the mark.  Business 
needs change daily.  It takes several weeks to provision SAN with 
all the approvals, etc., and it takes massive planning.  That goes 
for increasing as well as decreasing their storage needs.


I think you've identified the real business problem.  A shrink feature 
in ZFS will do nothing to fix this.  A business whose needs change 
faster than its ability to react has (as we say in business school) an 
unsustainable business model.

 -- richard



Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Brian Kolaci

I'm chiming in late, but have a mission critical need of this as well and 
posted as a non-member before.  My customer was wondering when this would make 
it into Solaris 10.  Their complete adoption depends on it.

I have a customer that is trying to move from VxVM/VxFS to ZFS, however they 
have this same need.  They want to save money and move to ZFS.  They are 
charged by a separate group for their SAN storage needs.  The business group 
storage needs grow and shrink over time, as it has done for years.  They've 
been on E25K's and other high power boxes with VxVM/VxFS as their encapsulated 
root disk for over a decade.  They are/were a big Veritas shop.  They rarely 
ever use UFS, especially in production.

They absolutely require the shrink functionality to completely move off 
VxVM/VxFS to ZFS, and we're talking $$millions.  I think your statements below 
are from a technology standpoint, not a business standpoint.  You say it's poor 
planning, which is way off the mark.  Business needs change daily.  It takes 
several weeks to provision SAN with all the approvals, etc., and it takes 
massive planning.  That goes for increasing as well as decreasing their storage 
needs.

Richard Elling wrote:

On Aug 5, 2009, at 1:06 PM, Martin wrote:


richard wrote:

Preface: yes, shrink will be cool.  But we've been
running highly
available,
mission critical datacenters for more than 50 years
without shrink being
widely available.


I would debate that.  I remember batch windows and downtime delaying 
one's career movement.  Today we are 24x7 where an outage can kill an 
entire business


Agree.


Do it exactly the same way you do it for UFS.  You've
been using UFS
for years without shrink, right?  Surely you have
procedures in
place :-)


While I haven't taken a formal survey, everywhere I look I see JFS on 
AIX and VxFS on Solaris.  I haven't been in a production UFS shop this 
decade.


Then why are you talking on a Solaris forum?  All versions of
Solaris prior to Solaris 10 10/08 only support UFS for boot.


Backout plans are not always simple reversals.  A
well managed site will
have procedures for rolling upgrades.


I agree with everything you wrote.  Today other technologies allow 
live changes to the pool, so companies use those technologies instead 
of ZFS.


... and can continue to do so. If you are looking to replace a
for-fee product with for-free, then you need to consider all
ramifications. For example, a shrink causes previously written
data to be re-written, thus exposing the system to additional
failure modes. OTOH, a model of place once and never disrupt
can provide a more reliable service. You will see the latter
"pattern" repeated often for high assurance systems.




There is more than one way to skin a cat.


Which entirely misses the point.


Many cases where people needed to shrink were due to the
inability to plan for future growth. This is compounded by the
rather simplistic interface between a logical volume and traditional
file system. ZFS allows you to dynamically grow the pool, so you
can implement a process of only adding storage as needs dictate.

Bottom line: shrink will be cool, but it is not the perfect solution for
managing changing data needs in a mission critical environment.
 -- richard



Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Richard Elling

On Aug 5, 2009, at 1:06 PM, Martin wrote:


richard wrote:

Preface: yes, shrink will be cool.  But we've been
running highly
available,
mission critical datacenters for more than 50 years
without shrink being
widely available.


I would debate that.  I remember batch windows and downtime delaying  
one's career movement.  Today we are 24x7 where an outage can kill  
an entire business


Agree.


Do it exactly the same way you do it for UFS.  You've
been using UFS
for years without shrink, right?  Surely you have
procedures in
place :-)


While I haven't taken a formal survey, everywhere I look I see JFS  
on AIX and VxFS on Solaris.  I haven't been in a production UFS shop  
this decade.


Then why are you talking on a Solaris forum?  All versions of
Solaris prior to Solaris 10 10/08 only support UFS for boot.


Backout plans are not always simple reversals.  A
well managed site will
have procedures for rolling upgrades.


I agree with everything you wrote.  Today other technologies allow  
live changes to the pool, so companies use those technologies  
instead of ZFS.


... and can continue to do so. If you are looking to replace a
for-fee product with for-free, then you need to consider all
ramifications. For example, a shrink causes previously written
data to be re-written, thus exposing the system to additional
failure modes. OTOH, a model of place once and never disrupt
can provide a more reliable service. You will see the latter
"pattern" repeated often for high assurance systems.




There is more than one way to skin a cat.


Which entirely misses the point.


Many cases where people needed to shrink were due to the
inability to plan for future growth. This is compounded by the
rather simplistic interface between a logical volume and traditional
file system. ZFS allows you to dynamically grow the pool, so you
can implement a process of only adding storage as needs dictate.

Bottom line: shrink will be cool, but it is not the perfect solution for
managing changing data needs in a mission critical environment.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Jacob Ritorto
Interesting, this is the same procedure I invented (with the exception 
that the zfs send came from the net) and used to hack OpenSolaris 
2009.06 onto my home SunBlade 2000, since it couldn't do AI due to a low 
OBP rev.


I'll have to rework it this way, then, which will unfortunately cause 
downtime for a multitude of dependent services, affect the entire 
universe here and make my department look inept.  As much as it stings, 
I accept that this is the price I pay for adopting a new technology. 
Acknowledge and move on.  Quite simply, if this happens too often, we 
know we've made the wrong decision on vendor/platform.


Anyway, looking forward to shrink.  Thanks for the tips.


Kyle McDonald wrote:

Kyle McDonald wrote:

Jacob Ritorto wrote:
Is this implemented in OpenSolaris 2008.11?  I'm moving my 
filer's rpool to an ssd mirror to free up bigdisk slots currently 
used by the os and need to shrink rpool from 40GB to 15GB. (only 
using 2.7GB for the install).


  
Your best bet would be to install the new ssd drives, create a new 
pool, snapshot the existing pool and use ZFS send/recv to migrate the 
data to the new pool. There are docs around about how to install grub and 
the boot blocks on the new devices also. After that, remove (export!, 
don't destroy yet!)

the old drives, and reboot to see how it works.

If you have no problems, (and I don't think there's anything technical 
that would keep this from working,) then you're good. Otherwise put 
the old pool back in. :)


This thread discusses basically this same thing - he had a problem along 
the way, but Cindy answered it.



Hi Nawir,

I haven't tested these steps myself, but the error message
means that you need to set this property:

# zpool set bootfs=rpool/ROOT/BE-name rpool

Cindy

On 08/05/09 03:14, nawir wrote:
Hi,

I have sol10u7 OS with 73GB HD in c1t0d0.
I want to clone it to 36GB HD

The steps below are what came to mind
STEPS TAKEN
# zpool create -f altrpool c1t1d0s0
# zpool set listsnapshots=on rpool
# SNAPNAME=`date +%Y%m%d`
# zfs snapshot -r rpool/r...@$snapname
# zfs list -t snapshot
# zfs send -R rp...@$snapname | zfs recv -vFd altrpool
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk 
/dev/rdsk/c1t1d0s0

for x86 do
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
# zpool export altrpool
# init 5
remove source disk (c1t0d0s0) and move target disk (c1t1d0s0) to slot0
-insert solaris10 dvd
ok boot cdrom -s
# zpool import altrpool rpool
# init 0
ok boot disk1

ERROR:
Rebooting with command: boot disk1
Boot device: /p...@1c,60/s...@2/d...@1,0  File and args:
no pool_props
Evaluating:
The file just loaded does not appear to be executable.
ok

QUESTIONS:
1. what's wrong with my steps
2. any better idea

thanks 

-Kyle




 -Kyle


thx
jake
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Martin
richard wrote:
> Preface: yes, shrink will be cool.  But we've been
> running highly  
> available,
> mission critical datacenters for more than 50 years
> without shrink being
> widely available.

I would debate that.  I remember batch windows and downtime delaying one's 
career movement.  Today we are 24x7 where an outage can kill an entire business.

> Do it exactly the same way you do it for UFS.  You've
> been using UFS
> for years without shrink, right?  Surely you have
> procedures in  
> place :-)

While I haven't taken a formal survey, everywhere I look I see JFS on AIX and 
VxFS on Solaris.  I haven't been in a production UFS shop this decade.

> Backout plans are not always simple reversals.  A
> well managed site will
> have procedures for rolling upgrades.

I agree with everything you wrote.  Today other technologies allow live changes 
to the pool, so companies use those technologies instead of ZFS.

> There is more than one way to skin a cat.

Which entirely misses the point.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Kyle McDonald

Kyle McDonald wrote:

Jacob Ritorto wrote:
Is this implemented in OpenSolaris 2008.11?  I'm moving my 
filer's rpool to an ssd mirror to free up bigdisk slots currently 
used by the os and need to shrink rpool from 40GB to 15GB. (only 
using 2.7GB for the install).


  
Your best bet would be to install the new ssd drives, create a new 
pool, snapshot the existing pool and use ZFS send/recv to migrate the 
data to the new pool. There are docs around about how to install grub and 
the boot blocks on the new devices also. After that, remove (export!, 
don't destroy yet!)

the old drives, and reboot to see how it works.

If you have no problems, (and I don't think there's anything technical 
that would keep this from working,) then you're good. Otherwise put 
the old pool back in. :)


This thread discusses basically this same thing - he had a problem along 
the way, but Cindy answered it.



Hi Nawir,

I haven't tested these steps myself, but the error message
means that you need to set this property:

# zpool set bootfs=rpool/ROOT/BE-name rpool

Cindy

On 08/05/09 03:14, nawir wrote:
Hi,

I have sol10u7 OS with 73GB HD in c1t0d0.
I want to clone it to 36GB HD

The steps below are what came to mind
STEPS TAKEN
# zpool create -f altrpool c1t1d0s0
# zpool set listsnapshots=on rpool
# SNAPNAME=`date +%Y%m%d`
# zfs snapshot -r rpool/r...@$snapname
# zfs list -t snapshot
# zfs send -R rp...@$snapname | zfs recv -vFd altrpool
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk 
/dev/rdsk/c1t1d0s0

for x86 do
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
# zpool export altrpool
# init 5
remove source disk (c1t0d0s0) and move target disk (c1t1d0s0) to slot0
-insert solaris10 dvd
ok boot cdrom -s
# zpool import altrpool rpool
# init 0
ok boot disk1

ERROR:
Rebooting with command: boot disk1
Boot device: /p...@1c,60/s...@2/d...@1,0  File and args:
no pool_props
Evaluating:
The file just loaded does not appear to be executable.
ok

QUESTIONS:
1. what's wrong with my steps
2. any better idea

thanks 

-Kyle




 -Kyle


thx
jake
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Kyle McDonald

Jacob Ritorto wrote:

Is this implemented in OpenSolaris 2008.11?  I'm moving my filer's rpool 
to an ssd mirror to free up bigdisk slots currently used by the os and need to 
shrink rpool from 40GB to 15GB. (only using 2.7GB for the install).

  
Your best bet would be to install the new ssd drives, create a new pool, 
snapshot the existing pool and use ZFS send/recv to migrate the data to 
the new pool. There are docs around about how to install grub and the boot 
blocks on the new devices also. After that, remove (export!, don't 
destroy yet!)

the old drives, and reboot to see how it works.

If you have no problems, (and I don't think there's anything technical 
that would keep this from working,) then you're good. Otherwise put the 
old pool back in. :)
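For a bootable root pool, a minimal sketch of that migration follows (device, pool, and BE names are illustrative assumptions, not from the thread; the bootfs step is the one most often missed, as a later reply in this archive shows):

```sh
zpool create altrpool c1t1d0s0
zfs snapshot -r rpool@move
zfs send -R rpool@move | zfs recv -vFd altrpool

# SPARC boot block; on x86 use installgrub instead
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0

# Tell the new pool which dataset to boot from
zpool set bootfs=altrpool/ROOT/myBE altrpool
```

This is a sketch of one way to do it, not a tested procedure; verify against the ZFS administration docs for your release before relying on it.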



 -Kyle


thx
jake
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Kyle McDonald

Martin wrote:

C,

I appreciate the feedback and like you, do not wish to start a side rant, but 
rather understand this, because it is completely counter to my experience.

Allow me to respond based on my anecdotal experience.

  

What's wrong with make a new pool.. safely copy the data. verify data
and then delete the old pool..



You missed a few steps.  The actual process would be more like the following.
1. Write up the steps and get approval from all affected parties
-- In truth, the change would not make it past step 1.
  

Maybe, but maybe not see below...

2. Make a new pool
3. Quiesce the pool and cause a TOTAL outage during steps 4 through 9
  
That's not entirely true. You can use ZFS send/recv to do the major 
first pass of #4 (and #5 against the snapshot) live, before the total 
outage.
Then after you quiesce everything, you could use an incremental 
send/recv to copy the changes since then quickly, reducing down time.


I'd probably run a second full verify anyway, but in theory, I believe 
the ZFS checksums are used in the send/recv process to ensure that there 
isn't any corruption, so after enough positive experience, I might start 
to skip the second verify.


This should greatly reduce the length of the down time.
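As a hedged sketch (pool and snapshot names are illustrative, not from the thread), the two-pass approach looks like:

```sh
# Pass 1: full replication while the old pool is still serving traffic
zfs snapshot -r oldpool@migrate1
zfs send -R oldpool@migrate1 | zfs recv -Fd newpool

# Quiesce applications, then send only the deltas accumulated since pass 1
zfs snapshot -r oldpool@migrate2
zfs send -R -I oldpool@migrate1 oldpool@migrate2 | zfs recv -d newpool
```

The incremental pass only moves blocks changed since the first snapshot, so the window where the pool must be quiesced shrinks from hours to however long the deltas take.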


Everyone.

  

and then one day [months or years later] wants to shrink it...



Business needs change.  Technology changes.  The project was a pilot and 
canceled.  The extended pool didn't meet verification requirements, e.g., 
performance, and the change must be backed out.
In an Enterprise, a change for performance should have been tested on 
another identical non-production system before being implemented on the 
production one.


I'd have to concur there's more useful things out there. OTOH... 



That's probably true and I have not seen the priority list.  I was merely amazed at the 
number of "Enterprises don't need this functionality" posts.

  
All that said, as a personal home user, this is a feature I'm hoping for 
all the time. :)


 -Kyle


Thanks again,
Marty
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Richard Elling
Preface: yes, shrink will be cool.  But we've been running highly  
available,

mission critical datacenters for more than 50 years without shrink being
widely available.

On Aug 5, 2009, at 9:17 AM, Martin wrote:

You are the 2nd customer I've ever heard of to use shrink.


This attitude seems to be a common theme in ZFS discussions: "No  
enterprise uses shrink, only grow."


Maybe.  The enterprise I work for requires that every change be  
reversible and repeatable.  Every change requires a backout plan and  
that plan better be fast and nondisruptive.


Do it exactly the same way you do it for UFS.  You've been using UFS
for years without shrink, right?  Surely you have procedures in  
place :-)


Who are these enterprise admins who can honestly state that they  
have no requirement to reverse operations?


Backout plans are not always simple reversals.  A well managed site will
have procedures for rolling upgrades.

Who runs a 24x7 storage system and will look you in the eye and  
state, "The storage decisions (parity count, number of devices in a  
stripe, etc.) that I make today will be valid until the end of time  
and will NEVER need nondisruptive adjustment.  Every storage  
decision I made in 1993 when we first installed RAID is still  
correct and has needed no changes despite changes in our business  
models."


My experience is that this attitude about enterprise storage borders  
on insane.


Something does not compute.


There is more than one way to skin a cat.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Jacob Ritorto
+1

Thanks for putting this in a real world perspective, Martin.  I'm faced with 
this exact circumstance right now (see my post to the list from earlier today). 
 Our ZFS filers are highly utilised, highly trusted components at the core of 
our enterprise and serve out OS images, mail storage, customer facing NFS 
mounts, CIFS mounts, etc. for nearly all of our critical services.  Downtime 
is, essentially, a catastrophe and won't get approval without weeks of 
painstaking social engineering.

jake
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Martin
C,

I appreciate the feedback and like you, do not wish to start a side rant, but 
rather understand this, because it is completely counter to my experience.

Allow me to respond based on my anecdotal experience.

> What's wrong with make a new pool.. safely copy the data. verify data
> and then delete the old pool..

You missed a few steps.  The actual process would be more like the following.
1. Write up the steps and get approval from all affected parties
-- In truth, the change would not make it past step 1.
2. Make a new pool
3. Quiesce the pool and cause a TOTAL outage during steps 4 through 9
4. Safely make a copy of the data
5. Verify the data
6. Export old pool
7. Import new pool
8. Restart server
9. Confirm all services are functioning correctly
10. Announce the outage has finished
11. Delete the old pool

Note step 3 and let me know which 24x7 operation would tolerate an extended 
outage (because it would last for hours or days) on a critical production 
server.

One solution is not to do this on critical enterprise storage, and that's the 
point I am trying to make.
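Sketching steps 4 through 7 above as commands (pool names are illustrative; re-importing the new pool under the old name keeps dataset paths stable — this is an assumption about one way to do it, not a tested procedure):

```sh
# 4-5. Copy the data; send/recv validates checksums in transit
zfs snapshot -r oldpool@cutover
zfs send -R oldpool@cutover | zfs recv -Fd newpool

# 6-7. Swap the pools
zpool export oldpool
zpool export newpool
zpool import newpool oldpool   # import the new pool under the old name
```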

> Who in the enterprise just allocates a
> massive pool

Everyone.

> and then one day [months or years later] wants to shrink it...

Business needs change.  Technology changes.  The project was a pilot and 
canceled.  The extended pool didn't meet verification requirements, e.g., 
performance, and the change must be backed out.  Business growth estimates are 
grossly too high and the pool needs migration to a cheaper frame in order to 
keep costs in line with revenue.  The pool was made of 40 of the largest disks 
at the time and now, 4 years later, only 10 disks are needed to accomplish the 
same thing while the 40 original disks are at EOL and no longer supported.

The list goes on and on.

> I'd have to concur there's more useful things out there. OTOH... 

That's probably true and I have not seen the priority list.  I was merely 
amazed at the number of "Enterprises don't need this functionality" posts.

Thanks again,
Marty
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread C. Bergström

Martin wrote:

You are the 2nd customer I've ever heard of to use shrink.



This attitude seems to be a common theme in ZFS discussions: "No enterprise uses 
shrink, only grow."

Maybe.  The enterprise I work for requires that every change be reversible and 
repeatable.  Every change requires a backout plan and that plan better be fast 
and nondisruptive.

Who are these enterprise admins who can honestly state that they have no requirement to 
reverse operations?  Who runs a 24x7 storage system and will look you in the eye and 
state, "The storage decisions (parity count, number of devices in a stripe, etc.) 
that I make today will be valid until the end of time and will NEVER need nondisruptive 
adjustment.  Every storage decision I made in 1993 when we first installed RAID is still 
correct and has needed no changes despite changes in our business models."

My experience is that this attitude about enterprise storage borders on insane.
  
What's wrong with making a new pool, safely copying the data, verifying the 
data, and then deleting the old pool?  Who in the enterprise just allocates a 
massive pool and then one day wants to shrink it?  For a home NAS I 
could see this being useful. I'm not arguing there isn't a use case, 
but in terms of where my vote for the time/energy of the developers goes, 
I'd have to concur there are more useful things out there.  OTOH, 
once/if the block reallocation code is dropped (webrev?), the shrinking 
of a pool should be a lot easier.  I don't mean to go off on a side 
rant, but afaik this code is written and should have been available.  If 
we all pressured Green-bytes with an open letter it might 
help.  The legal issues around this are what's holding it all up.  
@Sun people can't comment, I'm sure, but this is what I speculate.


./C

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Martin
> You are the 2nd customer I've ever heard of to use shrink.

This attitude seems to be a common theme in ZFS discussions: "No enterprise 
uses shrink, only grow."

Maybe.  The enterprise I work for requires that every change be reversible and 
repeatable.  Every change requires a backout plan and that plan better be fast 
and nondisruptive.

Who are these enterprise admins who can honestly state that they have no 
requirement to reverse operations?  Who runs a 24x7 storage system and will 
look you in the eye and state, "The storage decisions (parity count, number of 
devices in a stripe, etc.) that I make today will be valid until the end of 
time and will NEVER need nondisruptive adjustment.  Every storage decision I 
made in 1993 when we first installed RAID is still correct and has needed no 
changes despite changes in our business models."

My experience is that this attitude about enterprise storage borders on insane.

Something does not compute.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Jacob Ritorto
Is this implemented in OpenSolaris 2008.11?  I'm moving move my filer's rpool 
to an ssd mirror to free up bigdisk slots currently used by the os and need to 
shrink rpool from 40GB to 15GB. (only using 2.7GB for the install).

thx
jake
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] shrinking a zpool - roadmap

2009-07-30 Thread Kyle McDonald

Ralf Gans wrote:


Jumpstart puts a loopback mount into the vfstab,
and the next boot fails.

Solaris will do the mountall before ZFS starts,
so the filesystem service fails and you don't even have
an sshd to log in over the network.
  
This is why I don't use the mountpoint settings in ZFS. I set them all 
to 'legacy', and put them in the /etc/vfstab myself.
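For example (dataset and mount point are illustrative):

```sh
# Hand mount control to /etc/vfstab instead of ZFS
zfs set mountpoint=legacy tank/install/media
```

with a matching vfstab entry:

```
#device to mount    device to fsck  mount point      FS type  fsck pass  mount at boot  options
tank/install/media  -               /export/install  zfs      -          yes            -
```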


I keep many .ISO files on a ZFS filesystem, and I LOFI mount them onto 
subdirectories of the same ZFS tree, and then (since they are for 
Jumpstart) loop back mount parts of each of the ISOs into /tftpboot


When you've got to manage all this other stuff in /etc/vfstab anyway, 
it's easier to manage ZFS there too. I don't see it as a hardship, and I 
don't see the value of doing it in ZFS to be honest (unless every 
filesystem you have is in ZFS, maybe.)


The same goes for sharing this stuff through NFS. Since the LOFI mounts 
are separate filesystems, I have to share them with share (or sharemgr), 
and it's easier to share the ZFS directories through those commands at 
the same time.


I must be missing something, but I'm not sure I get the rationale behind 
duplicating all this admin stuff inside ZFS.


 -Kyle

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] shrinking a zpool - roadmap

2009-07-30 Thread Ralf Gans
Hello there,

I'm working for a bigger customer in Germany.
The customer is some thousand TB big.

The information that the zpool shrink feature will not be implemented soon
is no problem, we just keep using Veritas Storage Foundation.

Shrinking a pool is not the only problem with ZFS;
try setting up a Jumpstart server with Solaris 10u7
with the media copy on a separate zfs filesystem.

Jumpstart puts a loopback mount into the vfstab,
and the next boot fails.

Solaris will do the mountall before ZFS starts,
so the filesystem service fails and you don't even have
an sshd to log in over the network.

Viele Grüße,

rapega
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] shrinking a zpool - roadmap

2008-08-21 Thread Chris Siebenmann
| The errant command which accidentally adds a vdev could just as easily
| be a command which scrambles up or erases all of the data.

 The difference between a mistaken command that accidentally adds a vdev
and the other ways to lose your data with ZFS is that the 'add a vdev'
accident is only one omitted word different from a command that you use
routinely. This is a very close distance, especially for fallible humans.

('zpool add ... mirror A B' and 'zpool add ... spare A'; omit either
'mirror' or 'spare' by accident and boom.)
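One guard against exactly this slip is the dry-run flag: zpool add -n prints the configuration the pool would end up with, without committing anything (pool and device names below are illustrative):

```sh
# Preview the resulting layout before pressing return
zpool add -n tank mirror c3t4d0 c3t5d0
zpool add -n tank spare c3t6d0
```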

- cks
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] shrinking a zpool - roadmap

2008-08-21 Thread Mertol Ozyoney
Can ADM ease the pain by migrating data from one pool to the other? I
know it's not what most of you want, but... 


Mertol Ozyoney 
Storage Practice - Sales Manager

Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +90212335
Email [EMAIL PROTECTED]



-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Will Murnane
Sent: Thursday, August 21, 2008 1:57 AM
To: Bob Friesenhahn
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] shrinking a zpool - roadmap

On Wed, Aug 20, 2008 at 18:40, Bob Friesenhahn
<[EMAIL PROTECTED]> wrote:
> The errant command which accidentally adds a vdev could just as easily
> be a command which scrambles up or erases all of the data.
True enough---but if there's a way to undo accidentally adding a vdev,
there's one source of disastrously bad human error eliminated.  If the
vdev is removable, then typing "zpool evacuate c3t4d5" to fix the
problem instead of getting backups up to date, destroying and
recreating the pool, then restoring from backups saves quite a bit of
the cost associated with human error in this case.

Think of it as the analogue of "zpool import -D": if you screw up, ZFS
has a provision to at least try to help.  The recent discussion on
accepting partial 'zfs recv' streams is a similar measure.  No system
is perfectly resilient to human error, but any simple ways in which
the resilience (especially of such a large unit as a pool!) can be
improved should be considered.

Will
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] shrinking a zpool - roadmap

2008-08-20 Thread Kyle McDonald
Zlotnick Fred wrote:
> On Aug 20, 2008, at 6:39 PM, Kyle McDonald wrote:
>
>>
>> My suggestion still remains though. Log your enterprises wish for this
>> feature through as many channels as you have into Sun. This list, Sales,
>> Support, every way you can think of. Get it documented, so that when
>> they go to set priorities on RFE's there'll be more data on this one.
>
> Knock yourself out, but it's really unnecessary.  As has been amply
> documented, on this thread and others, this is already a very high
> priority for us.  It just happens to be rather difficult to do it right.
> We're working on it.  We've heard the message (years ago, actually, just
> about as soon as we shipped ZFS in S10 6/06.)  Your further encouragement
> is appreciated, but it's unlikely to speed up what already is deemed
> a high priority.
>
Cool. I love it when I'm wrong this way. :)

I don't know where I got it but I really thought it wasn't seen as a big 
deal for the larger storage customers.
Glad to see I'm wrong, because it's a real big deal for us little guys. :)

  -Kyle

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] shrinking a zpool - roadmap

2008-08-20 Thread Zlotnick Fred
On Aug 20, 2008, at 6:39 PM, Kyle McDonald wrote:

> John wrote:
>> Our "enterprise" is about 300TB.. maybe a bit more...
>>
>> You are correct that most of the time we grow and not shrink...  
>> however, we are fairly dynamic and occasionally do shrink. DBA's  
>> have been known to be off on their space requirements/requests.
>>
>>
> For the record I agree with you and I'm waiting for this feature  
> also. I
> was only citing my recollection of the explanation given in the past.
>
> To add more from my memory, I think the 'Enterprise grows not shrinks'
> idea is coming from the idea that in ZFS you should be creating fewer
> data pools from a few different specific sized LUNs, and using ZFS to
> allocate filesystems and zVOL's from the pool, instead of customizing
> LUN sizes to create more pools each for different purposes. If true,  
> (if
> you can make all your LUNs one size, and make a few [preferably one I
> think] data zPool per server host) then the need to reduce pool  
> size is
> diminished.
>
> That's not realistic in the home/hobby/developer market, and I'm not
> convinced that's realistic in the enterprise either.
>> There is also the human error factor.  If someone accidentally  
>> grows a zpool there is no easy way to recover that space without  
>> down time.  Some of my LUNs are in the 1TB range and if that gets  
>> added to the wrong zpool that space is basically stuck there until  
>> i can get a maintenance window. And then I'm not that's even  
>> possible since my windows are only 3 hours... for example what if I  
>> add a LUN to 20TB zpool.  What would I do to remove the LUN?  I  
>> think I would have to create a new 20TB pool and move the data from  
>> the original to the new zpool... so that would assume I have a free  
>> 20TB and the down time
>>
>>
> I agree here also, even with a single zpool per server. Consider a
> policy where when the pool grows you always add a RAIDz2 of 10 200GB
> LUNs. So your single data pool is currently 3 of these RAIDz2 vdevs,  
> and
> an admin goes to add 10 more, but forgets the 'raidz2', so you end up
> with 3 RAIDz2, and 10 single LUN non redundant vdevs. How do you fix  
> that?
>
> My suggestion still remains though. Log your enterprises wish for this
> feature through as many channels as you have into Sun. This list,  
> Sales,
> Support, every way you can think of. Get it documented, so that when
> they go to set priorities on RFE's there'll be more data on this one.

Knock yourself out, but it's really unnecessary.  As has been amply
documented, on this thread and others, this is already a very high
priority for us.  It just happens to be rather difficult to do it right.
We're working on it.  We've heard the message (years ago, actually, just
about as soon as we shipped ZFS in S10 6/06.)  Your further  
encouragement
is appreciated, but it's unlikely to speed up what already is deemed
a high priority.

My 2 cents,
Fred
>
>
>  -Kyle
>>
>> This message posted from opensolaris.org
>> ___
>> zfs-discuss mailing list
>> zfs-discuss@opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>>
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

--
Fred Zlotnick
Senior Director, Open Filesystem and Sharing Technologies
Sun Microsystems, Inc.
[EMAIL PROTECTED]
x81142/+1 650 352 9298








___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] shrinking a zpool - roadmap

2008-08-20 Thread Kyle McDonald
John wrote:
> Our "enterprise" is about 300TB.. maybe a bit more...
>
> You are correct that most of the time we grow and not shrink... however, we 
> are fairly dynamic and occasionally do shrink. DBA's have been known to be 
> off on their space requirements/requests.
>
>   
For the record I agree with you and I'm waiting for this feature also. I 
was only citing my recollection of the explanation given in the past.

To add more from my memory, I think the 'Enterprise grows not shrinks' 
idea is coming from the idea that in ZFS you should be creating fewer 
data pools from a few different specific sized LUNs, and using ZFS to 
allocate filesystems and zVOL's from the pool, instead of customizing 
LUN sizes to create more pools each for different purposes. If true, (if 
you can make all your LUNs one size, and make a few [preferably one I 
think] data zPool per server host) then the need to reduce pool size is 
diminished.

That's not realistic in the home/hobby/developer market, and I'm not 
convinced that's realistic in the enterprise either.
> There is also the human error factor.  If someone accidentally grows a zpool 
> there is no easy way to recover that space without down time.  Some of my 
> LUNs are in the 1TB range and if that gets added to the wrong zpool that 
> space is basically stuck there until I can get a maintenance window. And then 
> I'm not sure that's even possible since my windows are only 3 hours... for example 
> what if I add a LUN to 20TB zpool.  What would I do to remove the LUN?  I 
> think I would have to create a new 20TB pool and move the data from the 
> original to the new zpool... so that would assume I have a free 20TB and the 
> down time
>  
>   
I agree here also, even with a single zpool per server. Consider a 
policy where when the pool grows you always add a RAIDz2 of 10 200GB 
LUNs. So your single data pool is currently 3 of these RAIDz2 vdevs, and 
an admin goes to add 10 more, but forgets the 'raidz2', so you end up 
with 3 RAIDz2, and 10 single LUN non redundant vdevs. How do you fix that?

My suggestion still remains though. Log your enterprises wish for this 
feature through as many channels as you have into Sun. This list, Sales, 
Support, every way you can think of. Get it documented, so that when 
they go to set priorities on RFE's there'll be more data on this one.

  -Kyle
>  
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] shrinking a zpool - roadmap

2008-08-20 Thread Aaron Blew
I've heard (though I'd be really interested to read the studies if someone
has a link) that a lot of this human error percentage comes at the hardware
level.  Replacing the wrong physical disk in a RAID-5 disk group, bumping
cables, etc.

-Aaron



On Wed, Aug 20, 2008 at 3:40 PM, Bob Friesenhahn <
[EMAIL PROTECTED]> wrote:

> On Wed, 20 Aug 2008, Miles Nordin wrote:
>
> >> "j" == John  <[EMAIL PROTECTED]> writes:
> >
> > j> There is also the human error factor.  If someone accidentally
> > j> grows a zpool
> >
> > or worse, accidentally adds an unredundant vdev to a redundant pool.
> > Once you press return, all you can do is scramble to find mirrors for
> > it.
>
> Not to detract from the objective to be able to re-shuffle the zfs
> storage layout, any system administration related to storage is risky
> business.  Few people should be qualified to do it.  Studies show that
> 36% of data loss is due to human error.  Once zfs mirroring, raidz, or
> raidz2 are used to virtually eliminate loss due to hardware or system
> malfunction, this 36% is increased to a much higher percentage.  For
> example, if loss due to hardware or system malfunction is reduced to
> just 1% (still a big number) then the human error factor is increased
> to a whopping 84%.  Humans are like a ticking time bomb for data.
>
> The errant command which accidentally adds a vdev could just as easily
> be a command which scrambles up or erases all of the data.
>
> Bob
> ==
> Bob Friesenhahn
> [EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] shrinking a zpool - roadmap

2008-08-20 Thread paul
Kyle wrote:
> ... If I recall, the low priority was based on the perceived low demand
> for the feature in enterprise organizations. As I understood it, shrinking a
> pool is perceived as being a feature most desired by home/hobby/development
> users, and that enterprises mainly only grow their pools, not shrink.

Although it's historically clear that data tends to grow to fill the available 
storage, it should be equally clear that storage resources are neither free nor 
inexhaustible. The flexibility to redeploy existing storage from one system to 
another considered more critical may simply require the ability to reduce the 
resources in one pool for transfer to another, and that should be easily and 
efficiently accomplishable without having to manually shuffle data around like 
a shell game between multiple storage configurations to achieve the desired 
result.

Equally valid, it seems: if it's determined that some storage resources within 
a pool are beginning to fail and their capacity is not strictly required at 
the moment, it would be good to have any data they contain moved to other 
resources in the pool, and to simply remove them as storage candidates without 
having to replace them with alternates which may not physically exist at the 
moment.

That being said, fixing bugs which would otherwise render the ZFS file system 
unreliable should always trump "nice-to-have" features.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] shrinking a zpool - roadmap

2008-08-20 Thread Will Murnane
On Wed, Aug 20, 2008 at 18:40, Bob Friesenhahn
<[EMAIL PROTECTED]> wrote:
> The errant command which accidentally adds a vdev could just as easily
> be a command which scrambles up or erases all of the data.
True enough, but if there's a way to undo accidentally adding a vdev,
that's one source of disastrously bad human error eliminated.  If the
vdev is removable, then typing "zpool evacuate c3t4d5" to fix the
problem, instead of getting backups up to date, destroying and
recreating the pool, and then restoring from backups, saves quite a
bit of the cost associated with human error in this case.

Think of it as the analogue of "zpool import -D": if you screw up, ZFS
has a provision to at least try to help.  The recent discussion on
accepting partial 'zfs recv' streams is a similar measure.  No system
is perfectly resilient to human error, but any simple ways in which
the resilience (especially of such a large unit as a pool!) can be
improved should be considered.
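The "zpool import -D" provision mentioned above is real; a small sketch of the recovery it enables, with a hypothetical pool name:

```shell
# A destroyed pool is not immediately gone; -D lists destroyed pools
# whose devices are still intact and importable.
zpool import -D

# Bring one back under its old name before its devices get reused.
zpool import -D tank
```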

Will
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] shrinking a zpool - roadmap

2008-08-20 Thread Bob Friesenhahn
On Wed, 20 Aug 2008, Miles Nordin wrote:

>> "j" == John  <[EMAIL PROTECTED]> writes:
>
> j> There is also the human error factor.  If someone accidentally
> j> grows a zpool
>
> or worse, accidentally adds an unredundant vdev to a redundant pool.
> Once you press return, all you can do is scramble to find mirrors for
> it.

Not to detract from the objective to be able to re-shuffle the zfs 
storage layout, any system administration related to storage is risky 
business.  Few people should be qualified to do it.  Studies show that 
36% of data loss is due to human error.  Once zfs mirroring, raidz, or 
raidz2 are used to virtually eliminate loss due to hardware or system 
malfunction, this 36% is increased to a much higher percentage.  For 
example, if loss due to hardware or system malfunction is reduced to 
just 1% (still a big number) then the human error factor is increased 
to a whopping 84%.  Humans are like a ticking time bomb for data.

The errant command which accidentally adds a vdev could just as easily 
be a command which scrambles up or erases all of the data.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] shrinking a zpool - roadmap

2008-08-20 Thread Ian Collins
John wrote:
> Our "enterprise" is about 300TB.. maybe a bit more...
>
> You are correct that most of the time we grow and not shrink... however, we 
> are fairly dynamic and occasionally do shrink. DBAs have been known to be 
> off on their space requirements/requests.
>
>   
Isn't that one of the problems ZFS solves? Grow the pool to meet the
demand rather than size it for the estimated maximum usage. Even
exported vdevs can be thin provisioned.
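Growing on demand rather than pre-sizing for the estimate is exactly what sparse volumes are for; a small sketch, with hypothetical pool and volume names:

```shell
# A sparse ("thin") 100GB volume: the reservation is not taken up
# front, so pool space is consumed only as data actually arrives.
zfs create -s -V 100g tank/dbvol

# Grow it later if the DBA's estimate was low; no downtime needed.
zfs set volsize=200g tank/dbvol
```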

Ian

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] shrinking a zpool - roadmap

2008-08-20 Thread Miles Nordin
> "j" == John  <[EMAIL PROTECTED]> writes:

 j> There is also the human error factor.  If someone accidentally
 j> grows a zpool 

or worse, accidentally adds an unredundant vdev to a redundant pool.
Once you press return, all you can do is scramble to find mirrors for
it.

vdev removal is also needed to, for example, change each vdev in a
big pool of JBOD devices from mirroring to raidz2; in general, for
reconfiguring pools' layouts without an outage, not just for shrinking.
This online-layout-reconfig is also a Veritas selling point, yes?, or
is that Veritas feature considered too risky for actual use?

For my home user setup, the ability to grow a single vdev by replacing
all the disks within it with bigger ones, then export/import, is
probably good enough.  Note however this is still not quite ``online''
because export/import is needed to claim the space.  Though IIRC some
post here said that's fixed in the latest Nevadas, one would have to
look at the whole stack to make sure it's truly online---can FC and
iSCSI gracefully handle a target's changing size and report it to ZFS,
or does FC/iSCSI need to be whacked, or is size change only noticed at
zpool replace/attach time?
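The grow-by-replacing-disks path described above looks roughly like this on later builds, where the autoexpand pool property (or zpool online -e) claims the extra space without an export/import cycle; pool and device names are hypothetical:

```shell
# Let the pool automatically grow into larger replacement devices.
zpool set autoexpand=on tank

# Replace each smaller disk with a bigger one, waiting for each
# resilver to finish before starting the next.
zpool replace tank c1t0d0 c2t0d0
zpool replace tank c1t1d0 c2t1d0

# On builds without autoexpand, expand an already-replaced device
# explicitly instead of exporting and re-importing the pool.
zpool online -e tank c2t0d0 c2t1d0
```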

The thing that really made me wish for 'pvmove' / RFE 4852783 at home
so far is the recovering-from-mistaken-add scenario.


pgpQvOcQ6F2w3.pgp
Description: PGP signature
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] shrinking a zpool - roadmap

2008-08-20 Thread John
Our "enterprise" is about 300TB.. maybe a bit more...

You are correct that most of the time we grow and not shrink... however, we are 
fairly dynamic and occasionally do shrink. DBAs have been known to be off on 
their space requirements/requests.

There is also the human error factor.  If someone accidentally grows a zpool 
there is no easy way to recover that space without down time.  Some of my LUNs 
are in the 1TB range, and if one gets added to the wrong zpool that space is 
basically stuck there until I can get a maintenance window. And then I'm not 
sure that's even possible, since my windows are only 3 hours... for example, 
what if I add a LUN to a 20TB zpool?  What would I do to remove the LUN?  I 
think I would have to create a new 20TB pool and move the data from the 
original to the new zpool... so that would assume I have a free 20TB and the 
down time.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] shrinking a zpool - roadmap

2008-08-20 Thread Kyle McDonald
Mario Goebbels wrote:
>> WOW! This is quite a departure from what we've been
>> told for the past 2 years...
>> 
>
> This must be misinformation.
>
> The reason there's no project (yet) is very likely because pool shrinking 
> depends strictly on the availability of bp_rewrite functionality, which is 
> still in development.
>
> The last time the topic came up, maybe a few months ago, still in 2008, the 
> discussion indicated that it's still on the plan. But as said, it relies on 
> aforementioned functionality to be present.
>
>   
I agree, it's on the plan, but in addition to the dependency on that 
feature it was at a very low priority.  If I recall, the low priority 
was based on the perceived low demand for the feature in enterprise 
organizations. As I understood it, shrinking a pool is perceived as 
being a feature most desired by home/hobby/development users, and that 
enterprises mainly only grow their pools, not shrink.

So if anyone in an enterprise has a need to shrink pools, they might want 
to notify their Sun support people and make their voices heard.

Unless of course I'm wrong... which has been known to happen from time 
to time. :)

  -Kyle

> -mg
>  
>  
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] shrinking a zpool - roadmap

2008-08-20 Thread Mario Goebbels
> WOW! This is quite a departure from what we've been
> told for the past 2 years...

This must be misinformation.

The reason there's no project (yet) is very likely because pool shrinking 
depends strictly on the availability of bp_rewrite functionality, which is 
still in development.

The last time the topic came up, maybe a few months ago, still in 2008, the 
discussion indicated that it's still on the plan. But as said, it relies on 
aforementioned functionality to be present.

-mg
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] shrinking a zpool - roadmap

2008-08-20 Thread John
WOW! This is quite a departure from what we've been told for the past 2 years...

In fact if your comments are true that we'll never be able to shrink a ZFS 
pool, I will be, for lack of a better word, PISSED.

Like others not being able to shrink is a feature that truly prevents us from 
replacing all of our Veritas... without being able to shrink, ZFS will be stuck 
in our dev environment and our non-critical systems...
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] shrinking a zpool - roadmap

2008-08-18 Thread Tim
Long story short,

There isn't a project, there are no plans to start a project, and don't
expect to see it in Solaris10 in this lifetime without some serious pushback
from large Sun customers.  Even then, it's unlikely to happen anytime soon
due to the technical complications of doing so reliably.

--Tim




On Mon, Aug 18, 2008 at 6:06 AM, Bernhard Holzer <[EMAIL PROTECTED]>wrote:

> Hi,
>
> I am searching for a roadmap for shrinking a pool. Is there some
> project? Where can I find information, and when will it be implemented in
> Solaris 10?
>
> Thanks
> Regards
> Bernhard
>
> --
> Bernhard Holzer
> Sun Microsystems Ges.m.b.H.
> Wienerbergstraße 3/7
> A-1100 Vienna, Austria
> Phone x60983/+43 1 60563 11983
> Mobile +43 664 60563 11983
> Fax +43 1 60563  11920
> Email [EMAIL PROTECTED]
> Handelsgericht Wien, Firmenbuch-Nr. FN 186250 y
>
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] shrinking a zpool - roadmap

2008-08-18 Thread Bernhard Holzer
Hi,

I am searching for a roadmap for shrinking a pool. Is there some 
project? Where can I find information, and when will it be implemented in 
Solaris 10?

Thanks
Regards
Bernhard

-- 
Bernhard Holzer
Sun Microsystems Ges.m.b.H.
Wienerbergstraße 3/7
A-1100 Vienna, Austria 
Phone x60983/+43 1 60563 11983
Mobile +43 664 60563 11983
Fax +43 1 60563  11920
Email [EMAIL PROTECTED]
Handelsgericht Wien, Firmenbuch-Nr. FN 186250 y


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2008-06-04 Thread stefano audenino
> Hi, I know there is no single-command way to shrink a
> zpool (say evacuate the data from a disk and then
> remove the disk from a pool), but is there a logical
> way? I.e. mirror the pool to a smaller pool and then
> split the mirror?  In this case I'm not talking about
> disk size (moving from 4 X 72GB disks to 4 X 36GB
> disks) but rather moving from 4 X 73GB disks to 3 X
> 73GB disks assuming the pool would fit on 3X. I
> haven't found a way but thought I might be missing
> something. Thanks.

Hi.
You might try the following:

1) Create the new pool ( your 3 X 73GB disks pool ).

2) Create a snapshot of the main filesystem on your 4 disks pool:
zfs snapshot -r [EMAIL PROTECTED]

3) Copy the snapshot to the new 3 disks pool:
zfs send [EMAIL PROTECTED] | zfs recv newpool/tmp
cd newpool/tmp
mv * ../

4) Replace the old pool with the new one:
zpool destroy poolname ( do a backup before this )
zpool import newpool poolname

This worked for me, but test it before using it on production pools.
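A variant of the same procedure that avoids the cd/mv shuffle is to replicate the whole dataset hierarchy with a recursive send; this is a sketch, assuming an old pool named poolname, a new pool newpool, and a snapshot named today:

```shell
# Snapshot every dataset in the old pool recursively.
zfs snapshot -r poolname@today

# -R replicates the whole dataset tree, properties included, and
# -d places it under the new pool root; -F lets recv roll the
# target back so it can accept the stream.
zfs send -R poolname@today | zfs recv -F -d newpool

# Retire the old pool and give the new one its name.
zpool destroy poolname            # take a backup before this step
zpool export newpool
zpool import newpool poolname
```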
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2008-06-03 Thread Richard Elling
Nathan Galvin wrote:
> When on leased equipment and previously using VxVM, we were able to migrate 
> even a lowly UFS filesystem from one storage array to another via the 
> evacuate process.  I guess this makes us only the 3rd customer waiting for 
> this feature.
>   
>   

UFS cannot be shrunk, so clearly you were not using a "shrink" feature.
For ZFS, you can migrate from one storage device to another using the
"zpool replace" command.
 -- richard
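Migration with "zpool replace" works one device at a time; a sketch with hypothetical device names, moving a pool's data from an old array's LUNs to a new array's:

```shell
# For each device in the pool, replace the old-array LUN with a
# new-array LUN and let the resilver copy the data across.
zpool replace tank c1t0d0 c5t0d0
zpool replace tank c1t1d0 c5t1d0

# Watch resilver progress; each old LUN detaches automatically
# once its replacement has fully resilvered.
zpool status tank
```

This migrates device-for-device, so unlike a true shrink the pool's vdev layout and total size stay the same (or grow, if the new LUNs are larger).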

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2008-06-03 Thread Nathan Galvin
When on leased equipment and previously using VxVM, we were able to migrate 
even a lowly UFS filesystem from one storage array to another via the evacuate 
process.  I guess this makes us only the 3rd customer waiting for this feature.

It would be interesting to ask other users of ZFS on leased storage equipment 
the question of what they plan on doing when their lease is up.

I'd bet that those customers would probably answer something to the effect of 
"Let's migrate our storage to a SAN virtualizer, a la an IBM SVC or Hitachi 
USP-like device, and never have this problem again."
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2008-05-06 Thread Rich Teer
On Tue, 6 May 2008, Brad Bender wrote:

> Solaris 10 update 5 was released 05/2008, but no zpool shrink :-(  Any update?

IIRC, the ability to shrink a pool isn't even in Nevada yet,
so it'll be *some time* before it'll be in an S10 update...

-- 
Rich Teer, SCSA, SCNA, SCSECA

CEO,
My Online Home Inventory

URLs: http://www.rite-group.com/rich
  http://www.linkedin.com/in/richteer
  http://www.myonlinehomeinventory.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2008-05-06 Thread Brad Bender
Solaris 10 update 5 was released 05/2008, but no zpool shrink :-(  Any update?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss