[zfs-discuss] state of zfs pool shrinking

2009-05-14 Thread Thomas Wagner

Just wanted to ask: how are we making progress with zpool shrinking?

Are there any prerequisite projects we are waiting on?

e.g. tracked by CR 4852783 "reduce pool capacity"
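
Until that CR is implemented, the only way I know of to "shrink" is to
replicate the data into a new, smaller pool and destroy the old one.
A rough sketch, with placeholder pool names and downtime for the final
switch-over:

   zfs snapshot -r tank@migrate                           # recursive snapshot of the whole pool
   zfs send -R tank@migrate | zfs receive -dF smalltank   # replicate all datasets and properties
   zpool destroy tank                                     # retire the oversized pool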

Thomas

--
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-02-28 Thread Thomas Wagner
  pool-shrinking (and an option to shrink disk A when i want disk B to
  become a mirror, but A is a few blocks bigger)
  This may be interesting... I'm not sure how often you need to shrink a pool 
  though?  Could this be classified more as a Home or SME level feature?

Enterprise environments, especially SAN environments, need this.

Projects own their own pools and constantly grow and *shrink* their space.
And they have no downtime available for that.

give a +1 if you agree

Thomas

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-02-28 Thread Thomas Wagner
I would really add: make an insane 'zfs destroy -r poolname' as harmless as
'zpool destroy poolname' (i.e. recoverable)

  zfs destroy -r poolname | poolname/filesystem

  this should behave like this:

  o snapshot each filesystem to be deleted (name it
    @deletedby_operatorname_date)

  o hide the snapshot as long as the pool has enough space and
    the property snapshotbeforedelete=on (default off) is set to 'on'

  o free space by removing those snapshots no earlier than configured
    in an inheritable pool/filesystem property snapshotbeforedeleteremoval=3days
    (0 = preserve forever, 30min = preserve for 30 minutes, ...)


  o prevent deletion of a pool or filesystem if at least one
    snapshot from the above safety actions exists down the tree

  o purging of those snapshots would be done automatically once the
    configured retention period has expired, or manually by an operator
    (a rough user-space stand-in for the whole idea is sketched after this list)

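Until something like this exists natively, the closest user-space stand-in
I can think of is a wrapper that renames instead of destroys; a rough,
untested sketch (the script name, the <pool>/DELETED convention and the
timestamp format are only placeholders, not existing ZFS behaviour):

   #!/bin/ksh
   # safe_destroy.ksh <pool/filesystem>
   # A script cannot hide a snapshot the way the proposed feature could,
   # so it parks the filesystem in a trash area of the same pool instead;
   # a cron job could purge entries older than the retention period.
   fs="$1"
   pool="${fs%%/*}"
   stamp="${LOGNAME}_$(date +%Y%m%d%H%M)"
   zfs list "$pool/DELETED" >/dev/null 2>&1 || zfs create "$pool/DELETED"
   zfs rename "$fs" "$pool/DELETED/$(basename "$fs")_$stamp" && \
       zfs set mountpoint=none "$pool/DELETED/$(basename "$fs")_$stamp"
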

To be honest, I don't want a discussion like the 'rm -rf' one.
In front of the keyboard or inside scripts we are all humans, with
all our mistakes. In contrast to 'rm -rf', the ZFS design
should take this extension without major changes. It should be
a general rule of thumb to implement safety wherever it is possible
at reasonably low cost.

I think the full range of users, enterprise to home, will appreciate
that their multi-million-$$ business data or home data does not go down
accidentally, whether with interactive=on (Bryan's idea) or with the idea
written here. If someone makes an error, all the
data would still be there (!)... ZFS should protect the user as well,
and not only look at hardware redundancy.

Thomas

PS: think of the day when a simple operator $NAME makes a typo in
'zfs destroy -r poolname' and all the data still sits on the
disk, but no one is able to bring that valuable data back
except by restoring from tape, with hours of downtime.
Sorry for repeating that; it hurts so much not to have
this feature.

On Sat, Feb 28, 2009 at 04:35:05AM -0500, Bryan Allen wrote:
 I for one would like an interactive attribute for zpools and
 filesystems, specifically for destroy.
 
 The existing behavior (no prompt) could be the default, but all
 filesystems would inherit from the zpool's attrib. so I'd only
 need to set interactive=on for the pool itself, not for each
 filesystem.
 
 I have yet (in almost two years of using ZFS) to bone myself by
 accidentally destroying tank/worthmorethanyourjob, but it's only
 a matter of time, regardless of how careful I am.
 
 The argument rm vs zfs destroy doesn't hold much water to me. I
 don't use rm -i, but destroying a single file or a hierarchy of
 directories is somewhat different than destroying a filesystem or
 entire pool. At least to my mind.
 
 As such, consider it a peace of mind feature.
 -- 
 bda
 Cyberpunk is dead.  Long live cyberpunk.
 http://mirrorshades.org

-- 
Thomas Wagner
+49-171-6135989  http://www.wagner-net.net
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best option for my home file server?

2007-09-28 Thread Thomas Wagner
Slicing, say, s0 to be used as the root filesystem would make
ZFS not use the write cache on the disks.
That would be a slight performance degradation, but it would increase
the reliability of the system (since root is mirrored).

Why not live on the edge and boot from ZFS?
That would nearly eliminate UFS.

Use e.g. the two 500GB disks for the root filesystem
on a mirrored pool:

   mirror  X Z   here lives the OS with its root filesystem on ZFS
                 *and* userdata in the same pool

   raidz A B C D or any other layout

or
 use two of the 250GB ones:

pool boot-and-userdata-one
   mirror A B   here lives the OS and userdata-one

pool userdata-two
   mirror C D   userdata-two spanning C D and X Y
   mirror X Y
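
Purely as an illustration of the second layout, a hedged sketch with
placeholder disk names (cXtYd0) and the pool names from above; booting
from ZFS itself of course still depends on your build's ZFS boot support:

   zpool create boot-and-userdata-one mirror c1t0d0 c1t1d0
   zpool create userdata-two mirror c2t0d0 c2t1d0
   # stripe a second mirror into the same pool later on:
   zpool add userdata-two mirror c3t0d0 c3t1d0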

Thomas


On Thu, Sep 27, 2007 at 08:39:40PM +0100, Dick Davies wrote:
 On 26/09/2007, Christopher [EMAIL PROTECTED] wrote:
  I'm about to build a fileserver and I think I'm gonna use OpenSolaris and 
  ZFS.
 
  I've got a 40GB PATA disk which will be the OS disk,
 
 Would be nice to remove that as a SPOF.
 
 I know ZFS likes whole disks, but I wonder how much would performance suffer
 if you SVMed up the first few Gb of a ZFS mirror pair for your root fs?
 I did it this week on Solaris 10 and it seemed to work pretty well
 
 (
 http://number9.hellooperator.net/articles/2007/09/27/solaris-10-on-mirrored-disks
 )
 
 Roll on ZFS root :)
 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS raid is very slow???

2007-07-17 Thread Thomas Wagner
Orvar,

I've seen around 50 to 60 MB/s when two disks are writing,
and around 100 MB/s when reading round-robin.

The limiting factor has been the old PCI bus (*not* the 32-bit
slot length) and, in another test, the 1-lane PCI Express bus
(Sil680/SiI3124-2 and SiI3132 chips).
 
So if you see a factor of 2 between read and write throughput
in a 1:1 mirror setup, I would say you have hit the bottleneck of your
PCI bus: writes have to cross the bus once per mirror side, while reads
are served round-robin and cross it only once.
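
A quick way to check that (the path and file size are placeholders, and
the test file should be bigger than RAM, otherwise the read is served
from the ARC instead of the disks):

   # write: the data crosses the bus once per mirror side
   dd if=/dev/zero of=/tank/ddtest bs=128k count=8192
   # read back: round-robin across the mirror, data crosses the bus once
   dd if=/tank/ddtest of=/dev/null bs=128k
   rm /tank/ddtest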

Thomas

On Sun, Jul 15, 2007 at 03:37:06AM -0700, Orvar Korvar wrote:
 I did that, and here are the results from the ZFS jury:
 
 bash-3.00$ timex dd if=/dev/zero of=file bs=128k count=8192
 8192+0 records in
 8192+0 records out
 
 real  19.40
 user   0.01
 sys1.54
 
 
 
 That is, 1GB created on 20sec = 50MB/sec. That is better, but still not good, 
 as each drive of the four drives are capable of 50MB/sec. However, I can not 
 achieve 50MB/sec in normal use. Strange.
 
 I will presume that the numbers get better when I upgrade to 64bit.
  
  
 This message posted from opensolaris.org

-- 
Mit freundlichen Gruessen,

Thomas Wagner

--

*
Thomas WagnerTel:+49-(0)-711-720 98-131
Strategic Support Engineer   Fax:+49-(0)-711-720 98-443
Global Customer Services Cell:   +49-(0)-175-292 60 64
Sun Microsystems GmbHE-Mail: [EMAIL PROTECTED]
Zettachring 10A, D-70567 Stuttgart   http://www.sun.de

Sitz der Gesellschaft: Sun Microsystems GmbH, Sonnenallee 1, D-85551 
Kirchheim-Heimstetten
Amtsgericht Muenchen: HRB 161028
Geschaeftsfuehrer: Wolfgang Engels, Dr. Roland Boemer
Vorsitzender des Aufsichtsrates: Martin Haering

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] mkdir == zfs create

2006-09-28 Thread Thomas Wagner
  (And if you don't need it to work remotely automount could take care of it
  if you think cd should be sufficient reason to create a directory)
 
 Maybe on unmount empty filesystems could be destroyed.
 

More generally?


If we have events like a library call to mkdir or chdir,
or no open file descriptors, or unmount, then operator-predefined
actions could be triggered.

Actions like 'zfs create <rule-based name>', taking a snapshot, or
'zfs send' of a snapshot, and others could be thought of.
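
For example, the action chain a mkdir-triggered rule might run, written
out by hand (dataset and path names are placeholders; no such trigger
mechanism exists today):

   zfs create tank/projects/newproj                  # a filesystem instead of a plain directory
   zfs snapshot tank/projects/newproj@created        # baseline snapshot
   zfs send tank/projects/newproj@created > /backup/newproj_created.zstream   # optional off-host copy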


Thomas

--

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS imported simultanously on 2 systems...

2006-09-13 Thread Thomas Wagner
 
On Wed, Sep 13, 2006 at 12:28:23PM +0200, Michael Schuster wrote:
 Mathias F wrote:
 Well, we are using the -f parameter to test failover functionality.
 If one system with mounted ZFS is down, we have to use the force to mount 
 it on the failover system.
 But when the failed system comes online again, it remounts the ZFS without 
 errors, so it is mounted simultanously on both nodes

This is used on a regular basis within cluster frameworks...

 ZFS currently doesn't support this, I'm sorry to say. *You* have to make 
 sure that a zpool is not imported on more than one node at a time.

Why not let real cluster software be that *You*, taking care of using
resources like a filesystem (UFS, ZFS, others ...) in a consistent way?

I think ZFS does enough to make sure filesystems/pools are not
accidentally used from more than one host at a time. If you
want more, please consider using a cluster framework with heartbeats
and all that great stuff ...
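
For the record, the sequence a cluster framework (or a careful operator)
runs on a planned failover, with 'tank' as a placeholder pool name:

   # on the node giving up the pool (only possible while it is still alive):
   zpool export tank
   # on the node taking over:
   zpool import tank
   # only if the old node died and could not export cleanly, and only
   # after making sure it really is down:
   zpool import -f tank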


Regards,
Thomas

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ?: ZFS and jumpstart export race condition

2006-09-08 Thread Thomas Wagner
Steffen,

I have the same issue with my home install server. As a dirty solution I
set mount-at-boot to 'no' for the lofs filesystems, to get the system up.
But with every new OS added by JET the mount-at-boot entry reappears.

It seems to me the question is when a lofs filesystem should be mounted
at boot, and when a ZFS filesystem gets mounted.
Probably a ZFS legacy mount together with a lower-priority lofs mount
would do it.
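
A hedged sketch of that legacy-mount variant (pool1/jumpstart taken from
Steffen's mail below; I have not verified that this fixes the boot
ordering on S10 6/06):

   # let /etc/vfstab, not the ZFS mount service, mount the filesystem
   zfs set mountpoint=legacy pool1/jumpstart

   # in /etc/vfstab the zfs line should come before the lofs lines,
   # since the entries are processed top to bottom:
   pool1/jumpstart  -  /export/jumpstart  zfs  -  yes  -
   /export/jumpstart/s10/x86/boot  -  /tftpboot/I86PC.Solaris_10-1  lofs  -  yes  ro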

Regards,
Thomas

On Fri, Sep 08, 2006 at 08:18:06AM -0400, Steffen Weiberle wrote:
 I have a jumpstart server where the install images are on a ZFS pool. 
 For PXE boot, several lofs mounts are created and configured in 
 /etc/vfstab. My system does not boot properly anymore because the 
 mounts referring to jumpstart files haven't been mounted yet via ZFS.
 
 What is the best way of working around this? Can I just create the 
 necessary mounts of pool1/jumpstart in /etc/vfstab, or is ZFS just not 
 running yet when these mounts get attempted?
 
 A lot of network services, including ssh, are not running because 
 fs-local did not come up clean.
 
 Is this a known problem that is being addressed? This is S10 6/06.
 
 Thanks
 Steffen
 
 
 # cat /etc/vfstab
 ...
 /export/jumpstart/s10/x86/boot - /tftpboot/I86PC.Solaris_10-1 lofs - 
 yes ro
 /export/jumpstart/nv/x86/latest/boot - /tftpboot/I86PC.Solaris_11-1 
 lofs - yes ro
 /export/jumpstart/s10u3/x86/latest/boot - /tftpboot/I86PC.Solaris_10-2 
 lofs - yes ro
 
 
 
 # zfs get all pool1/jumpstart
 NAME             PROPERTY       VALUE                  SOURCE
 pool1/jumpstart  type           filesystem             -
 pool1/jumpstart  creation       Mon Jun 12  8:26 2006  -
 pool1/jumpstart  used           39.9G                  -
 pool1/jumpstart  available      17.7G                  -
 pool1/jumpstart  referenced     39.9G                  -
 pool1/jumpstart  compressratio  1.00x                  -
 pool1/jumpstart  mounted        yes                    -
 pool1/jumpstart  quota          none                   default
 pool1/jumpstart  reservation    none                   default
 pool1/jumpstart  recordsize     128K                   default
 pool1/jumpstart  mountpoint     /export/jumpstart      local
 pool1/jumpstart  sharenfs       ro,anon=0              local
 pool1/jumpstart  checksum       on                     default
 pool1/jumpstart  compression    off                    default
 pool1/jumpstart  atime          on                     default
 pool1/jumpstart  devices        on                     default
 pool1/jumpstart  exec           on                     default
 pool1/jumpstart  setuid         on                     default
 pool1/jumpstart  readonly       off                    default
 pool1/jumpstart  zoned          off                    default
 pool1/jumpstart  snapdir        hidden                 default
 pool1/jumpstart  aclmode        groupmask              default
 pool1/jumpstart  aclinherit     secure                 default
 

-- 
Mit freundlichen Gruessen,

Thomas Wagner

--

*
Thomas WagnerTel:+49-(0)-711-720 98-131
Strategic Support Engineer   Fax:+49-(0)-711-720 98-443
Global Customer Services Cell:   +49-(0)-175-292 60 64
Sun Microsystems GmbHE-Mail: [EMAIL PROTECTED]
Zettachring 10A, D-70567 Stuttgart   http://www.sun.de
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss