[zfs-discuss] [Blade 150] ZFS: extreme low performance

2006-09-15 Thread Mathias F
Hi forum,

I'm currently playing around a little with ZFS on my workstation.
I created a standard mirrored pool over two disk slices.
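
For reference, a mirrored pool over two slices like this is created with something along these lines (slice names as in the status output below):

# zpool create mypool mirror c0t0d0s4 c0t2d0s4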

# zpool status
  pool: mypool
 state: ONLINE
 scrub: none requested
config:

NAME  STATE READ WRITE CKSUM
mypoolONLINE   0 0 0
  mirrorONLINE   0 0 0
c0t0d0s4  ONLINE   0 0 0
c0t2d0s4  ONLINE   0 0 0

Then I created a ZFS filesystem with no extra options:

# zfs create mypool/zfs01
# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
mypool 106K  27.8G  25.5K  /mypool
mypool/zfs01  24.5K  27.8G  24.5K  /mypool/zfs01

When I now run mkfile on the new FS, the performance of the whole system
drops to near zero:

# mkfile 5g test

last pid: 25286;  load avg:  3.54,  2.28,  1.29;  up 0+01:44:26   16:16:24
66 processes: 61 sleeping, 3 running, 1 zombie, 1 on cpu
CPU states:  0.0% idle,  2.1% user, 97.9% kernel,  0.0% iowait,  0.0% swap
Memory: 512M phys mem, 65M free mem, 2050M swap, 2050M free swap

   PID USERNAME LWP PRI NICE  SIZE   RES STATE    TIME    CPU COMMAND
 25285 root   1   84 1184K  752K run  0:09 66.28% mkfile


It seems that some kind of kernel activity while writing to ZFS is blocking
the system.
Is this a known problem? Do you need additional information?

regards
Mathias
 
 


[zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Mathias F
Well, we are using the -f parameter to test failover functionality.
If the system with the ZFS pool imported goes down, we have to force the import on
the failover system.
But when the failed system comes online again, it re-imports the pool without
errors, so it is mounted simultaneously on both nodes.
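
Concretely, the sequence we test looks roughly like this (node names are only illustrative):

node-b# zpool import -f mypool     <- forced import while node-a is down
node-a# (boots and re-imports the pool it still considers its own)

After that, mypool is imported on both nodes at once.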

That's the real problem we have :[

regards
Mathias

 I think this is user error: the man page explicitly
 says:

  -f   Forces import, even if the pool appears to be
       potentially active.

 That's exactly what you did. If the behaviour had been the same
 without the -f option, I guess this would be a bug.

 HTH
 


[zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Mathias F
Without the -f option, the pool can't be imported while it is reserved by the other
host, even if that host is down.
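
The non-forced attempt fails with a message along the lines of:

# zpool import mypool
cannot import 'mypool': pool may be in use from other system
use '-f' to import anyway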

As I said, we are testing ZFS as a *replacement for VxVM*, which we are
using at the moment. As a result, our tests have failed and we have to keep using
Veritas.

Thanks for all your answers.
 
 


[zfs-discuss] Re: Re: Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Mathias F
[...]
 a product which is *not* currently multi-host-aware to behave
 in the same safe manner as one which is.

That's the point we figured out while testing it ;)
I just wanted to have our thoughts reviewed by other ZFS users.

IF the failover had succeeded, our next step would have been to write a small
ZFS agent for a VCS test cluster (a rough sketch follows below).
We haven't used Sun Cluster and won't use it in the future.
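
A rough sketch of what such a script-based agent could look like; the pool name is
from the earlier posts, and the online/offline/monitor split with the 110/100 monitor
exit codes follows the usual VCS script-agent convention (illustrative only, not tested):

#!/bin/sh
# online: bring the pool up on this node; -f is needed after a crash,
# which is exactly the dangerous part discussed in this thread
zpool import -f mypool

#!/bin/sh
# offline: cleanly export the pool before failing over
zpool export mypool

#!/bin/sh
# monitor: VCS expects exit 110 for online, 100 for offline
if zpool list mypool > /dev/null 2>&1; then
        exit 110
else
        exit 100
fi

Of course, the forced import in the online script is exactly what makes the
double-import problem possible without additional fencing.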

regards
Mathias
 
 