Re: [zfs-discuss] pool use from network poor performance

2010-03-25 Thread homerun
Hi

Installed a PCI add-on network card and disabled the NVIDIA onboard network.
Everything started to work as it should.
So the nge and nv_sata drivers do not work properly in b134 when a shared IRQ is used.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] pool use from network poor performance

2010-03-23 Thread homerun
Hi

Well, what changed in the system:
replaced the 4 SATA disks with new, bigger disks;
at the same time recreated the raidz as raidz2;
updated the OS from b132 to b134.

It used to work with the old setup.
Have there been any driver changes?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] pool use from network poor performance

2010-03-23 Thread homerun
Hi

Here are more specs.

MB:
K8N4-E SE 
- AMD Socket 754 CPU
- NVIDIA® nForce™ 4 4X 
- PCI Express Architecture
- Gigabit LAN
- 4 SATA RAID Ports
- 10 USB2.0 Ports
http://www.asus.com/product.aspx?P_ID=TBx7PakpparxrK89templete=2

Now the situation is this:
with ftp:
I can upload to the datapool at ~45 MB/s,
but download from the datapool at only ~750 KB/s.

So it is now read performance that is the problem.
Could it really be that the nvidia network and SATA drivers now share the same
IRQ, and that is why performance is slow?
Mar 23 19:35:01 hostname unix: [ID 954099 kern.info] NOTICE: IRQ20 is being 
shared by drivers with different interrupt levels.

This is just odd, as this issue appeared when all I changed was the pool's
physical disks, the raidz-to-raidz2 conversion, and the update to build 134.
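
(For anyone else hitting this: one way to see which drivers actually sit on the
same interrupt is to dump the interrupt table from the kernel debugger, and
intrstat shows how much interrupt activity each device generates. Something like:

# echo "::interrupts -d" | mdb -k
# intrstat 5

The exact ::interrupts output format varies between builds.)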
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] pool use from network poor performance

2010-03-22 Thread homerun
Hi

I now have two pools:

rpool 2-way mirror  ( pata )
data 4-way raidz2   ( sata )

If I access the datapool over the network (smb, nfs, ftp, sftp, etc.)
I get at most 200 KB/s.
Compared to rpool, which gives XX MB/s to and from the network, that is slow.

Any ideas what the reason might be and how to track it down?

Locally the datapool works reasonably fast for me:
# date && mkfile 1G testfile && date
Tuesday, March 23, 2010 07:52:19 AM EET
Tuesday, March 23, 2010 07:52:36 AM EET
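
(1 GB in about 17 seconds is roughly 60 MB/s, so local writes look fine. For
completeness the local read side can be checked the same way, e.g. something
like this, assuming the pool is mounted at /datapool:

# ptime dd if=/datapool/testfile of=/dev/null bs=1024k

though the file just written is probably still cached in the ARC, so a larger
file or a reboot in between gives a more honest read number.)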


Some information about the system:
# cat /etc/release
   OpenSolaris Development snv_134 X86
   Copyright 2010 Sun Microsystems, Inc.  All Rights Reserved.
Use is subject to license terms.
 Assembled 01 March 2010

# isainfo -v
64-bit amd64 applications
ahf sse3 sse2 sse fxsr amd_3dnowx amd_3dnow amd_mmx mmx cmov amd_sysc
cx8 tsc fpu
32-bit i386 applications
ahf sse3 sse2 sse fxsr amd_3dnowx amd_3dnow amd_mmx mmx cmov amd_sysc
cx8 tsc fpu

thanks
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Q : recommendations for zpool configuration

2010-03-19 Thread homerun
Greetings

I would like your recommendation on how to set up a new pool.

I have 4 new 1.5 TB disks reserved for the new zpool.
I planned to grow/replace the existing small 4-disk ( raidz ) setup with the
new, bigger one.

As the new pool will be bigger and will hold more personally important data for
long-term storage, I would like to ask your recommendation: should I recreate
the pool or just replace the existing devices?

I have noticed there is now raidz2 and have been wondering which would be
better: a pool with 2 mirrors, or one pool with 4 disks in raidz2.

So could someone at least explain these new raidz configurations?
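
(For reference, my understanding of the two layouts in zpool terms — the pool
name and device names below are just placeholders:

a 4-disk raidz2, capacity of ~2 disks, survives any 2 disk failures:
# zpool create datapool raidz2 c2d0 c3d0 c4d0 c5d0

two 2-way mirrors, capacity of ~2 disks, survives 1 failure per mirror:
# zpool create datapool mirror c2d0 c3d0 mirror c4d0 c5d0
)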

Thanks
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Q : recommendations for zpool configuration

2010-03-19 Thread homerun
Thanks for the comments.

So the possible choices are:

1) 2 2-way mirrors
2) 4-disk raidz2

BTW, can raidz have a spare? If so, there would be one more possible choice:
a 3-disk raidz with 1 spare? (Rough guess at the command below.)

Here I prefer data availability over performance.
And if I ever need to expand or change the setup, that is a problem for that time.
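
(If a spare is allowed with raidz, I guess the create command would look roughly
like this — pool and device names are just placeholders, untested:

# zpool create datapool raidz c2d0 c3d0 c4d0 spare c5d0
)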
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Odd zpool / zfs issue

2008-10-05 Thread homerun
Hi

I have one USB hard drive that shows up in zpool import as a zpool that no
longer exists on the disk.

# zpool import
  pool: usb1
id: 8159001826765429865
 state: FAULTED
status: The pool metadata is corrupted.
action: The pool cannot be imported due to damaged devices or data.
The pool may be active on another system, but can be imported using
the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-72
config:

usb1        FAULTED  corrupted data
  c5t0d0p0  ONLINE

Some time ago I tried to make a mirrored USB zpool, but it failed.
Since then I have been using this disk in Windows, and it is now freshly
formatted as NTFS.
But every time I boot snv it complains about this disk ...

How do I get rid of this, i.e. wipe out that zpool that does not exist?
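
(My guess is the old ZFS labels are still sitting on the disk. If c5t0d0p0
really is that stale USB disk, something like the following should wipe the two
front labels — completely untested, and it destroys everything on that device,
including the NTFS data:

# dd if=/dev/zero of=/dev/rdsk/c5t0d0p0 bs=1024k count=1

ZFS keeps two more labels in the last 512 KB of the device, so those would need
the same treatment with an appropriate oseek.)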
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Q : relayout disks

2008-08-25 Thread homerun
Hi

I have planned to re-lay-out the current mirrored boot disk configuration, which
dates from the days before ZFS boot.
History: I converted the old UFS boot slices, which were mirrored using Solaris
Volume Manager, to ZFS boot...

Current layout is :
2 disks as c0d0 and c1d0
NAME          STATE     READ WRITE CKSUM
rpool         ONLINE       0     0     0
  mirror      ONLINE       0     0     0
    c0d0s0    ONLINE       0     0     0
    c1d0s0    ONLINE       0     0     0

NAME          STATE     READ WRITE CKSUM
localpool     ONLINE       0     0     0
  mirror      ONLINE       0     0     0
    c0d0s5    ONLINE       0     0     0
    c1d0s5    ONLINE       0     0     0

localpool is now emptied, so it does not matter if I delete it, but how do I
resize rpool so that it covers disks c0d0 and c1d0 fully, as a mirror?

I know that replacing the mirror devices one by one will do the job at the zpool
level, but what about ZFS boot?
Is a reinstall the only way, or can this be handled as a live change? (My rough
idea of the procedure is after the layout below.)

So I would like to end up with this setup:
NAME          STATE     READ WRITE CKSUM
rpool         ONLINE       0     0     0
  mirror      ONLINE       0     0     0
    c0d0      ONLINE       0     0     0
    c1d0      ONLINE       0     0     0
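
(My rough idea of the procedure, one disk at a time and completely untested — and
if I understand correctly a ZFS root pool still has to live on a slice with an
SMI label, so "whole disk" here really means an s0 slice covering the whole disk:

# zpool destroy localpool
# zpool detach rpool c1d0s0
  ... repartition c1d0 with format(1M) so that s0 spans the whole disk ...
# zpool attach rpool c0d0s0 c1d0s0
  ... wait for the resilver to finish, see zpool status ...
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1d0s0

and then the same steps again for c0d0.)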

Thanks
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Q : change disks to get bigger pool

2008-01-20 Thread homerun
Hi

I have a raidz pool:

raidz1      ONLINE       0     0     0
  c2d0      ONLINE       0     0     0
  c3d0      ONLINE       0     0     0
  c4d0      ONLINE       0     0     0
  c5d0      ONLINE       0     0     0

It is now getting full.
The plan is to replace the disks with new, larger disks.

So will the pool get bigger just by replacing all 4 disks one by one?
And if it does get larger, how should this be done: fail the disks one by one,
or ...?

Or are a data backup and pool recreation the only way to get a bigger pool?
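
(If the one-by-one replacement works, I assume each disk goes roughly like this —
the pool name and the new disk's controller id are placeholders:

# zpool replace datapool c2d0 c6d0
# zpool status datapool     (wait until the resilver completes before the next disk)

repeated for c3d0, c4d0 and c5d0; the extra capacity should only appear once the
last disk has been replaced.)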

Thanks
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] DBMS on zpool

2007-05-18 Thread homerun
Hi

Just playing around with ZFS, trying to place DBMS data files on a zpool.
By DBMS I mean Oracle and Informix here.
So far I have noticed that read performance is excellent, but write performance
is not, and it also varies a lot.
My guess for the poor and variable write performance is double buffering: the
DBMS buffers and ZFS caching working together.
Has anyone seen or tested best practices for how a DBMS setup should be
implemented on a zpool: zfs or zvol?
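
(What I am experimenting with so far — the dataset names are mine and the 8k
simply matches the database block size I use, so take it as a guess rather than
a best practice:

# zfs create datapool/oradata
# zfs set recordsize=8k datapool/oradata

with the redo/logical logs on a separate dataset left at the default recordsize.)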

Thanks
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Convert raidz

2007-04-02 Thread homerun
Hi

Is it possible to convert a live 3-disk zpool from raidz to raidz2?
And is it possible to add 1 new disk to a raidz configuration without backups and
without recreating the zpool from scratch?

Thanks
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Tunable parameter to zfs memory use

2006-08-16 Thread homerun
Hi

I have been using ZFS since the 06/06 (U2) release came out.
One thing I have noticed: ZFS eats a lot of memory.
Right after boot, memory usage is about 280 MB,
but after accessing the ZFS disks it rises quickly to about 900 MB,
and it seems to stay at a level of about 90% of total memory.
I have also noted that it frees the used memory, but when running heavy
applications you see a performance impact while ZFS frees memory for other use.

Now, an idea:
could there be an /etc/system tunable parameter for ZFS so we could manually
set the percentage of total memory ZFS may use?

Like:
zfs_max_phys_mem 30%
would tell ZFS that it can use at most 30% of available memory,
and vice versa
zfs_min_phys_mem 10%
would tell ZFS that it can always use 10% of available memory.
-- these would probably need to be global values for all ZFS filesystems.

Or just a simple zfs option that can be set like the other zfs options
(compression, etc.); then this setting could easily be per-filesystem.
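
(Roughly what I have in mind — the tunable and property names below are made up,
nothing like this exists as far as I know, and the dataset name is just an
example:

in /etc/system, global for the whole machine:
set zfs:zfs_max_phys_mem = 30
set zfs:zfs_min_phys_mem = 10

or as a per-filesystem property:
# zfs set memlimit=30% datapool/export
)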

Thanks
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss