Re: [zfs-discuss] Another user loses his pool (10TB) in this case and 40

2009-08-01 Thread Scott Lawson



Dave Stubbs wrote:

I don't mean to be offensive Russel, but if you do ever return to ZFS,
please promise me that you will never, ever, EVER run it virtualized on
top of NTFS (a.k.a. worst file system ever) in a production environment.
Microsoft Windows is a horribly unreliable operating system in situations
where things like protecting against data corruption are important.
Microsoft knows this



Oh WOW!  Whether or not our friend Russel virtualized on top of NTFS (he didn't - he used raw disk access), this point is amazing!  System5 - based on this thread I'd say you can't really make this claim at all.  Solaris suffered a crash and the ZFS filesystem lost EVERYTHING!  And there aren't even any recovery tools?


HANG YOUR HEADS!!!

Recovery from the same situation is EASY on NTFS.  There are piles of tools out 
there that will recover the file system, and failing that, locate and extract 
data.  The key parts of the file system are stored in multiple locations on the 
disk just in case.  It's been this way for over 10 years.  I'd say it seems from 
this thread that my data is a lot safer on NTFS than it is on ZFS!

You mean the data that you don't know you have lost yet? ZFS allows you 
to be very paranoid about data protection with things like copies=2 or 
copies=3.
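A minimal sketch of that knob (the pool and dataset names here are hypothetical):

  # keep two redundant copies of every data block in this dataset (1-3 allowed)
  zfs set copies=2 tank/important
  # confirm the setting
  zfs get copies tank/important

This protects against localized media errors even on a single-disk pool, though not against losing the whole device.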
  
I can't believe my eyes as I read all these responses blaming system engineering and hiding behind ECC memory excuses and well, you know, ZFS is intended for more Professional systems and not consumer devices, etc etc.  My goodness!  You DO realize that Sun has this website called opensolaris.org which actually proposes to have people use ZFS on commodity hardware, don't you?  I don't see a huge warning on that site saying ATTENTION:  YOU PROBABLY WILL LOSE ALL YOUR DATA.  


I recently flirted with putting several large Unified Storage 7000 systems on 
our corporate network.  The hype about ZFS is quite compelling and I had a 
positive experience in my lab setting.  But because we have no Solaris 
capability on our staff, we went in another direction instead.
  
You do realize that the 7000 series machines are appliances and have no 
prerequisite for you to have any Solaris knowledge whatsoever? They are a 
supported device just like any other disk storage system that you can 
purchase from any vendor and have it supported as such. To use it, all you 
need is a web browser. That's it.
This is no different from your EMC array or HP StorageWorks hardware, 
except that the underpinnings of the storage system are there for all to 
see, in the form of open source code contributed to the community by Sun.
Reading this thread, I'm SO glad we didn't put ZFS in production in ANY way.  Guys, this is the real world.  Stuff happens.  It doesn't matter what the reason is - hardware lying about cache commits, out-of-order commits, failure to use ECC memory, whatever.  It is ABSOLUTELY unacceptable for the filesystem to be entirely lost.  No excuse or rationalization of any type can be justified.  There MUST be at least the base suite of tools to deal with this stuff.  Without it, ZFS simply isn't ready yet.
  
Sounds like you have no real-world experience of ZFS in production 
environments and its true reliability. As many people here report, there 
are thousands if not millions of zpools out there containing 
business-critical environments that are happily surviving broken hardware 
on a daily basis. I have personally seen all sorts of pieces of hardware 
break, and ZFS corrected and fixed things for me. I personally manage 50 
plus ZFS zpools that are anywhere from 100GB to 30TB. Works very, very, 
very well for me.
I have never lost anything, despite having had plenty of pieces of 
hardware break in some form underneath ZFS.

I am saving a copy of this thread to show my colleagues and also those Sun 
Microsystems sales people that keep calling.
  




Re: [zfs-discuss] I Still Have My Data

2009-08-01 Thread Ross
Same here, I've got a test server at work running 15x 500GB SATA disks on a 
pair of AOC-SAT2-MV8 cards, it suffered some 20 minutes of slow response when a 
disk started to fail, but although that caused a few problems with the clients, 
the data is still there.

However, my home system has been superb.  That's 6x 1TB SATA disks on an 
AOC-SAT2-MV8, it's suffered multiple power cuts (8 or more), a dead disk, and 
has been upgraded to many of the bi-weekly OpenSolaris builds.  It's never gone 
down, and has been serving data to a mix of Linux, Windows and Xbox clients 
without a single hiccup.


Re: [zfs-discuss] I Still Have My Data

2009-08-01 Thread Thomas Burgess
I've been running ZFS on FreeBSD and I've had no problems.  ZFS is still
considered experimental in FreeBSD, but it's working wonderfully.  I have 3
raidz1 vdevs with 4 1TB drives each, and I've had several power outages and
I've yanked out disks just to see what would happen... it's been fine.  I
used FreeBSD because there was no support for my RAID card in OpenSolaris
yet.  I'm using ZFS on 2 other OpenSolaris computers but with much smaller
pools... the desktop I'm writing this on has a simple mirror of 2 250GB
disks, and a ZFS laptop I have has a single no-redundancy setup.

All around I'm very impressed and satisfied with ZFS.

On Sat, Aug 1, 2009 at 4:03 AM, Ross no-re...@opensolaris.org wrote:

 Same here, I've got a test server at work running 15x 500GB SATA disks on a
 pair of AOC-SAT2-MV8 cards, it suffered some 20 minutes of slow response
 when a disk started to fail, but although that caused a few problems with
 the clients, the data is still there.

 However, my home system has been superb.  That's 6x 1TB SATA disks on an
 AOC-SAT2-MV8, it's suffered multiple power cuts (8 or more), a dead disk,
 and has been upgraded to many of the bi-weekly OpenSolaris builds.  It's
 never gone down, and has been serving data to a mix of Linux, Windows and
 Xbox clients without a single hiccup.


Re: [zfs-discuss] Another user loses his pool (10TB) in this case and 40

2009-08-01 Thread Brian
 On Fri, 31 Jul 2009, Brian wrote:
 
  I must say this thread has also damaged the view I have of ZFS.
  I've been considering just getting a RAID 5 controller and going the
  Linux route I had planned on.
 
 Thankfully, the zfs users who have never lost a pool do not spend much
 time posting about their excitement at never losing a pool.
 Otherwise this list would be even more overwhelming.
 
 I have not yet lost a pool, and this includes the one built on USB
 drives which might be ignoring cache sync requests.
 
 Bob
 --
 Bob Friesenhahn
 bfrie...@simple.dallas.tx.us,
 http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,
http://www.GraphicsMagick.org/
 


Yes, you are right, I spoke irrationally.  I still intend to try it out at least 
for a period of time to see what I think.  I'll put it through the standard 
tests and such.  However, I am having trouble getting my motherboard to 
recognize 4 of the hard drives I picked (I made a post about it in the storage 
forum).  Once that's finished I'll get this testing underway.


Re: [zfs-discuss] The importance of ECC RAM for ZFS

2009-08-01 Thread Victor Latushkin

On 25.07.09 00:30, Rob Logan wrote:

  The post I read said OpenSolaris guest crashed, and the guy clicked
  the ``power off guest'' button on the virtual machine.

I seem to recall the guest hung. 99% of Solaris hangs (without
a crash dump) are hardware in nature (my experience, backed by
an uptime of 1116 days), so the finger is still
pointed at VirtualBox's hardware implementation.

As for ZFS requiring better hardware: you could turn
off checksums and other protections so one isn't notified
of issues, making it act like the others.


You cannot turn off checksums and copies for metadata though, so even if you 
don't care about your data, ZFS still cares about its metadata.
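As a minimal illustration (pool and dataset names are made up), the checksum property only governs user data; metadata is always checksummed and internally duplicated, so a scrub can still detect metadata damage:

  # disable data checksums on one dataset (not recommended)
  zfs set checksum=off tank/scratch
  # metadata blocks remain checksummed, so a scrub still verifies them
  zpool scrub tank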


victor


Re: [zfs-discuss] Install and boot from USB stick?

2009-08-01 Thread tore
Nah, that didn't seem to do the trick.

Also tried this:
http://blogs.sun.com/thaniwa/entry/en_opensolaris_installation_into_usb

But that didn't seem to work either. After unmounting and rebooting, I get the 
same error message as in my previous post.

Don't know if there is much more to do... Suggestions?


Re: [zfs-discuss] crossmnt ?

2009-08-01 Thread Cyril Plisko
On Fri, Jul 31, 2009 at 12:46 AM, roland no-re...@opensolaris.org wrote:
 Hello!

 How can I export a filesystem /export1 so that sub-filesystems within that
 filesystem will be available and usable on the client side without
 additional mount/share effort?

 This is possible with linux nfsd and I wonder how this can be done with
 Solaris NFS.

 I'd like to use /export1 as a datastore for ESX and create zfs sub-filesystems
 for each VM in that datastore, for better snapshot handling.


If you do zfs set sharenfs=on yourpool/yourfilesystem/export, then
all the file systems created under yourpool/yourfilesystem/export will
inherit this (sharenfs) property and will be shared automagically as
they are created. Try to create a couple of such filesystems and then
run zfs get sharenfs - you'll see what I mean.
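A quick sketch for the ESX datastore case (pool and dataset names are invented for the example):

  # share the parent; children inherit sharenfs as they are created
  zfs set sharenfs=on tank/export1
  zfs create tank/export1/vm01
  zfs create tank/export1/vm02
  # confirm the inherited property
  zfs get -r sharenfs tank/export1

Each child is still a separate NFS share, so whether the client sees them under one mount point depends on its NFS version and mount behaviour.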


-- 
Regards,
Cyril


Re: [zfs-discuss] Install and boot from USB stick?

2009-08-01 Thread Jürgen Keil
 Nah, that didn't seem to do the trick.

 After unmounting and rebooting, I get the same error
 message from my previous post.

Did you get these SCSI error messages during installation
to the USB stick, too?

Another thing that confuses me: the unit attention /
medium may have changed message is logged with
error level: retryable.  I think the sd disk driver
is supposed to just retry the read or write operation.
The message seems more like a warning message,
not a fatal error.
Are there any messages with Error level: fatal?


Re: [zfs-discuss] Lundman home NAS

2009-08-01 Thread Jorgen Lundman

Some preliminary speed tests, not too bad for a pci32 card.

http://lundman.net/wiki/index.php/Lraid5_iozone


Jorgen Lundman wrote:


Finding a SATA card that would work with Solaris, and be hot-swap, and 
more than 4 ports, sure took a while. Oh and be reasonably priced ;) 
Double the price of the dual core Atom did not seem right.


The SATA card was a close fit to the jumper where the power-switch cable 
attaches, as you can see in one of the photos. This is because the MV8 
card is quite long, and has the big plastic SATA sockets. It does fit, 
but it was the tightest spot.


I also picked the 5-in-3 drive cage that had the shortest depth 
listed, 190mm. For example the Supermicro M35T is 245mm, another 5.5cm. 
Not sure that would fit.


Lund


Nathan Fiedler wrote:

Yes, please write more about this. The photos are terrific and I
appreciate the many useful observations you've made. For my home NAS I
chose the Chenbro ES34069 and the biggest problem was finding a
SATA/PCI card that would work with OpenSolaris and fit in the case
(technically impossible without a ribbon cable PCI adapter). After
seeing this, I may reconsider my choice.

For the SATA card, you mentioned that it was a close fit with the case
power switch. Would removing the backplane on the card have helped?

Thanks

n


On Fri, Jul 31, 2009 at 5:22 AM, Jorgen Lundmanlund...@gmo.jp wrote:

I have assembled my home RAID finally, and I think it looks rather good.

http://www.lundman.net/gallery/v/lraid5/p1150547.jpg.html

Feedback is welcome.

I have yet to do proper speed tests, I will do so in the coming week should
people be interested.

Even though I have tried to use only existing, and cheap, parts the end sum
became higher than I expected. Final price is somewhere in the 47,000 yen
range. (Without hard disks.)

If I were to make and sell these, they would be 57,000 or so, so I do not
really know if anyone would be interested. Especially since SOHO NAS devices
seem to start around 80,000.

Anyway, sure has been fun.

Lund






--
Jorgen Lundman   | lund...@lundman.net
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
Japan| +81 (0)3 -3375-1767  (home)


[zfs-discuss] zfs: how is size of Volume computed?

2009-08-01 Thread Andrew . Rutz

Hi,

I'm using a zvol someone else created (and then used as
an iSCSI Target, via: iscsitadm ... -b /dev/zvol ...).

I see that AVAIL has a size of 33GB, yet the VOLSIZE is 24GB ;

# zfs list -t volume -o name,avail,used,volsize iscsi-pool/log_1_1
NAMEAVAIL   USED  VOLSIZE
iscsi-pool/log_1_1  33.7G  24.4G24.4G


I debugged 'format', and it received a size of 24GB from Sun's
iSCSI Target implementation.  'format' did a READ-CAPACITY (SCSI)
cmd and was returned 24GB.

Why does 33GB show as AVAIL?  Should I be expecting 33GB's worth
of usable disk space? ...or does AVAIL (for some weird reason)
also include metadata??

# zdb -v iscsi-pool/log_1_1
Dataset iscsi-pool/log_1_1 [ZVOL], ID 117, cr_txg 74, 54.0K, 3 objects

Object  lvl   iblk   dblk  lsize  asize  type
 0716K16K16K  14.0K  DMU dnode
 1416K 8K  24.4G  38.0K  zvol object  
 2116K512512 1K  zvol prop


thanks
/andrew


[zfs-discuss] zpool causing boot to hang

2009-08-01 Thread Mark Johnson


I was wondering if this is a known problem..

I am running stock b118 bits. The system has a UFS root
and a single zpool (with multiple NFS, SMB, and iSCSI
exports).

Powered off my machine last night..  Powered it on this
morning and it hung during boot.  It hung when reading the
zpool disks.. It would read them for a while, stop
reading and hang. I let it sit for over 4 hours... Tried
multiple power cycles, etc.

I was able to power off the disks, and boot the machine..
I exported the zpool, powered on the disks, and rebooted.
The machine booted and I tried to import the zpool.
It did import after about 5 minutes (which seemed a lot
longer than it had been in the past), I saw the following
processes running during this time..

root   820   368   0 15:14:37 ?   0:00 zfsdle 
/devices/p...@0,0/pci1043,8...@8/d...@1,0:a
root   818   368   0 15:14:37 ?   0:00 zfsdle 
/devices/p...@0,0/pci1043,8...@7/d...@0,0:a
root   819   368   0 15:14:37 ?   0:00 zfsdle 
/devices/p...@0,0/pci1043,8...@8/d...@0,0:a

One thing that could be related is that I was running
a scrub when I had powered off the system. The scrub
started up again after I had imported the pool.

Anyone know if this is a known problem?


Thanks,

MRJ



-bash-3.2# zpool status
  pool: tank
 state: ONLINE
 scrub: scrub in progress for 0h12m, 3.25% done, 6h16m to go
config:

NAMESTATE READ WRITE CKSUM
tankONLINE   0 0 0
  raidz1ONLINE   0 0 0
c1t0d0  ONLINE   0 0 0
c2t0d0  ONLINE   0 0 0
c1t1d0  ONLINE   0 0 0

errors: No known data errors
-bash-3.2#


-bash-3.2# zdb -C
tank
version=16
name='tank'
state=0
txg=2866038
pool_guid=690654980843352264
hostid=786700041
hostname='asus-a8n'
vdev_tree
type='root'
id=0
guid=690654980843352264
children[0]
type='raidz'
id=0
guid=9034903530721214825
nparity=1
metaslab_array=14
metaslab_shift=33
ashift=9
asize=960171343872
is_log=0
children[0]
type='disk'
id=0
guid=17813126553843208646
path='/dev/dsk/c1t0d0s0'
devid='id1,s...@ast3320620as=5qf3ysjj/a'
phys_path='/p...@0,0/pci1043,8...@8/d...@0,0:a'
whole_disk=1
DTL=32
children[1]
type='disk'
id=1
guid=6761028837288241506
path='/dev/dsk/c2t0d0s0'
devid='id1,s...@ast3320620as=5qf3yqxb/a'
phys_path='/p...@0,0/pci1043,8...@7/d...@0,0:a'
whole_disk=1
DTL=31
children[2]
type='disk'
id=2
guid=15791031942666816527
path='/dev/dsk/c1t1d0s0'
devid='id1,s...@ast3320620as=5qf3ys51/a'
phys_path='/p...@0,0/pci1043,8...@8/d...@1,0:a'
whole_disk=1
DTL=30
-bash-3.2#




Re: [zfs-discuss] Lundman home NAS

2009-08-01 Thread Louis-Frédéric Feuillette
On Sat, 2009-08-01 at 22:31 +0900, Jorgen Lundman wrote:
 Some preliminary speed tests, not too bad for a pci32 card.
 
 http://lundman.net/wiki/index.php/Lraid5_iozone

I don't know anything about iozone, so the following may be NULL & void.

I find the results suspect.  1.2GB/s read, and 500MB/s write!  These are
impressive numbers indeed.  I then looked at the file sizes that iozone
used...  How much memory do you have?  It seems like the files would be
able to comfortably fit in memory.  I think this test needs to be re-run
with large files (i.e. 2x memory size) for them to give more accurate
data.
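For reference, a sweep forced beyond RAM might look roughly like this (a sketch; the sizes assume about 2GB of memory on the test box and the output filename is arbitrary):

  # auto mode, file sizes from 64MB up to 4GB, Excel-friendly report output
  iozone -Ra -n 64m -g 4g -b lraid5-results.xls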

Unrelated, what did you use to generate those graphs, they look good.
Also, do you have a hardware list on your site somewhere that I missed?
I'd like to know more about the hardware.

-- 
Louis-Frédéric Feuillette jeb...@gmail.com



Re: [zfs-discuss] Lundman home NAS

2009-08-01 Thread Bob Friesenhahn

On Sat, 1 Aug 2009, Louis-Frédéric Feuillette wrote:


I find the results suspect.  1.2GB/s read, and 500MB/s write!  These are
impressive numbers indeed.  I then looked at the file sizes that iozone
used...  How much memory do you have?  It seems like the files would be
able to comfortably fit in memory.  I think this test needs to be re-run
with large files (i.e. 2x memory size) for them to give more accurate
data.


The numbers are indeed suspect but the iozone sweep test is quite 
useful in order to see the influence of zfs's caching via the ARC. 
The sweep should definitely be run to at least 2X the memory size.



Unrelated, what did you use to generate those graphs, they look good.


Iozone output may be plotted via gnuplot or Microsoft Excel.  This 
looks like the gnuplot output.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/


Re: [zfs-discuss] zpool causing boot to hang

2009-08-01 Thread dick hoogendijk
On Fri, 31 Jul 2009 15:43:11 -0400
Mark Johnson mark.john...@sun.com wrote:

 One thing that could be related is that I was running
 a scrub when I had powered off the system. The scrub
 started up again after I had imported the pool.
 
 Anyone know if this is a known problem?

I know people running a scrub often have problems after shutting down
during the scrub. I have learned to HALT the scrub before going
offline.
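For anyone who hasn't done it before, stopping an in-progress scrub ahead of a planned shutdown is a one-liner (the pool name is just an example):

  # stop the running scrub on pool 'tank'
  zpool scrub -s tank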

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2009.06 rel
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] [n/zfs-discuss] Strange speeds with x4500, Solaris 10 10/08

2009-08-01 Thread Joerg Moellenkamp

Hi Jorgen,

warning ... weird idea inside ...
Ah, it just occurred to me that perhaps for our specific problem, we 
will buy two X25-Es and replace the root mirror. The OS and ZIL logs 
can live together, and we can put /var in the data pool. That way we would 
not need to rebuild the data pool and do all the work that comes with that.


Shame I can't zpool replace to a smaller disk (500GB HDD to 32GB SSD), 
though; I will have to lucreate and reboot one time.


Oh, you have a solution ... I just had a weird idea and thought about 
suggesting something of a hack: put SSDs in a central server, build a 
pool out of them, perhaps activate compression (after all, even small 
machines are 4-core systems today; they shouldn't idle for their money), 
create some zvols out of that pool, share them via iSCSI and assign them 
as slog devices. For high-speed usage: create a ramdisk, use it as slog 
on the SSD server, and put a UPS under the SSD server. In the end an SSD 
drive is nothing else (a flash memory controller, with DRAM, some storage 
and some caps to keep the DRAM powered until it is flushed).
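Very roughly, using the shareiscsi property of the time (host addresses, pool names and the device name below are all placeholders; this is only a sketch of the idea, not a recommendation):

  # on the central SSD server: carve a zvol from the SSD pool and export it
  zfs create -V 8G ssdpool/slog-for-filer1
  zfs set shareiscsi=on ssdpool/slog-for-filer1

  # on the storage client: discover the target, then add it as a log device
  iscsiadm add discovery-address 192.168.1.10
  iscsiadm modify discovery --sendtargets enable
  zpool add datapool log c4t600144F04A7B2A1Dd0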


Regards
Joerg




Re: [zfs-discuss] article on btrfs, comparison with zfs

2009-08-01 Thread Henk Langeveld

Mario Goebbels wrote:

An introduction to btrfs, from somebody who used to work on ZFS:

http://www.osnews.com/story/21920/A_Short_History_of_btrfs

*very* interesting article.  Not sure why James didn't directly link to
it, but courtesy of Valerie Aurora (formerly Henson):

http://lwn.net/Articles/342892/


I'm trying to understand the argument against the SLAB allocator 
approach. If I understood correctly how BTRFS allocates space, changing 
and deleting files may just punch randomly sized holes into the disk 
layout. How's that better?


It's an interesting article, for sure.  The core of the article is actually
how a solution (b+trees with copy-on-write) found a problem (file systems).

To answer the question, the article claims that reallocation is part of the
normal process of writing data:

 Defragmentation is an ongoing process - repacking the items
 efficiently is part of the normal code path preparing extents to be
 written to disk. Doing checksums, reference counting, and other
 assorted metadata busy-work on a per-extent basis reduces overhead
 and makes new features (such as fast reverse mapping from an extent
 to everything that references it) possible.

It sure suggests what is happening, but I haven't got a clue how the above
makes a difference.  Translating this to the ZFS design, I guess btrfs delays
the block layout to the actual txg I/O phase, whereas ZFS decides this when a
block enters the txg, so its layout has been fixed already.


This allows blocks to be dumped into a slog device as soon as they are 
available.

Cheers,
Henk


Re: [zfs-discuss] Install and boot from USB stick?

2009-08-01 Thread Jürgen Keil
  Are there any messages with Error level: fatal?

 Not that I know of, however, I can check. But I'm
 unable to find out what to change in grub to get
 verbose output rather than just the splashimage.

Edit the grub commands: delete all splashimage,
foreground and background lines, and delete the
console=graphics option from the kernel$ line.

To enable verbose kernel messages, append the kernel
boot option -v at the end of the kernel$ boot
command line.
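For reference, an OpenSolaris menu.lst entry edited that way might end up looking roughly like this (a sketch from memory; the findroot/bootfs values depend on the actual install and are placeholders here):

  title OpenSolaris (verbose boot)
  findroot (pool_rpool,0,a)
  bootfs rpool/ROOT/opensolaris
  # splashimage/foreground/background lines removed, console=graphics
  # dropped from the -B list, and -v appended
  kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS -v
  module$ /platform/i86pc/$ISADIR/boot_archive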


Re: [zfs-discuss] Lundman home NAS

2009-08-01 Thread Jorgen Lundman
Ok, I have redone the initial tests at 4GB instead. Graphs are in the same 
place.


http://lundman.net/wiki/index.php/Lraid5_iozone

I also mounted it over NFSv3 and ran more iozone against it. Alas, I 
started with 100mbit, so it has taken quite a while. It is constantly at 
11MB/s though. ;)
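For completeness, the NFSv3 mount from a Solaris client would be roughly (the hostname and paths are placeholders):

  # force version 3 and mount the exported filesystem
  mount -F nfs -o vers=3 lraid5:/export/data /mnt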




Jorgen Lundman wrote:
I was following Tom's Hardware on how they test NAS units. I have 2GB 
memory, so I will re-run the test at 4GB, once I figure out which option 
that is.


I used Excel for the graphs in this case; gnuplot did not want to work. 
(Nor did Excel, mind you.)



Bob Friesenhahn wrote:

On Sat, 1 Aug 2009, Louis-Frédéric Feuillette wrote:


I find the results suspect.  1.2GB/s read, and 500MB/s write!  These are
impressive numbers indeed.  I then looked at the file sizes that iozone
used...  How much memory do you have?  It seems like the files would be
able to comfortably fit in memory.  I think this test needs to be re-run
with large files (i.e. 2x memory size) for them to give more accurate
data.


The numbers are indeed suspect but the iozone sweep test is quite 
useful in order to see the influence of zfs's caching via the ARC. The 
sweep should definitely be run to at least 2X the memory size.



Unrelated, what did you use to generate those graphs, they look good.


Iozone output may be plotted via gnuplot or Microsoft Excel.  This 
looks like the gnuplot output.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, 
http://www.simplesystems.org/users/bfriesen/

GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/








--
Jorgen Lundman   | lund...@lundman.net
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
Japan| +81 (0)3 -3375-1767  (home)