Do you have any recommended parameters I should try?

Ellis, Mike wrote:
Would adding a dedicated ZIL/SLOG (what is the difference between those 2 exactly? Is there one?) help meet your requirement?

The idea would be to use some sort of relatively large SSD drive to absorb the initial write hit. After hours, when things quiet down (or perhaps during "slow periods" in the day), data is transparently destaged into the main disk pool, giving you a transparent/rudimentary form of HSM.

Have a look at Adam Leventhal's blog and ACM article for some interesting perspectives on this stuff... (Specifically the potential "return of the 3600 rpm drive" ;-)
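For what it's worth, attaching a separate log device to an existing pool is a single zpool operation. A minimal sketch, assuming a pool named "tank" and an SSD that shows up as c5t0d0 (both names are just placeholders):

    # add the SSD as a dedicated log (SLOG) device for the pool
    zpool add tank log c5t0d0

    # verify that the log vdev now appears under the pool
    zpool status tank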

Thanks -- mikee
  


Actually, we do not need this data at the end of the day.

We will write a summary into an Oracle DB.

SSDs are a good option, but the cost is not feasible for some clients.

Does Sun provide SSD arrays?

----- Original Message -----
From: [EMAIL PROTECTED] <[EMAIL PROTECTED]>
To: Tharindu Rukshan Bamunuarachchi <[EMAIL PROTECTED]>
Cc: zfs-discuss@opensolaris.org <zfs-discuss@opensolaris.org>
Sent: Wed Jul 23 11:22:51 2008
Subject: Re: [zfs-discuss] [zfs-code] Peak every 4-5 second

On Wed, 23 Jul 2008, Tharindu Rukshan Bamunuarachchi wrote:

  
10,000 x 700 bytes = 7 MB per second ...

We have this rate for the whole day ...

10,000 orders per second is the minimum requirement of modern-day stock exchanges ...

The cache still helps us for ~1 hour, but after that, who will help us ...

We are using a 2540 for the current testing ...
I have tried the same with a 6140, but there was no significant improvement ... only one or two hours ...
    

Does your application request synchronous file writes or use fsync()?
While fsync() normally slows performance, I think that it will also
serve to even out the write response, since ZFS will not be buffering
lots of unwritten data.  However, there may be buffered writes from
other applications which get written periodically and which may delay
the writes from your critical application.  In this case, reducing the
ARC size may help so that the ZFS sync takes less time.
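If you want to try a smaller ARC, the usual knob on Solaris is zfs_arc_max in /etc/system. A minimal sketch, with a purely illustrative 1 GB cap (it takes effect after a reboot):

    * /etc/system -- cap the ZFS ARC at roughly 1 GB (example value only)
    set zfs:zfs_arc_max = 0x40000000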

You could also run a script which executes 'sync' every second or two 
in order to convince ZFS to cache less unwritten data. This will cause 
a bit of a performance hit for the whole system though.
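Something along these lines would do it (just a sketch of the idea):

    #!/bin/sh
    # crude pacing loop: force a sync every second so ZFS never
    # accumulates a large amount of dirty data between flushes
    while true
    do
            sync
            sleep 1
    done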
  
This did not work, and I got a much higher peak once in a while.

Apart from the array-mounted disks, our applications also write to local hard disks (e.g. logs).

AFAIK, "sync" is applicable to all file systems.

Your 7 MB per second is a very tiny write load, so it is worthwhile
investigating whether there are other factors which are causing your
storage system to not perform correctly.  The 2540 is capable of
supporting writes at hundreds of MB per second.
  

Yes. The 2540 can go up to 40 MB/s or more with more striped hard disks.

But we are struggling with latency, not bandwidth. The I/O bandwidth is superb, but the latency is poor.
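One way to see where the latency is coming from is to watch per-device service times while a peak occurs, e.g. with the standard Solaris tools, sampled once per second:

    # extended per-device statistics; asvc_t is the average service time in ms
    iostat -xnz 1

    # the same picture from the ZFS side, broken down per vdev
    zpool iostat -v 1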

As an example of "another factor", let's say that you used the 2540 to
create 6 small LUNs and then put them into a ZFS raidz.  However, in
this case the 2540 allocated all of the LUNs from the same disk (which
it is happy to do by default), so now that disk is being severely
thrashed since it is one disk rather than six.
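Once the six LUNs really do map to six separate disks, building the raidz itself is a one-liner. A sketch with made-up device names:

    # create a pool with a single raidz vdev spanning the six LUNs
    zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0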
  

I did not use raidz.

I have manually allocated 4 independent disks per volume.

I will try to get a few independent disks through a few LUNs.

Then I would be able to create a raidz and try.
Bob
======================================
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
