[EMAIL PROTECTED] wrote:
> On Wed, 23 Jul 2008, Tharindu Rukshan Bamunuarachchi wrote:
>
>   
>> 10,000 x 700 = 7 MB per second ......
>>
>> We have this rate for the whole day ....
>>
>> 10,000 orders per second is the minimum requirement of modern-day stock
>> exchanges ...
>>
>> Cache still helps us for ~1 hour, but after that who will help us ...
>>
>> We are using a 2540 for current testing ...
>> I have tried the same with a 6140, but no significant improvement ... only
>> one or two hours ...
>>     
>
> It might not be exactly what you have in mind, but this "how do I get 
> latency down at all costs" thing reminded me of this old paper:
>
>       http://www.sun.com/blueprints/1000/layout.pdf
>
> I'm not a storage architect; would someone with more experience in the area 
> care to comment on this? With huge disks as we have these days, the "wide 
> thin" idea has gone under a bit - but how do you replace such setups with 
> modern arrays, if the workload is such that the caches eventually get 
> blown and you're down to spindle speed?
>   
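For scale, the quoted figures work out to roughly 600 GB of order data per
24-hour day, which is why any realistic controller cache only buys an hour
or so.  A back-of-envelope sketch in Python; the cache size and destage
rate below are assumed round numbers for illustration, not 2540
specifications:

    # Back-of-envelope arithmetic behind "cache still helps for ~1 hour"
    order_rate = 10_000               # orders per second (quoted above)
    order_size = 700                  # bytes per order
    ingest = order_rate * order_size  # 7,000,000 B/s = 7 MB/s
    per_day = ingest * 86_400 / 1e9   # ~605 GB written per 24-hour day

    cache = 4 * 2**30                 # ASSUMED 4 GiB of controller write cache
    destage = 5e6                     # ASSUMED 5 MB/s sustained destage to disk
    fill_time = cache / (ingest - destage)   # seconds until the cache is full

    print(f"{ingest/1e6:.0f} MB/s in, {per_day:.0f} GB/day, "
          f"cache full after ~{fill_time/60:.0f} min")

With those assumed figures the cache is exhausted after roughly 36 minutes,
which is in the same ballpark as the "~1 hour" reported above.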

Bob Larson wrote that article, and I would love to ask him for an
update.  Unfortunately, he passed away a few years ago :-(
http://blogs.sun.com/relling/entry/bob_larson_my_friend

I think the model still holds true; per-disk performance hasn't changed
significantly since it was written.

This particular problem screams for a queuing model.  You don't
really need a huge cache as long as you can de-stage efficiently.
However, the original poster hasn't shared the read workload
details... if you never read, it is a trivial problem to solve
with a WOM (write-only memory).
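
A minimal sketch of that queueing view, assuming round per-spindle numbers
(150 small-write IOPS per spindle is an assumption, not a measurement):

    # Stability check for the write stream: with m spindles each handling
    # mu ops/s, utilisation rho = lambda / (m * mu).  If rho < 1 the cache
    # only has to absorb bursts; if rho >= 1 no cache size will save you
    # once it fills.
    arrival_rate = 10_000     # 700-byte order writes per second (from the thread)
    per_disk_iops = 150       # ASSUMED small random-write IOPS per spindle

    for spindles in (32, 64, 96, 128):
        rho = arrival_rate / (spindles * per_disk_iops)
        verdict = "stable" if rho < 1 else "cache will eventually overflow"
        print(f"{spindles:4d} spindles: rho = {rho:.2f} -> {verdict}")

If the destage path can coalesce those 700-byte writes into larger
sequential I/O, the effective per-spindle rate rises sharply and far fewer
spindles are needed; that is the "de-stage efficiently" point.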
 -- richard

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
