Hello Bob,
Wednesday, July 30, 2008, 3:07:05 AM, you wrote:
BF On Wed, 30 Jul 2008, Robert Milkowski wrote:
Both cases are basically the same.
Please note I'm not talking about disabling the ZIL, I'm talking about
disabling cache flushes in ZFS. ZFS will still wait for the array to
confirm
Hello Bob,
Friday, July 25, 2008, 4:58:54 PM, you wrote:
BF On Fri, 25 Jul 2008, Robert Milkowski wrote:
Both on the 2540 and the 6540, if you do not disable it your performance will
be very bad, especially for synchronous I/Os, as the ZIL will force your
array to flush its cache every time. If you are
On Wed, 30 Jul 2008, Robert Milkowski wrote:
Both cases are basically the same.
Please note I'm not talking about disabling the ZIL, I'm talking about
disabling cache flushes in ZFS. ZFS will still wait for the array to
confirm that it did receive the data (NVRAM).
So it seems that in your
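For concreteness, a minimal sketch of what "disabling cache flushes in ZFS" looks like on Solaris of this era (assuming the kernel exposes the zfs_nocacheflush tunable; verify on your build). It goes in /etc/system and takes effect after a reboot:

    * Stop ZFS from issuing SYNCHRONIZE CACHE to the devices.
    * Only safe when the array backs its write cache with battery/NVRAM,
    * as discussed above; otherwise a power loss can lose acknowledged writes.
    set zfs:zfs_nocacheflush = 1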
Dear All,
I will try to post the DTool source code ASAP.
DTool depends on our patented middleware, so I need one or two days to
clarify :-P
Very sorry.
Bob,
I have tried your pdf but did not get good latency numbers even after
array tuning...
cheers
tharindu
Bob Friesenhahn wrote:
On
On Mon, 28 Jul 2008, Tharindu Rukshan Bamunuarachchi wrote:
I have tried your pdf but did not get good latency numbers even after array
tuning...
Right. And since I observed only slightly less optimal performance
from a mirror pair of USB drives it seems that your requirement is not
Dear All,
Thank you very much for the continuous support
Sorry for the late reply ...
I was trying to allocate a 2540 and 2 x 4600 to try out your
recommendations ...
Finally, I could reserve a 2540 disk array for testing purposes,
so I am free to try out each and every point you have
On Sat, 26 Jul 2008, Tharindu Rukshan Bamunuarachchi wrote:
1. Reconfigure the array with 12 independent disks
2. Allocate the disks to a raidz pool
Using raidz will penalize your transaction performance since all disks
will need to perform I/O for each write. It is definitely better to
use
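As a hedged sketch of the two layouts under discussion, assuming the 12 disks are exported as individual LUNs named c2t0d0 .. c2t11d0 (the pool and device names here are made up):

    # raidz: one wide group; each small write involves I/O on many spindles
    zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
                            c2t6d0 c2t7d0 c2t8d0 c2t9d0 c2t10d0 c2t11d0

    # mirrors: six 2-way mirrors striped together; each write only touches
    # one mirror pair, so small synchronous I/O latency is usually lower
    zpool create tank mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0 \
                      mirror c2t4d0 c2t5d0 mirror c2t6d0 c2t7d0 \
                      mirror c2t8d0 c2t9d0 mirror c2t10d0 c2t11d0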
On Sat, 26 Jul 2008, Tharindu Rukshan Bamunuarachchi wrote:
It is impossible to simulate my scenario with iozone. iozone performs very
well for ZFS. OTOH,
iozone does not measure latency.
Please find attached tool (Solaris x86), which we have written to measure
latency.
Very interesting
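Until the DTool source is posted, one way to get comparable per-write latency numbers on Solaris is a DTrace one-liner along these lines (the execname "DTool" is an assumption; substitute the real process name):

    # dtrace -n '
    syscall::write:entry /execname == "DTool"/ { self->ts = timestamp; }
    syscall::write:return /self->ts/ {
            @lat["write latency (us)"] = quantize((timestamp - self->ts) / 1000);
            self->ts = 0;
    }'

The quantize() aggregation prints a latency histogram on Ctrl-C, which makes periodic peaks easy to spot.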
On Sat, 26 Jul 2008, Bob Friesenhahn wrote:
I suspect that the maximum peak latencies have something to do with
zfs itself (or something in the test program) rather than the pool
configuration.
As confirmation that the reported timings have virtually nothing to do
with the pool
Bob Friesenhahn wrote:
On Sat, 26 Jul 2008, Bob Friesenhahn wrote:
I suspect that the maximum peak latencies have something to do with
zfs itself (or something in the test program) rather than the pool
configuration.
As confirmation that the reported timings have virtually
On Sat, 26 Jul 2008, Richard Elling wrote:
Is it doing buffered or sync writes? I'll try it later today or
tomorrow...
I have not seen the source code, but truss shows that this program is
doing more than expected, such as using send/recv to send a message.
In fact, send(), pollsys(), recv(),
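For reference, the sort of truss invocation that shows this (the pid is a placeholder for the running test program):

    # count syscalls and the time spent in each
    truss -c -p <pid>

    # print the time delta between successive calls, limited to the
    # syscalls of interest
    truss -D -t write,send,recv,pollsys -p <pid>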
[zfs-code] Peak every 4-5 second
Bob Friesenhahn wrote:
On Sat, 26 Jul 2008, Bob Friesenhahn wrote:
I suspect that the maximum peak latencies have something to do with
zfs itself (or something in the test program) rather than the pool
configuration.
As confirmation that the reported
Hello Tharindu,
Wednesday, July 23, 2008, 10:03:15 AM, you wrote:
10,000 x 700 = 7MB per second ..
We have this rate for the whole day.
10,000 orders per second is the minimum requirement of modern-day stock exchanges ...
Cache still helps us for ~1 hour, but after that who will
Hello Tharindu,
Thursday, July 24, 2008, 6:02:31 AM, you wrote:
We do not use raidz*. Virtually no RAID or striping through the OS.
We have 4-disk RAID1 volumes. The RAID1 was created from CAM on the 2540.
The 2540 does not have RAID 1+0 or 0+1.
Of course it does 1+0. Just add more drives to
On Fri, 25 Jul 2008, Robert Milkowski wrote:
Both on the 2540 and the 6540, if you do not disable it your performance will
be very bad, especially for synchronous I/Os, as the ZIL will force your
array to flush its cache every time. If you are not using ZFS on any
storage other than the 2540 on your servers
And do you really have 4-sided raid 1 mirrors, not 4-wide raid-0 stripes???
--dave
Robert Milkowski wrote:
Hello Tharindu,
Thursday, July 24, 2008, 6:02:31 AM, you wrote:
We do not use raidz*. Virtually no RAID or striping through the OS.
We have 4-disk RAID1 volumes.
On Fri, Jul 25, 2008 at 9:17 AM, David Collier-Brown [EMAIL PROTECTED] wrote:
And do you really have 4-sided raid 1 mirrors, not 4-wide raid-0 stripes???
Or perhaps 4 RAID1 mirrors concatenated?
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
We do not use raidz*.
Virtually no RAID or striping through the OS.
We have 4-disk RAID1 volumes. The RAID1 was created from CAM on the 2540.
The 2540 does not have RAID 1+0 or 0+1.
cheers
tharindu
Brandon High wrote:
On Tue, Jul 22, 2008 at 10:35 PM, Tharindu Rukshan Bamunuarachchi
[EMAIL PROTECTED]
On Wed, Jul 23, 2008 at 10:02 PM, Tharindu Rukshan Bamunuarachchi
[EMAIL PROTECTED] wrote:
We do not use raidz*. Virtually no RAID or striping through the OS.
So it's ZFS on a single LUN exported from the 2540? Or have you
created a zpool from multiple raid1 LUNs on the 2540?
Have you tried
providing SSD arrays ??
- Original Message -
From: [EMAIL PROTECTED] [EMAIL PROTECTED]
To: Tharindu Rukshan Bamunuarachchi [EMAIL PROTECTED]
Cc: zfs-discuss@opensolaris.org zfs-discuss@opensolaris.org
Sent: Wed Jul 23 11:22:51 2008
Subject: Re: [zfs-discuss] [zfs-code] Peak every 4-5 second
Hmmn, that *sounds* as if you are saying you've a very-high-redundancy
RAID1 mirror, 4 disks deep, on an 'enterprise-class tier 2 storage' array
that doesn't support RAID 1+0 or 0+1.
That sounds weird: the 2540 supports RAID levels 0, 1, (1+0), 3 and 5,
and deep mirrors are normally only
On Thu, 24 Jul 2008, Tharindu Rukshan Bamunuarachchi wrote:
We do not use raidz*. Virtually no RAID or striping through the OS.
We have 4-disk RAID1 volumes. The RAID1 was created from CAM on the 2540.
What ZFS block size are you using?
Are you using synchronous writes for each 700-byte message? 10k
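If the answers turn out to be the default 128K recordsize plus a synchronous write per 700-byte message, a common first adjustment is to shrink the recordsize so each small write does not drag a full 128K block around. A sketch only; the dataset name is made up and the right value depends on the workload:

    zfs set recordsize=8k tank/trading
    zfs get recordsize tank/trading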
On Thu, 24 Jul 2008, Brandon High wrote:
Have you tried exporting the individual drives and using zfs to handle
the mirroring? It might have better performance in your situation.
It should indeed have better performance. The single LUN exported
from the 2540 will be treated like a single
On Thu, 24 Jul 2008, Tharindu Rukshan Bamunuarachchi wrote:
Do you have any recommended parameters I should try?
Using an external log is really not needed when using the StorageTek
2540. I doubt that it is useful at all.
Bob
==
Bob Friesenhahn
[EMAIL
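(For anyone following along: the "external log" above means a separate ZIL log device, or slog. Attaching one is a single command, shown purely to illustrate what is being ruled out; the pool and device names are hypothetical, and per Bob the 2540's NVRAM write cache already absorbs the synchronous writes.)

    zpool add tank log c5t0d0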
Hello Tharindu,
Wednesday, July 23, 2008, 6:35:33 AM, you wrote:
TRB Dear Mark/All,
TRB Our trading system is writing to local and/or array volume at 10k
TRB messages per second.
TRB Each message is about 700 bytes in size.
TRB Before ZFS, we used UFS.
TRB Even with UFS, there was every 5
10,000 x 700 = 7MB per second ..
We have this rate for the whole day.
10,000 orders per second is the minimum requirement of modern-day stock
exchanges ...
Cache still helps us for ~1 hour, but after that who will help us ...
We are using a 2540 for the current testing ...
I have tried the same with
txt_time/D
mdb: failed to dereference symbol: unknown symbol name
txg_time/D
mdb: failed to dereference symbol: unknown symbol name
Am I doing something wrong?
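The mdb syntax itself looks fine; /D simply needs a symbol that exists in the running kernel, and txg_time is not present in every build. For comparison, reading and (carefully) setting a tunable that does exist in this era, zfs_nocacheflush, looks like this; treat the write form as a sketch and double-check the symbol on your system first:

    # read the 4-byte variable as decimal
    echo "zfs_nocacheflush/D" | mdb -k

    # set it to 1 at runtime (0t1 = decimal 1); -kw opens the kernel writable
    echo "zfs_nocacheflush/W0t1" | mdb -kw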
Robert Milkowski wrote:
Hello Tharindu,
Wednesday, July 23, 2008, 6:35:33 AM, you wrote:
TRB Dear Mark/All,
TRB Our
On Wed, 23 Jul 2008, Tharindu Rukshan Bamunuarachchi wrote:
10,000 x 700 = 7MB per second ..
We have this rate for the whole day.
10,000 orders per second is the minimum requirement of modern-day stock exchanges
...
Cache still helps us for ~1 hour, but after that who will help us ...
On Wed, 23 Jul 2008, Tharindu Rukshan Bamunuarachchi wrote:
10,000 x 700 = 7MB per second ..
We have this rate for the whole day.
10,000 orders per second is the minimum requirement of modern-day stock exchanges
...
Cache still helps us for ~1 hour, but after that who will help us ...
;-)
Thanks -- mikee
- Original Message -
From: [EMAIL PROTECTED] [EMAIL PROTECTED]
To: Tharindu Rukshan Bamunuarachchi [EMAIL PROTECTED]
Cc: zfs-discuss@opensolaris.org zfs-discuss@opensolaris.org
Sent: Wed Jul 23 11:22:51 2008
Subject: Re: [zfs-discuss] [zfs-code] Peak every 4-5 second
[EMAIL PROTECTED] wrote:
On Wed, 23 Jul 2008, Tharindu Rukshan Bamunuarachchi wrote:
10,000 x 700 = 7MB per second ..
We have this rate for the whole day.
10,000 orders per second is the minimum requirement of modern-day stock
exchanges ...
Cache still helps us for ~1 hour, but
On Tue, Jul 22, 2008 at 10:35 PM, Tharindu Rukshan Bamunuarachchi
[EMAIL PROTECTED] wrote:
Dear Mark/All,
Our trading system is writing to a local and/or array volume at 10k
messages per second.
Each message is about 700 bytes in size.
Before ZFS, we used UFS.
Even with UFS, there was every 5
Dear Mark/All,
Our trading system is writing to a local and/or array volume at 10k
messages per second.
Each message is about 700 bytes in size.
Before ZFS, we used UFS.
Even with UFS, there was a peak every 5 seconds due to fsflush invocation.
However, each peak is about 5 ms.
Our application can
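A rough back-of-the-envelope for why a periodic peak is expected at this rate, assuming the writes are buffered and flushed in one burst every few seconds:

    10,000 msg/s x 700 bytes      ~ 7 MB/s of new data
    7 MB/s x 5 s flush interval   ~ 35 MB pushed to disk in one burst

which matches a steady stream punctuated by a latency spike each time the buffered data is written out.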