My mistake: the system is not a Thumper but rather a 6140 disk array,
attached through 4 HBA ports on a T2000.
I tried several configurations, from a plain striped pool (zfs) to
raidz and mirror (zfs), all using 8 disks.
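
The layouts I tried look roughly like this (pool and device names are
placeholders, not my exact setup):

    # plain stripe across 8 LUNs from the 6140
    zpool create tank c2t0d0 c2t1d0 c2t2d0 c2t3d0 c3t0d0 c3t1d0 c3t2d0 c3t3d0

    # single raidz vdev over the same 8 disks
    zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c3t0d0 c3t1d0 c3t2d0 c3t3d0

    # four 2-way mirrors
    zpool create tank mirror c2t0d0 c3t0d0 mirror c2t1d0 c3t1d0 \
        mirror c2t2d0 c3t2d0 mirror c2t3d0 c3t3d0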

What I observe with [zpool] iostat is a non-continuous stream of data:
at some stage the I/O is interrupted, the MB/s drops to very low
values, then climbs back up again.
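
The pattern is easy to see with a short sampling interval, e.g. (pool
name is a placeholder):

    # per-second bandwidth for the pool; the write MB/s column
    # periodically drops to near zero, then recovers
    zpool iostat tank 1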


On 2/27/07, Richard Elling <[EMAIL PROTECTED]> wrote:
Selim Daoud wrote:
> indeed, a customer is doing 2TB of daily backups on a zfs filesystem
> the throughput doesn't go above 400MB/s, whereas at raw speed it
> reaches 800MB/s; the gap is quite wide

OK, I'll bite.
What is the workload and what is the hardware (zpool) config?
A 400MB/s bandwidth is consistent with a single-threaded write workload.
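
A quick way to check is to compare one sequential writer against several
in parallel; something like this (paths and sizes are only an example):

    # one sequential writer
    dd if=/dev/zero of=/tank/t0 bs=1024k count=8192

    # four writers in parallel; if the aggregate rate in zpool iostat
    # rises well above the single-stream number, one thread was the limit
    for i in 1 2 3 4; do
        dd if=/dev/zero of=/tank/t$i bs=1024k count=8192 &
    done
    wait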

The disks used in thumper (Hitachi E7K500) have a media bandwidth of
31-64.8 MBytes/s.  To get 800 MBytes/s at the low end of that range, you
would need a zpool with a minimum number of effective data disks of:
        N = ceil(800 / 31) = 26
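
The same ceiling division in shell, if you want to plug in other rates
(the numbers are just the ones above):

    # minimum effective data disks = ceil(target / per-disk media rate)
    target=800; disk=31
    echo $(( (target + disk - 1) / disk ))   # prints 26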

You would have no chance of doing this in a disk-to-disk backup internal
to a thumper, so you'd have to source data from the network.  800 MBytes/s
is possible on the network using the new Neptune 10GbE cards.

You've only got 48 disks to work with, so mirroring may not be feasible
for such a sustained high rate: mirroring those 26 effective data disks
would take 52 disks, more than the 48 in the box.

> also, sequential I/O is very common in real life.. unfortunately zfs
> is still not performing well

ZFS only does sequential writes.  Why do you believe that the bottleneck
is in the memory system?  Are you seeing a high scan rate during the
workload?
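
You can watch the scanner with vmstat while the backup runs (the
5-second interval is arbitrary):

    # on Solaris, the 'sr' column is the page scan rate in pages/s;
    # sustained nonzero values point at memory pressure
    vmstat 5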
  -- richard
