I'm puzzled by two things. Naively, I'd think a write cache should not help
a throughput test, since the cache should fill up, after which you should
still be throttled by the physical drain rate. You clearly show that it
helps; does anyone know why/how a cache helps throughput?
And the second
On Jun 15, 2006, at 06:23, Roch Bourbonnais - Performance Engineering
wrote:
> Naively, I'd think a write cache should not help a throughput
> test, since the cache should fill up, after which you should still be
> throttled by the physical drain rate. You clearly show that
> it helps; does anyone know
I was just on the phone with Andy Bowers. He cleared up that
our SATA device drivers need some work: we basically do not
have the necessary I/O concurrency at this stage, so the
write cache is actually a good substitute for tag queuing.
That explains why we get more throughput _on SATA_.
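A back-of-envelope sketch of that effect (my own illustration, not from the
thread; the numbers are made up): with no tag queuing there is only one write
in flight, so every I/O pays the full per-command latency, while a write cache
acknowledges immediately and keeps the media busy back to back.

#include <stdio.h>

int main(void)
{
	/* Hypothetical numbers for illustration only. */
	double io_kb = 8.0;		/* size of one write */
	double media_ms = 0.1;		/* time to put 8 KB on the platter */
	double command_ms = 0.4;	/* per-command latency the host sees */

	/* One outstanding I/O, cache off: each write waits for the disk. */
	double uncached = io_kb / (media_ms + command_ms);	/* KB/ms */

	/* Cache on: writes are acked at once; media streams back to back. */
	double cached = io_kb / media_ms;			/* KB/ms */

	printf("uncached: %.1f MB/s\n", uncached * 1000.0 / 1024.0);
	printf("cached:   %.1f MB/s\n", cached * 1000.0 / 1024.0);
	return 0;
}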
The write cache decouples the actual write to disk from the data transfer from
the host. For a streaming operation, this means that the disk can typically
stream data onto tracks with almost no latency (because the cache can aggregate
multiple I/O operations into full tracks which can be
http://www.opensolaris.org/jive/thread.jspa?messageID=36229#36229
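A minimal sketch of that aggregation idea (my own illustration; IO_SIZE and
TRACK_SIZE are made-up numbers): the cache absorbs each small host write
immediately, and the media sees one large write per full track instead of
many small ones.

#include <stdio.h>
#include <string.h>

#define IO_SIZE		(8 * 1024)	/* one hypothetical host write */
#define TRACK_SIZE	(256 * 1024)	/* one hypothetical track */

static char track[TRACK_SIZE];
static size_t fill;

/* Stand-in for the media write: one operation per full track. */
static void
media_write_track(void)
{
	printf("flush %zu KB to media in one pass\n", fill / 1024);
	fill = 0;
}

/* The cache absorbs the small write and acks it immediately. */
static void
cached_write(const char *buf, size_t len)
{
	memcpy(track + fill, buf, len);
	fill += len;
	if (fill == TRACK_SIZE)
		media_write_track();
}

int main(void)
{
	static char io[IO_SIZE];	/* zero-filled 8 KB buffer */

	for (int i = 0; i < 64; i++)	/* 64 small writes -> 2 track flushes */
		cached_write(io, sizeof (io));
	return 0;
}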
The problem is back, on a different system: a laptop running on-20060605 bits.
Compared to snv_29, the error message has improved, though:
# zfs snapshot hdd/[EMAIL PROTECTED]
cannot snapshot 'hdd/[EMAIL PROTECTED]': dataset is
On Thu, Jun 15, 2006 at 10:30:02AM +0800, Freeman Liu wrote:
Hi, guys,
I have added devid support for EFI (not putback yet) and tested it with a
ZFS mirror; now the mirror can recover even if a USB hard disk is unplugged
and replugged into a different USB port.
But there is still something
Hi,
I'm using Solaris Express build 40, and I'm trying to share a ZFS filesystem
on a 3310 array over NFS, but it doesn't seem to be working, and there's some
strange output in dmesg. Does anyone have any ideas?
SVC:
bash-3.00# svcs | grep nfs
online 14:06:34 svc:/network/nfs/nlockmgr:default
Roch wrote:
Check here:
http://cvs.opensolaris.org/source/xref/on/usr/src/uts/common/fs/zfs/vdev_disk.c#157
distilled version:
static int
vdev_disk_open(vdev_t *vd, uint64_t *psize, uint64_t *ashift)
{
	/* ... */

	/*
	 * If we own the whole disk, try to enable disk write caching.
	 * We ignore
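For anyone who wants to poke at this from user land, here is a minimal sketch
of the same idea using the DKIOCSETWCE ioctl from <sys/dkio.h> (Solaris-specific;
the device path is hypothetical, and whether a given driver honors the ioctl is
exactly the open question):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/dkio.h>

int main(void)
{
	/* Hypothetical raw whole-disk device; substitute your own. */
	int fd = open("/dev/rdsk/c0t0d0s0", O_RDWR);
	int wce = 1;	/* 1 = enable write cache, 0 = disable */

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (ioctl(fd, DKIOCSETWCE, &wce) < 0)
		perror("DKIOCSETWCE");	/* driver may not support it */
	(void) close(fd);
	return 0;
}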
On Thu, 15 Jun 2006, Tom Gendron wrote:
> I may have missed this somewhere, but I don't see a way to make mirrored
> stripes. I'm not sure I want or need to in real life, but I am curious.
I don't know if it is possible, but it's certainly not desirable. RAID 1+0
(stripes made from mirrors) has
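The reply is cut off above, but the standard argument is resilience to a second
disk failure: in a stripe of mirrors only the dead disk's partner is fatal,
while in a mirror of stripes any second loss on the surviving stripe is. A
back-of-envelope sketch of that arithmetic (my own illustration, not from the
thread):

#include <stdio.h>

int main(void)
{
	/* n mirror pairs, i.e. 2n disks; one disk has already failed. */
	for (int n = 2; n <= 8; n *= 2) {
		int disks = 2 * n;
		/* RAID 1+0: only the dead disk's partner is fatal. */
		double raid10 = (double)(disks - 2) / (disks - 1);
		/* RAID 0+1: any loss in the surviving stripe is fatal. */
		double raid01 = (double)(n - 1) / (disks - 1);
		printf("%2d disks: RAID 1+0 survives %2.0f%%, RAID 0+1 %2.0f%%\n",
		    disks, raid10 * 100.0, raid01 * 100.0);
	}
	return 0;
}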
I've got a pretty dumb question regarding SATA and write cache. I
don't see options in 'format -e' on SATA drives for checking/setting
write cache.
I've seen the options for SCSI drives, but not SATA.
I'd like to help on the SATA write cache enable/disable problem, if I
can.
What am I
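Since format -e doesn't expose it, one way to at least check the current state
is the DKIOCGETWCE ioctl from <sys/dkio.h> (a minimal sketch; whether the
sd/sata driver answers it for a given SATA disk is part of the problem being
discussed):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/dkio.h>

int main(int argc, char **argv)
{
	int fd, wce = -1;

	if (argc != 2) {
		fprintf(stderr, "usage: %s /dev/rdsk/cXtYdZs2\n", argv[0]);
		return 1;
	}
	if ((fd = open(argv[1], O_RDONLY)) < 0) {
		perror("open");
		return 1;
	}
	if (ioctl(fd, DKIOCGETWCE, &wce) < 0)
		perror("DKIOCGETWCE");	/* driver may not answer for SATA */
	else
		printf("write cache %s\n", wce ? "enabled" : "disabled");
	(void) close(fd);
	return 0;
}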