On Fri, 15 Oct 2010, Gerry Bragg wrote:
Is it possible for a read to bypass the write cache and fetch from
disk before the flush of the cache to disk occurs?
No. ZFS is fully coherent in memory. On a server, most accesses are
to the data in memory rather than from disk.
Bob
--
Bob Friese
Hi,
so, to be absolutely clear: in the same session you ran an update, a
commit, and a select, and the select returned an earlier value than the
committed update?
Things like
ALTER SESSION SET ISOLATION_LEVEL = SERIALIZABLE;
will cause a session to NOT see commits from other sessions, but in
Oracle
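For reference, a minimal same-session check of the scenario described above
(a sketch only; the table name, column and values are hypothetical):

  SQL> UPDATE test_tab SET val = 'new' WHERE id = 1;
  SQL> COMMIT;
  SQL> SELECT val FROM test_tab WHERE id = 1;
  -- in the same session this must return 'new', whatever the isolation level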
A customer is running ZFS version 15 on Solaris 10/08 (SPARC) supporting Oracle
10.2.0.3 databases in a dev and production test environment. We have come
across some cache inconsistencies with one of the Oracle databases where
fetching a record displays a 'historical value' (that has been changed
One problem with the write cache is that I do not know if it is needed for
wear leveling?
As mentioned, disabling the write cache might be OK in terms of performance (I
want to use MLC SSDs as data disks, not as ZIL, to have an SSD-only appliance -
I'm looking for read speed for dedupe, zfs send
-load
because of the TCP/IP calculations?
Or is it safe to ignore the TOE network cards?
Regards,
Armand
- Original Message -
From: Thomas Burgess
To: A. Krijgsman
Cc: zfs-discuss@opensolaris.org
Sent: Monday, January 11, 2010 3:02 AM
Subject: Re: [zfs-discuss] ZFS Cache + Z
On Mon, 11 Jan 2010, Kjetil Torgrim Homme wrote:
(BTW, thank you for testing forceful removal of power. The result is as
expected, but it's good to see that theory and practice match.)
Actually, the result is not "as expected" since the device should not
have lost any data preceding a cache
Maybe it got lost in all this text :) .. hence this re-post.
Does anyone know the impact of disabling the write cache on the write
amplification factor of the Intel SSDs?
How can I permanently disable the write cache on the Intel X25-M SSDs?
Thanks, Robert
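For reference, on Solaris the drive's volatile write cache can usually be
toggled from format's expert mode; a sketch, assuming the X25-M shows up as an
ordinary disk and the driver exposes the cache menu (the setting may not
survive a power cycle on every firmware, hence the hdadm tool mentioned
elsewhere in this thread):

  # format -e
  (select the X25-M from the disk list)
  format> cache
  cache> write_cache
  write_cache> disable
  write_cache> display     <- should now report that the write cache is disabled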
Lutz Schumann writes:
> Actually the performance decrease when disabling the write cache on
> the SSD is approx 3x (aka 66%).
For this reason, you want a controller with battery-backed write cache.
In practice this means a RAID controller, even if you don't use the RAID
functionality. Of course
Hi;
I don't think that anyone owns the list, and like anyone else you are very
welcome to ask any question.
The L2ARC caches the zpool, so if your iSCSI LUN is a zvol or a file on ZFS
it will be cached.
Please use COMSTAR if you need performance.
You are correct that you will only need a couple of g
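A rough sketch of that kind of setup (pool, device and size names are made up;
COMSTAR itself must already be enabled with svcadm enable stmf):

  zpool add tank cache c2t0d0                    # SSD as L2ARC for the whole pool
  zfs create -V 100g -o volblocksize=8k tank/lun0
  stmfadm create-lu /dev/zvol/rdsk/tank/lun0     # prints the LU GUID
  stmfadm add-view <GUID-printed-above>
  itadm create-target                            # iSCSI target for the initiators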
>
> Next to that, I am reading about all kinds of performance benefits from using
> separate devices for the ZIL (write) and the cache (read). I was wondering if
> I could share a single SSD between both the ZIL and the cache device?
>
> Or is this not recommended?
>
>
> I asked something similar recently. The answe
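One common way to do it, sketched with made-up device and pool names: slice
the SSD with format, then give each slice its own role:

  zpool add tank log c1t2d0s0      # small slice as a separate ZIL (slog)
  zpool add tank cache c1t2d0s1    # the rest of the SSD as L2ARC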
Hi all,
Sorry for spamming your mailing list,
but since I could not find a direct answer on the internet or in the archives, I'm giving
this a try!
I am building a ZFS filesystem to export iSCSI LUNs.
Now I was wondering if the L2ARC has the ability to cache non-filesystem iSCSI
LUNs?
Or does it only
On Sun, 10 Jan 2010, Lutz Schumann wrote:
Talking about read performance: assuming a reliable ZIL disk (cache
flush = working), the ZIL can guarantee data integrity; however, if
the backend disks (aka pool disks) do not properly implement cache
flush, a reliable ZIL device does not "workaroun
Actually the performance decrease when disabling the write cache on the SSD is
approx 3x (aka 66%).
Setup:
node1 = Linux client with open-iscsi
server = COMSTAR (cache=write-through) + zvol (recordsize=8k, compression=off)
--- with SSD disk write cache disabled:
node1:/mnt/ssd# iozone -
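(The exact iozone invocation is cut off above; a typical synchronous-write run
against the mounted LUN might look like this, flags are only an example:)

  node1:/mnt/ssd# iozone -e -o -i 0 -i 1 -r 8k -s 1g -f /mnt/ssd/testfile
  # -e/-o include fsync / O_SYNC in the timing; -i 0 / -i 1 = write and read tests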
I managed to disable the write cache (I did not know of a tool on Solaris, however
hdadm from the EON NAS binary_kit does the job):
Same power disruption test with Seagate HDD and write cache disabled ...
---
r...@nex
A very interesting thread
(http://www.mysqlperformanceblog.com/2009/03/02/ssd-xfs-lvm-fsync-write-cache-barrier-and-lost-transactions/)
and some thinking about the design of SSDs led to an experiment I did with
the Intel X25-M SSD. The question was:
Is my data safe, once it has reached the di
Pascal Fortin wrote:
Hi all,
I'm looking for a document that explains the workings of the ZFS cache,
especially on S10 update 6. I need to understand why the utilization
of the cache does not reach the ARC size we set. Given the
server workload, we expect that the data is spread across
Hi all,
I'm looking for a document that explains the workings of the ZFS cache,
especially on S10 update 6. I need to understand why the utilization of
the cache does not reach the ARC size we set. Given the server
workload, we expect that the data is spread across the size allowed
for t
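For reference, the ARC's current and maximum sizes can be read from the
arcstats kstats, and the ceiling is set with zfs_arc_max in /etc/system; a
sketch, the 4 GB value is only an example:

  kstat -p zfs:0:arcstats:size     # current ARC size in bytes
  kstat -p zfs:0:arcstats:c_max    # maximum the ARC may grow to

  # /etc/system entry to cap the ARC at 4 GB (takes effect after a reboot):
  set zfs:zfs_arc_max = 0x100000000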
Hello zfs-discuss,
I've just done a quick test of the performance impact of SCSI cache flushes on a
6540 disk array when using ZFS.
The configuration is: V490, S10U5 (137111-03), 2x 6540 disk
arrays with 7.10.25.10 firmware, the host is dual-ported. ZFS does
mirroring between the 6540s. There is no other load on 6
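For anyone repeating this kind of test: the flushes ZFS issues can be turned
off globally, which is only safe when the array cache is battery/NVRAM backed;
a sketch for Solaris 10:

  # /etc/system -- stop ZFS from sending SYNCHRONIZE CACHE to the LUNs (reboot required)
  set zfs:zfs_nocacheflush = 1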
With the upcoming Thumper server, I understand that there won't be any hardware
RAID. ZFS would be the solution to use on this platform. One apparent use for
this would be an NFS server. But does it really make sense to do this over a
disk cabinet (e.g. a SCSI->SATA) with some sort of write cach
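Sharing a ZFS filesystem over NFS is itself a one-liner; a sketch with made-up
pool and dataset names:

  zfs create tank/export
  zfs set sharenfs=on tank/export    # specific share options can be given instead of 'on'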