Saso Kiselkov writes:
On 06/12/2012 05:37 PM, Roch Bourbonnais wrote:
So the xcalls are a necessary part of memory reclaiming, when one needs to
tear down the TLB entry mapping the physical memory (which can from here
on be repurposed).
So the xcalls are just part of this. Should not cause trouble, but they do.
Scrubs are run at very low priority and yield very quickly in the presence of
other work.
So I really would not expect to see scrub create any impact on any other type of
storage activity.
Resilvering will more aggressively push forward on what it has to do, but
resilvering does not need to
The process should be scalable.
Scrub all of the data on one disk using one disk's worth of IOPS.
Scrub all of the data on N disks using N disks' worth of IOPS.
That will take ~ the same total time.
-r
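To make that scaling concrete (illustrative numbers, not from this thread): a disk holding 500 GB of allocated data that can sustain roughly 100 scrub reads per second of ~50 KB each scrubs at about 5 MB/s, i.e. a bit under 30 hours. Ten such disks hold ten times the data but also deliver ten times the scrub IOPS, so the wall-clock time stays roughly the same.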
On Jun 12, 2012, at 08:28, Jim Klimov wrote:
2012-06-12 16:20, Roch Bourbonnais wrote:
So the xcalls are a necessary part of memory reclaiming, when one needs to tear
down the TLB entry mapping the physical memory (which can from here on be
repurposed).
So the xcalls are just part of this. Should not cause trouble, but they do. They
consume a cpu for some time.
That in turn can
Edward Ned Harvey writes:
Based on observed behavior measuring performance of dedup, I would say, some
chunk of data and its associated metadata seem to have approximately the same
warmness in the cache. So when the data gets evicted, the associated
metadata tends to be evicted too. So
Josh, I don't know the internals of the device but I have heard reports of SSDs
that would ignore flush write cache commands _and_ wouldn't have supercap
protection (nor battery).
Such devices are subject to data loss.
Did you also catch this thread
On Feb 7, 2011, at 06:25, Richard Elling wrote:
On Feb 5, 2011, at 8:10 AM, Yi Zhang wrote:
Hi all,
I'm trying to achieve the same effect as UFS directio on ZFS and here
is what I did:
Solaris UFS directio has three functions:
1. improved async code path
2. multiple
On Feb 7, 2011, at 17:08, Yi Zhang wrote:
On Mon, Feb 7, 2011 at 10:26 AM, Roch roch.bourbonn...@oracle.com wrote:
On Feb 7, 2011, at 06:25, Richard Elling wrote:
On Feb 5, 2011, at 8:10 AM, Yi Zhang wrote:
Hi all,
I'm trying to achieve the same effect as UFS directio on ZFS
Brandon High writes:
On Tue, Nov 23, 2010 at 9:55 AM, Krunal Desai mov...@gmail.com wrote:
What is the upgrade path like from this? For example, currently I
The ashift is set in the pool when it's created and will persist
through the life of that pool. If you set it at pool creation,
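For completeness, a hedged sketch of setting it explicitly (not from this thread, and not available on stock OpenSolaris builds of that era): on implementations that expose ashift as a pool property at creation time (e.g. current OpenZFS), with hypothetical device names:
zpool create -o ashift=12 tank mirror c0t0d0 c0t1d0   # 2^12 = 4096-byte sectors
zpool get ashift tank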
Ross Walker writes:
On Aug 4, 2010, at 12:04 PM, Roch roch.bourbonn...@sun.com wrote:
Ross Walker writes:
On Aug 4, 2010, at 9:20 AM, Roch roch.bourbonn...@sun.com wrote:
Ross Asks:
So on that note, ZFS should disable the disks' write cache,
not enable them
On Aug 5, 2010, at 19:49, Ross Walker wrote:
On Aug 5, 2010, at 11:10 AM, Roch roch.bourbonn...@sun.com wrote:
Ross Walker writes:
On Aug 4, 2010, at 12:04 PM, Roch roch.bourbonn...@sun.com wrote:
Ross Walker writes:
On Aug 4, 2010, at 9:20 AM, Roch roch.bourbonn...@sun.com wrote
Ross Walker writes:
On Aug 3, 2010, at 12:13 PM, Roch Bourbonnais roch.bourbonn...@sun.com
wrote:
On May 27, 2010, at 07:03, Brent Jones wrote:
On Wed, May 26, 2010 at 5:08 AM, Matt Connolly
matt.connolly...@gmail.com wrote:
I've set up an iSCSI volume
, but got no answer: while an
iSCSI target is presented with WCE, does it respect the flush
command?
Yes. I would like to say obviously but it's been anything
but.
-r
Ross Walker writes:
On Aug 4, 2010, at 3:52 AM, Roch roch.bourbonn...@sun.com wrote:
Ross Walker writes:
On Aug 3
Ross Walker writes:
On Aug 4, 2010, at 9:20 AM, Roch roch.bourbonn...@sun.com wrote:
Ross Asks:
So on that note, ZFS should disable the disks' write cache,
not enable them despite ZFS's COW properties because it
should be resilient.
No, because ZFS builds
On May 27, 2010, at 07:03, Brent Jones wrote:
On Wed, May 26, 2010 at 5:08 AM, Matt Connolly
matt.connolly...@gmail.com wrote:
I've set up an iSCSI volume on OpenSolaris (snv_134) with these commands:
sh-4.0# zfs create rpool/iscsi
sh-4.0# zfs set shareiscsi=on rpool/iscsi
sh-4.0# zfs
v writes:
Hi,
A basic question regarding how the ZIL works:
For asynchronous writes, will the ZIL be used?
For synchronous writes, if the I/O is small, will the whole I/O be placed in
the ZIL, or just a pointer saved into the ZIL? What about large I/Os?
Let me try.
ZIL: code and data structure
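One quick way to see whether a workload exercises the ZIL at all is Richard Elling's zilstat script (the same tool quoted further down in this archive); this assumes the script is present and DTrace is available:
./zilstat 1 10    # ten one-second samples of bytes and ops going through zil_commit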
Can you post zpool status?
Are your drives all the same size?
-r
On May 30, 2010, at 23:37, Sandon Van Ness wrote:
I just wanted to make sure this is normal and is expected. I fully
expected that as the file-system filled up I would see more disk space
being used than with other
Robert Milkowski writes:
On 01/04/2010 20:58, Jeroen Roodhart wrote:
I'm happy to see that it is now the default and I hope this will cause the
Linux NFS client implementation to be faster for conforming NFS servers.
Interesting thing is that apparently defaults on Solaris
When we use one vmod, both machines are finished in about 6min45,
zilstat maxes out at about 4200 IOPS.
Using four vmods it takes about 6min55, zilstat maxes out at 2200
IOPS.
Can you try 4 concurrent tars to four different ZFS filesystems (same
pool)?
-r
to the IMAP server (called skiplist), and some are small flat files
that are just rewritten. All they have in common is activity and
frequent locking. They can be relocated as a whole.
The second one is from:
http://blogs.sun.com/roch/entry/the_dynamics_of_zfs
He
.
Can one do this with raid-dp?
http://blogs.sun.com/roch/entry/need_inodes
That said, I truly am for an evolution for random read
workloads. Raid-Z on 4K sectors is quite appealing. It means
that small objects become nearly mirrored with good random read
performance while large objects
On Jan 5, 2010, at 17:49, Robert Milkowski wrote:
On 05/01/2010 16:00, Roch wrote:
That said, I truly am for an evolution for random read
workloads. Raid-Z on 4K sectors is quite appealing. It means
that small objects become nearly mirrored with good random read
performance while large objects
Tim Cook writes:
On Sun, Dec 27, 2009 at 6:43 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Sun, 27 Dec 2009, Tim Cook wrote:
That is ONLY true when there's significant free space available/a fresh
pool. Once those files have been deleted and the blocks put
On Dec 28, 2009, at 00:59, Tim Cook wrote:
On Sun, Dec 27, 2009 at 1:38 PM, Roch Bourbonnais roch.bourbonn...@sun.com
wrote:
On Dec 26, 2009, at 04:47, Tim Cook wrote:
On Fri, Dec 25, 2009 at 11:57 AM, Saso Kiselkov
skisel...@gmail.com wrote:
On Dec 26, 2009, at 04:47, Tim Cook wrote:
On Fri, Dec 25, 2009 at 11:57 AM, Saso Kiselkov
skisel...@gmail.com wrote:
I've started porting a video streaming application to opensolaris on
ZFS, and am hitting some pretty weird performance
You might try setting zfs_scrub_limit to 1 or 2 and attach a customer
service record to:
6494473 ZFS needs a way to slow down resilvering
-r
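A hedged sketch of how such a tunable was typically applied in that era (the tunable name is taken from the message above; the value is illustrative):
set zfs:zfs_scrub_limit = 2              (persistently, in /etc/system, then reboot)
echo zfs_scrub_limit/W0t2 | mdb -kw      (live, on a running kernel)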
On Oct 7, 2009, at 06:14, John wrote:
Hi,
We are running b118, with an LSI 3801 controller which is connected
to 44 drives (yes it's a
On Sep 28, 2009, at 17:58, Glenn Fawcett wrote:
Been there, done that, got the tee shirt. A larger SGA will
*always* be more efficient at servicing Oracle requests for blocks.
You avoid going through all the IO code of Oracle and it simply
reduces to a hash.
Sounds like good
Bob Friesenhahn writes:
On Wed, 23 Sep 2009, Ray Clark wrote:
My understanding is that if I zfs set checksum=different to
change the algorithm that this will change the checksum algorithm
for all FUTURE data blocks written, but does not in any way change
the checksum for
I wonder if a taskq pool does not suffer from a similar
effect observed for the nfsd pool:
6467988 Minimize the working set of nfsd threads
Created threads round robin out of the taskq loop, doing little
work but wake up at least once per 5 minutes and so are never
reaped.
-r
Nils
On Sep 23, 2009, at 19:07, Neil Perrin wrote:
On 09/23/09 10:59, Scott Meilicke wrote:
How can I verify if the ZIL has been disabled or not? I am trying
to see how much benefit I might get by using an SSD as a ZIL. I
disabled the ZIL via the ZFS Evil Tuning Guide:
echo zil_disable/W0t1
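For reference, the usual Evil Tuning Guide sequence that line is quoting (old releases only; zil_disable was later superseded by per-dataset controls):
echo zil_disable/W0t1 | mdb -kw    # disable the ZIL
echo zil_disable/D | mdb -k        # verify: prints the current value
echo zil_disable/W0t0 | mdb -kw    # re-enable
The value is only consulted at mount time, so remount the filesystem (or export/import the pool) for it to take effect, as noted elsewhere in this archive.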
Scott Lawson writes:
Also you may wish to look at the output of 'iostat -xnce 1' as well.
You can post those to the list if you have a specific problem.
You want to be looking for error counts increasing and specifically 'asvc_t'
for the service times on the disks. A higher number
Do you have the zfs primarycache property on this release?
If so, you could set it to 'metadata' or 'none'.
primarycache=all | none | metadata
Controls what is cached in the primary cache (ARC). If
this property is set to all, then both user data and
metadata
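A minimal example of using it, with a hypothetical dataset name:
zfs set primarycache=metadata tank/videos    # keep only metadata in the ARC
zfs get primarycache tank/videos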
100% random writes produce around 200 IOPS with a 4-6 second pause
around every 10 seconds.
This indicates that the bandwidth you're able to transfer
through the protocol is about 50% greater than the bandwidth
the pool can offer to ZFS. Since this is not sustainable, you
stuart anderson writes:
Question:
Is there a way to change the volume blocksize, say via 'zfs snapshot send/receive'?
As I see things, this isn't possible as the target volume (including property
values) gets overwritten by 'zfs receive'.
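Since volblocksize is fixed at creation time, the usual workaround is to create a fresh zvol with the desired block size and copy the data into it; a sketch with hypothetical names and sizes:
zfs create -V 100g -o volblocksize=8k pool/newvol
dd if=/dev/zvol/rdsk/pool/oldvol of=/dev/zvol/rdsk/pool/newvol bs=1024k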
Unlike NFS which can issue sync writes and async writes, iSCSI needs
to be serviced with synchronous semantics (unless the write caching is
enabled, caveat emptor).
If the workload issuing the iSCSI requests is single threaded, then
performance is governed by I/O size over rotational
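A worked example of that bound, with illustrative numbers: a single-threaded stream of 8 KB synchronous writes to a 7200 RPM disk pays on the order of one rotation (about 8.3 ms) per request, so throughput tops out near 8 KB / 8.3 ms, roughly 1 MB/s, regardless of the disk's sequential streaming rate.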
roland writes:
SSDs with capacitor-backed write caches
seem to be fastest.
How do you distinguish them from SSDs without one?
I never saw this explicitly mentioned in the specs.
They probably don't have one then (or they should fire their
entire marketing dept).
Capacitors allow the
On Aug 5, 2009, at 06:06, Chookiex wrote:
Hi All,
You know, ZFS affords a very big buffer for write I/O.
So, when we write a file, the first stage is to put it in the buffer.
But what if the file is VERY short-lived? Does it bring I/O to disk?
Or does it just put the metadata and data in memory, and then
Bob Friesenhahn writes:
On Wed, 29 Jul 2009, Jorgen Lundman wrote:
For example, I know rsync and tar do not use fdsync (but dovecot does)
on
its close(), but does NFS make it fdsync anyway?
NFS is required to do synchronous writes. This is what allows NFS
clients to
C. Bergström writes:
James C. McPherson wrote:
An introduction to btrfs, from somebody who used to work on ZFS:
http://www.osnews.com/story/21920/A_Short_History_of_btrfs
*very* interesting article. Not sure why James didn't directly link to
it, but courtesy of Valerie
Henk Langeveld writes:
Mario Goebbels wrote:
An introduction to btrfs, from somebody who used to work on ZFS:
http://www.osnews.com/story/21920/A_Short_History_of_btrfs
*very* interesting article. Not sure why James didn't directly link to
it, but courtesy of Valerie Aurora
On Aug 4, 2009, at 13:42, Joseph L. Casale wrote:
does anybody have some numbers on speed on sata vs 15k sas?
The next chance I get, I will do a comparison.
Is it really a big difference?
I noticed a huge improvement when I moved a virtualized pool
off a series of 7200 RPM SATA discs to
On Jul 26, 2009, at 01:34, Toby Thain wrote:
On 25-Jul-09, at 3:32 PM, Frank Middleton wrote:
On 07/25/09 02:50 PM, David Magda wrote:
Yes, it can be affected. If the snapshot's data structure / record is
underneath the corrupted data in the tree then it won't be able to be
reached.
Try
zpool import 2169223940234886392 [storage1]
-r
On Aug 4, 2009, at 15:11, David wrote:
I seem to have run into an issue with a pool I have, and haven't
found a resolution yet. The box is currently running FreeBSD 7-
STABLE with ZFS v13, (Open)Solaris doesn't support my raid controller.
Tim Cook writes:
On Tue, Aug 4, 2009 at 7:33 AM, Roch Bourbonnais
roch.bourbonn...@sun.comwrote:
On Aug 4, 2009, at 13:42, Joseph L. Casale wrote:
does anybody have some numbers on speed on sata vs 15k sas?
The next chance I get, I will do a comparison
The things I'd pay most attention to would be all single threaded 4K,
32K, and 128K writes to the raw device.
Make sure the SSD has a capacitor and enable the write cache on the
device.
-r
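A sketch of such a test (the device name is hypothetical, and writing to the raw device is destructive, so use a scratch disk):
dd if=/dev/zero of=/dev/rdsk/c2t1d0s0 bs=4k count=100000
dd if=/dev/zero of=/dev/rdsk/c2t1d0s0 bs=32k count=20000
dd if=/dev/zero of=/dev/rdsk/c2t1d0s0 bs=128k count=5000
On Solaris the write cache can be toggled per disk from format's expert mode, on disks that support it: format -e, select the disk, then cache -> write_cache -> enable.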
On Jul 5, 2009, at 12:06, James Lever wrote:
On 04/07/2009, at 3:08 AM, Bob Friesenhahn wrote:
tester writes:
Hello,
Trying to understand the ZFS I/O scheduler; because of its async nature
it is not very apparent. Can someone give a short explanation for each
of these stack traces and for their frequency
this is the command
dd if=/dev/zero of=/test/test1/trash
Stuart Anderson writes:
On Jun 21, 2009, at 10:21 PM, Nicholas Lee wrote:
On Mon, Jun 22, 2009 at 4:24 PM, Stuart Anderson
ander...@ligo.caltech.edu
wrote:
However, it is a bit disconcerting to have to run with reduced data
protection for an entire week. While I
zio_assess went away with SPA 3.0:
6754011 SPA 3.0: lock breakup, i/o pipeline refactoring, device failure
handling
You now have:
zio_vdev_io_assess(zio_t *zio)
Yes it's one of the last stages of the I/O pipeline (see zio_impl.h).
-r
tester writes:
Hi,
What does
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide
If you do, then be prepared to unmount or reboot all clients of
the server in case of a crash in order to clear their
corrupted caches.
This is in no way a ZIL problem nor a ZFS problem.
http://blogs.sun.com/roch/entry/nfs_and_zfs_a_fine
And most
We're definitely working on problems contributing to such 'picket
fencing'.
But beware of equating symptoms with root-caused issues. We already know
that picket fencing is multi-cause and
we're tracking the ones we know about: there is something related to
taskq cpu scheduling and
something
On Jun 18, 2009, at 20:23, Richard Elling wrote:
Cor Beumer - Storage Solution Architect wrote:
Hi Jose,
Well it depends on the total size of your Zpool and how often these
files are changed.
...and the average size of the files. For small files, it is likely
that the default
On Jun 16, 2009, at 19:55, Jose Martins wrote:
Hello experts,
IHAC that wants to put more than 250 million files on a single
mountpoint (in a directory tree with no more than 100 files in each
directory).
He wants to share such a filesystem over NFS and mount it through
many Linux Debian clients
, it realizes the disk is failed, and
from then on enters those failmode conditions (wait, continue, panic, ...?).
Could this be the case?
http://blogs.sun.com/roch/date/20080514
--
Brent Jones
br...@servuhome.net
Hi Noel.
zpool iostat -v
For a working pool and for a problem pool would help to see
the type of pool and its capacity.
I assume the problem is not the source of the data.
To read a large number of small files typically requires lots
and lots of threads (say 100 per source disk).
Is
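A crude way to get that kind of concurrency from the shell (paths are hypothetical; purpose-built multi-threaded copy tools do this more cleanly):
for d in /pool/src/dir*; do
  tar cf - "$d" > /dev/null &
done
wait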
On Feb 8, 2009, at 13:12, Vincent Fox wrote:
Thanks I think I get it now.
Do you think having log on a 15K RPM drive with the main pool
composed of 10K RPM drives will show worthwhile improvements? Or am
I chasing a few percentage points?
In cases where logzilla helps, then this
On Feb 8, 2009, at 13:44, David Magda wrote:
On Feb 8, 2009, at 16:12, Vincent Fox wrote:
Do you think having log on a 15K RPM drive with the main pool
composed of 10K RPM drives will show worthwhile improvements? Or am
I chasing a few percentage points?
Another important question is
Sounds like the device is not ignoring the cache flush requests sent
down by ZFS/zil commit.
If the SSD is able to drain its internal buffer to flash on a power
outage, then it needs to ignore the cache flush.
You can do this on a per-device basis. It's kludgy tuning but hope the
.
Thanks for any pointers you may have...
I think you found out from the replies that this NFS issue is not
related to ZFS nor a ZIL malfunction in any way.
http://blogs.sun.com/roch/entry/nfs_and_zfs_a_fine
NFS (particularly a lightly threaded load) is much sped up
with any form
Nicholas Lee writes:
Another option to look at is:
set zfs:zfs_nocacheflush=1
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide
Best option is to get a fast ZIL log device.
Depends on your pool as well. NFS+ZFS means zfs will wait for write
completes
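Adding such a log device is a one-liner (pool and device names hypothetical):
zpool add tank log c4t2d0
zpool status tank    # the device appears under a separate 'logs' section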
Eric D. Mudama writes:
On Mon, Jan 19 at 23:14, Greg Mason wrote:
So, what we're looking for is a way to improve performance, without
disabling the ZIL, as it's my understanding that disabling the ZIL
isn't exactly a safe thing to do.
We're looking for the best way to improve
Eric D. Mudama writes:
On Tue, Jan 20 at 21:35, Eric D. Mudama wrote:
On Tue, Jan 20 at 9:04, Richard Elling wrote:
Yes. And I think there are many more use cases which are not
yet characterized. What we do know is that using an SSD for
the separate ZIL log works very well for
Chookiex writes:
Hi all,
I have 2 questions about ZFS.
1. I have created a snapshot in my pool1/data1, and zfs send/recv'd it to
pool2/data2, but I found the USED in zfs list is different:
NAME USED AVAIL REFER MOUNTPOINT
pool2/data2 160G 1.44T 159G
Tim writes:
On Tue, Jan 13, 2009 at 6:26 AM, Brian Wilson bfwil...@doit.wisc.eduwrote:
Does creating ZFS pools on multiple partitions on the same physical drive
still run into the performance and other issues that putting pools in
slices does?
Is zfs going to own
milosz writes:
iperf test coming out fine, actually...
iperf -s -w 64k
iperf -c -w 64k -t 900 -i 5
[ ID] Interval Transfer Bandwidth
[ 5] 0.0-899.9 sec   81.1 GBytes   774 Mbits/sec
Totally steady. I could probably implement some tweaks to improve it, but
On Jan 13, 2009, at 21:49, Orvar Korvar wrote:
Oh, thanks for your very informative answer. I've added a link to your
information in this thread:
But... Sorry, I wrote it wrong. I meant I will not recommend
against HW raid + ZFS anymore instead of ... recommend against HW
raid.
The
On Jan 12, 2009, at 17:39, Carson Gaspar wrote:
Joerg Schilling wrote:
Fabian Wörner fabian.woer...@googlemail.com wrote:
my post was not meant to start a GPL/CDDL discussion.
It was just an idea to promote ZFS and OpenSolaris.
If it was against anything, it was against exFAT, nothing else!!!
If you
Try setting the cachemode property on the target filesystem.
Also verify that the source can pump data through the net at the
desired rate if the target is /dev/null.
-r
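One hedged way to do that check (host name and file path are hypothetical): push the data over the wire but discard it on the far side, so only the source and the network are measured:
dd if=/tank/export/bigfile bs=1024k | ssh desthost 'cat > /dev/null'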
On Jan 8, 2009, at 18:46, gnomad wrote:
I have just built an opensolaris box (2008.11) as a small fileserver
(6x 1TB
On Jan 4, 2009, at 21:09, milosz wrote:
thanks for your responses, guys...
the Nagle's tweak is the first thing I did, actually.
Not sure what the network limiting factors could be here... there's
no switch, jumbo frames are on... maybe it's the e1000g driver?
It's been wonky since
Scott Laird writes:
On Fri, Jan 2, 2009 at 8:54 PM, Richard Elling richard.ell...@sun.com
wrote:
Scott Laird wrote:
On Fri, Jan 2, 2009 at 4:52 PM, Akhilesh Mritunjai
mritun+opensola...@gmail.com wrote:
As for source, here you go :)
Alastair Neil writes:
I am attempting to create approx 10600 zfs file systems across two
pools. The devices underlying the pools are mirrored iSCSI volumes
shared over a dedicated gigabit Ethernet with jumbo frames enabled
(MTU 9000) from a Linux Openfiler 2.3 system. I have added a
Marcelo Leal writes:
Hello all,
Some days ago I was looking at the code and saw some variable that
seems to make a correlation between the size of the data and whether the
data is written to the slog or directly to the pool. But I did not
find it anymore, and I think it is way more complex
Any experts here to say if that's just because bonnie via NFSv3 is a
very special test - if it is I can start something else, suggestions? -
or if some disks are really too busy and slowing down the pool.
Here is my attempt:
http://blogs.sun.com/roch/entry/decoding_bonnie
-r
Ahmed Kamal writes:
Hi,
I have been doing some basic performance tests, and I am getting a big hit
when I run UFS over a zvol, instead of directly using zfs. Any hints or
explanations are very welcome. Here's the scenario. The machine has 30G RAM,
and two IDE disks attached. The disks
On Dec 20, 2008, at 22:34, Dmitry Razguliaev wrote:
Hi, I faced a similar problem, like Ross, but still have not
found a solution. I have a raidz out of 9 SATA disks connected to
internal and 2 external SATA controllers. Bonnie++ gives me the
following results:
nexenta,8G,
Hi Qihua, there are many reasons why the recordsize does not govern
the I/O size directly. Metadata I/O is one, ZFS I/O scheduler
aggregation is another.
The application behavior might be a third.
Make sure to create the DB files after modifying the ZFS property.
-r
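For example (dataset name hypothetical, 8K matching a typical database page size):
zfs set recordsize=8k tank/db
The property only takes effect for blocks of files created (or fully rewritten) after the change, hence the advice to create the DB files afterwards.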
On Dec 26, 2008, at 11:49,
Tim writes:
On Sat, Nov 29, 2008 at 11:06 AM, Ray Clark [EMAIL PROTECTED]wrote:
Please help me understand what you mean. There is a big difference between
being unacceptably slow and not working correctly, or between being
unacceptably slow and having an implementation problem that
Bill Sommerfeld writes:
On Wed, 2008-10-22 at 10:30 +0100, Darren J Moffat wrote:
I'm assuming this is local filesystem rather than ZFS backed NFS (which
is what I have).
Correct, on a laptop.
What has setting the 32KB recordsize done for the rest of your home
dir, or did
On Nov 15, 2008, at 08:49, Nicholas Lee wrote:
On Sat, Nov 15, 2008 at 7:54 AM, Richard Elling [EMAIL PROTECTED]
wrote:
In short, separate logs with rotating rust may reduce sync write
latency by
perhaps 2-10x on an otherwise busy system. Using write optimized SSDs
will reduce sync
On Oct 22, 2008, at 21:02, Bill Sommerfeld wrote:
On Wed, 2008-10-22 at 09:46 -0700, Mika Borner wrote:
If I turn zfs compression on, does the recordsize influence the
compressratio in any way?
zfs conceptually chops the data into recordsize chunks, then
compresses
each chunk
Thomas, for long latency fat links, it should be quite
beneficial to set the socket buffer on the receive side
(instead of having users tune tcp_recv_hiwat).
Throughput of a TCP connection is gated by
receive socket buffer / round-trip time.
Could that be Ross' problem?
-r
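A worked example of that gating, with illustrative numbers: with a 64 KB receive buffer and a 100 ms round-trip time, the connection cannot exceed 64 KB / 0.1 s = 640 KB/s, about 5 Mbit/s, no matter how fat the link is; growing the buffer to 1 MB raises the ceiling to roughly 80 Mbit/s.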
Ross Smith
On Oct 23, 2008, at 05:40, Constantin Gonzalez wrote:
Hi,
Bob Friesenhahn wrote:
On Wed, 22 Oct 2008, Neil Perrin wrote:
On 10/22/08 10:26, Constantin Gonzalez wrote:
3. Disable ZIL[1]. This is of course evil, but one customer
pointed out to me
that if a tar xvf were writing locally
On Oct 2, 2008, at 09:21, Christiaan Willemsen wrote:
Hi there.
I just got a new Adaptec RAID 51645 controller in because the old
(other type) was malfunctioning. It is paired with 16 Seagate 15k5
disks, of which two are used with hardware RAID 1 for OpenSolaris
snv_98, and the rest
Leave the default recordsize. With a 128K recordsize, files smaller than
128K are stored as a single record
tightly fitted to the smallest possible # of disk sectors. Reads and
writes are then managed with fewer ops.
Not tuning the recordsize is very generally more space efficient and
more
Files are stored as either a single record (adjusted to the size of the
file) or multiple fixed-size records.
-r
On Aug 25, 2008, at 09:21, Robert wrote:
Thanks for your response, from which I have learned more details.
However, there is one thing I am still not clear about--maybe at first
initiator_host:~ # dd if=/dev/zero bs=1k of=/dev/dsk/c5t0d0
count=100
So this is going at 3000 x 1K writes per second, or
330 µs per write. The iSCSI target is probably doing an
over-the-wire operation for each request. So it looks fine
at first glance.
-r
Cody Campbell writes:
Kyle McDonald writes:
Ross wrote:
Just re-read that and it's badly phrased. What I meant to say is that a
raid-z / raid-5 array based on 500GB drives seems to have around a 1 in 10
chance of losing some data during a full rebuild.
Actually, I think it's been
Peter Tribble writes:
A question regarding zfs_nocacheflush:
The Evil Tuning Guide says to only enable this if every device is
protected by NVRAM.
However, is it safe to enable zfs_nocacheflush when I also have
local drives (the internal system drives) using ZFS, in particular if
Robert Milkowski writes:
Hello Roch,
Saturday, June 28, 2008, 11:25:17 AM, you wrote:
RB I suspect, a single dd is cpu bound.
I don't think so.
We're nearly so as you show. More below.
See below one with a stripe of 48x disks again. Single dd with 1024k
block size
On Jun 28, 2008, at 05:14, Robert Milkowski wrote:
Hello Mark,
Tuesday, April 15, 2008, 8:32:32 PM, you wrote:
MM The new write throttle code put back into build 87 attempts to
MM smooth out the process. We now measure the amount of time it takes
MM to sync each transaction group, and
Bob Friesenhahn writes:
On Tue, 15 Apr 2008, Mark Maybee wrote:
going to take 12sec to get this data onto the disk. This impedance
mis-match is going to manifest as pauses: the application fills
the pipe, then waits for the pipe to empty, then starts writing again.
Note that this
On Mar 30, 2008, at 15:57, Kyle McDonald wrote:
Fred Oliver wrote:
Marion Hakanson wrote:
[EMAIL PROTECTED] said:
I am having trouble destroying a zfs file system (device busy) and
fuser
isn't telling me who has the file open: . . .
This situation appears to occur every night during a
On Feb 28, 2008, at 20:14, Jonathan Loran wrote:
Quick question:
If I create a ZFS mirrored pool, will the read performance get a
boost?
In other words, will the data/parity be read round robin between the
disks, or do both mirrored sets of data and parity get read off of
both
On Feb 28, 2008, at 21:00, Jonathan Loran wrote:
Roch Bourbonnais wrote:
On Feb 28, 2008, at 20:14, Jonathan Loran wrote:
Quick question:
If I create a ZFS mirrored pool, will the read performance get a
boost?
In other words, will the data/parity be read round robin between
I would imagine that Linux behaves more like a ZFS that does not flush
caches.
(google Evil zfs_nocacheflush).
If you can NFS tar extract files on Linux faster than one file per
rotation latency,
that is suspicious.
-r
On Feb 26, 2008, at 13:16, msl wrote:
For Linux NFS service, it's a
Bob Friesenhahn writes:
On Fri, 15 Feb 2008, Roch Bourbonnais wrote:
What was the interlace on the LUN?
The question was about LUN interlace, not interface.
128K to 1M works better.
The segment size is set to 128K. The max the 2540 allows is 512K.
Unfortunately
On Feb 14, 2008, at 02:22, Marion Hakanson wrote:
[EMAIL PROTECTED] said:
It's not that old. It's a Supermicro system with a 3ware 9650SE-8LP.
Open-E iSCSI-R3 DOM module. The system is plenty fast. I can pretty
handily pull 120MB/sec from it, and write at over 100MB/sec. It
falls
On Feb 15, 2008, at 03:34, Bob Friesenhahn wrote:
On Thu, 14 Feb 2008, Tim wrote:
If you're going for best single file write performance, why are you
doing
mirrors of the LUNs? Perhaps I'm misunderstanding why you went
from one
giant raid-0 to what is essentially a raid-10.
That
On Feb 10, 2008, at 12:51, Robert Milkowski wrote:
Hello Nathan,
Thursday, February 7, 2008, 6:54:39 AM, you wrote:
NK For kicks, I disabled the ZIL: zil_disable/W0t1, and that made
not a
NK pinch of difference. :)
Have you exported and then imported the pool to get zil_disable into
On Feb 15, 2008, at 11:38, Philip Beevers wrote:
Hi everyone,
This is my first post to zfs-discuss, so be gentle with me :-)
I've been doing some testing with ZFS - in particular, in
checkpointing
the large, proprietary in-memory database which is a key part of the
application I work
On Feb 15, 2008, at 18:24, Bob Friesenhahn wrote:
On Fri, 15 Feb 2008, Roch Bourbonnais wrote:
As mentioned before, the write rate peaked at 200MB/second using
RAID-0 across 12 disks exported as one big LUN.
What was the interlace on the LUN?
The question was about LUN interlace