Hello,
I'm now thinking there is some _real_ bug in the way zfs handles filesystems
created with the pool itself (i.e. the tank filesystem when the zpool is tank,
usually mounted as /tank).
My own experience shows that zfs is unable to send/receive recursively
(snapshots, child fs) properly when the
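(For context, the kind of recursive replication I am attempting looks
roughly like this; the snapshot and target pool names are made up:

# zfs snapshot -r tank@backup1
# zfs send -R tank@backup1 | zfs receive -Fd backup

It is this layout, where tank itself carries data alongside its child
filesystems, that seems to misbehave.)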
OK, that was the magic incantation I was looking for:
- changing the noprefetch option opened the floodgates to the L2ARC
- changing the max queue depth relieved the wait time on the drives, although
I may undo this again in the benchmarking since these drives all have NCQ
I went from all four
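(For anyone following along, both knobs can be set in /etc/system; the
tunable names are from the OpenSolaris source and the values are simply
what I tried, so treat this as a sketch:

* let prefetched reads be eligible for the L2ARC
set zfs:l2arc_noprefetch = 0
* lower the per-vdev I/O queue depth from its default of 35
set zfs:zfs_vdev_max_pending = 10

A reboot is required for /etc/system changes to take effect.)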
Greetings zfs-discuss@
I have been trying to narrow this down for quite some time. The problem
resides on a couple of osol/sxce boxes that are used as dom0 hosts. Under
high disk load on domU guests (a backup process, for example) domU
performance is terrible. The worst thing is that iostat shows
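(For reference, I am watching the disks with plain iostat, something like:

# iostat -xzn 5

while the backup runs in the guest.)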
On Feb 14, 2010, at 9:24 AM, Bogdan Ćulibrk wrote:
Greetings zfs-discuss@
I have been trying to narrow this down for quite some time. The problem resides
on a couple of osol/sxce boxes that are used as dom0 hosts. Under high disk
load on domU guests (a backup process, for example) domU
I'm trying to set up an OpenSolaris 2009.6 server as a Fibre Channel storage
device, and I'm seeing painfully slow performance while copying large
(6-50GB) files -- like 3-5 MB/second over 4Gb FC. However, if instead of
creating a volume and exporting it via FC I create a standard filesystem and
Richard, first of all thank you for your time looking into this; I
appreciate it.
What are my options from here? To move onto a zvol with a greater blocksize?
64k? 128k? Or will I run into other trouble going that way when I have
small reads coming from the domU (ext3 with a default blocksize of 4k)?
Bogdan Ćulibrk b...@default.rs writes:
What are my options from here? To move onto a zvol with a greater
blocksize? 64k? 128k? Or will I run into other trouble going that
way when I have small reads coming from the domU (ext3 with a default
blocksize of 4k)?
Yes, definitely. Have you considered
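(For illustration, recreating the backing store as a zvol with a larger
block size would look something like this; the dataset name and size are
made up, and volblocksize can only be set at creation time:

# zfs create -V 20G -o volblocksize=64k tank/domu-disk0

so the guest's data would then have to be copied onto the new volume.)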
Hi Dave
So which hard drives are connected to which controllers?
And what device drivers are those controllers using?
The output from 'format', 'cfgadm' and 'prtconf -D'
may help us to understand.
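For example:

# format </dev/null
# cfgadm -al
# prtconf -D

(With its input redirected, format prints the available disk list and exits.)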
Strange that you say that there are two hard drives
per controller, but three drives are showing
So which hard drives are connected to which controllers?
And what device drivers are those controllers using?
0. c7t0d0 <DEFAULT cyl 7764 alt 2 hd 255 sec 63>
   /p...@0,0/pci8086,3...@3/pci1000,3...@0/s...@0,0
1. c7t1d0 <ATA-Hitachi HDS72202-A20N-1.82TB>
Never mind. I have no interest in performance tests for Solaris 10.
The code is so old that it does not represent current ZFS at all.
Whatever. Regardless of what you say, it does show:
· Which is faster, raidz, or a stripe of mirrors?
· How much does raidz2 hurt
iozone -m -t 8 -T -O -r 128k -o -s 12G
Actually, it seems that this is more than sufficient:
iozone -m -t 8 -T -r 128k -o -s 4G
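(For concreteness, the layouts under comparison would be built along these
lines, with hypothetical disk names:

# zpool create tank raidz c7t0d0 c7t1d0 c7t2d0 c7t3d0
# zpool create tank mirror c7t0d0 c7t1d0 mirror c7t2d0 c7t3d0

and raidz2 substituted for raidz in the double-parity case.)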
Good news, cuz I kicked off the first test earlier today, and it seems like
it will run till Wednesday. ;-) The first run, on a single disk, took 6.5 hrs,
On Sun, 14 Feb 2010, Dave Pooser wrote:
c7, c8 and c9 are LSI controllers using the MPT driver. The motherboard has
6 SATA ports which are presented as two controllers (presumably c10 and c11),
one for ports 0-3 and one for ports 4 and 5; both currently use the pci-ide
driver.
One should
Whatever. Regardless of what you say, it does show:
· Which is faster, raidz, or a stripe of mirrors?
· How much does raidz2 hurt performance compared to raidz?
· Which is faster, raidz, or hardware raid 5?
· Is a mirror twice as fast as a single disk for
On Sun, 14 Feb 2010, Edward Ned Harvey wrote:
Never mind. I have no interest in performance tests for Solaris 10.
The code is so old that it does not represent current ZFS at all.
Whatever. Regardless of what you say, it does show:
Since Richard abandoned Sun (in favor of gmail), he has
c7, c8 and c9 are LSI controllers using the MPT driver. The motherboard has
6 SATA ports which are presented as two controllers (presumably c10 and c11),
one for ports 0-3 and one for ports 4 and 5; both currently use the pci-ide
driver.
On my motherboard, I can make the onboard SATA ports
On Sun, 14 Feb 2010, Edward Ned Harvey wrote:
iozone -m -t 8 -T -O -r 128k -o -s 12G
Actually, it seems that this is more than sufficient:
iozone -m -t 8 -T -r 128k -o -s 4G
Good news, cuz I kicked off the first test earlier today, and it seems like
it will run till Wednesday. ;-)
On Sun, 14 Feb 2010, Thomas Burgess wrote:
Solaris 10 has a really old version of ZFS. I know there are some
pretty big differences between ZFS versions from my own non-scientific
benchmarks. It would make sense that people wouldn't be as
interested in benchmarks of Solaris 10 ZFS seeing as
On Sun, Feb 14, 2010 at 8:49 PM, Thomas Burgess wonsl...@gmail.com wrote:
c7, c8 and c9 are LSI controllers using the MPT driver. The motherboard has
6 SATA ports which are presented as two controllers (presumably c10 and c11),
one for ports 0-3 and one for ports 4 and 5; both currently use
On 2/14/10 4:12 PM, Kjetil Torgrim Homme wrote:
Bogdan Ćulibrk b...@default.rs writes:
What are my options from here? To move onto a zvol with a greater
blocksize? 64k? 128k? Or will I run into other trouble going that
way when I have small reads coming from the domU (ext3 with a default
blocksize of
On 2/14/10 7:02 PM, zfs ml wrote:
On 2/14/10 4:12 PM, Kjetil Torgrim Homme wrote:
Bogdan Ćulibrk b...@default.rs writes:
What are my options from here? To move onto a zvol with a greater
blocksize? 64k? 128k? Or will I run into other trouble going that
way when I have small reads coming from
Abdullah,
On Thu, Feb 11, 2010 at 03:42:38PM -0500, Abdullah Al-Dahlawi wrote:
Hi Sanjeev
linking the application to the ARCSTAT_BUMP(arcstat_hits) is not
straightforward and is time-consuming, especially if I am running many
experiments.
Brendan has commented on the post by providing an
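(For what it's worth, the ARC hit/miss counters can also be read without
instrumenting the application at all, either from kstat or from the sdt
probes already present in the ARC code; these invocations are a sketch
from memory:

# kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses
# dtrace -n 'sdt:::arc-hit { @hits = count(); } sdt:::arc-miss { @misses = count(); }'

Neither requires relinking or rebuilding anything.)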
On my motherboard, I can make the onboard SATA ports show up as IDE or SATA;
you may look into that. It would probably be something like AHCI mode.
Yeah, I changed the motherboard setting from enhanced to AHCI and now
those ports show up as SATA.
--
Dave Pooser, ACSA
Manager of Information
oh, so I WAS right?
awesome
On Sun, Feb 14, 2010 at 10:45 PM, Dave Pooser dave@alfordmedia.com wrote:
On my motherboard, I can make the onboard SATA ports show up as IDE or
SATA; you may look into that. It would probably be something like AHCI mode.
Yeah, I changed the motherboard
I'm off to straighten out my controller distribution, check to see if I have
write caching turned off on the motherboard ports, install the b132 build,
and possibly grab some dinner while I'm about it. I'll report back to the
list with any progress or lack thereof.
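(If it helps anyone else: the per-disk write cache setting can be inspected
from format's expert mode; the disk name here is hypothetical:

# format -e c10t0d0
format> cache
cache> write_cache
write_cache> display

The same menu can enable or disable the cache.)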
OK, the issue seems to be
On Feb 14, 2010, at 6:45 PM, Thomas Burgess wrote:
Whatever. Regardless of what you say, it does show:
· Which is faster, raidz, or a stripe of mirrors?
· How much does raidz2 hurt performance compared to raidz?
· Which is faster, raidz, or hardware raid 5?