On Thu, 2003-03-27 at 21:42, David Boyes wrote:
> > But this does not simulate the same effect.
> Field testing seems to indicate that there is a noticeable benefit to
> this approach.
> Is it equivalent to PAV? Maybe not. Perhaps I don't understand PAV
> well enough.
I have done some work on this,
David Boyes wrote:
> Field testing seems to indicate that there is a noticeable benefit to
> this approach.

Could you describe the scenarios where this setup provides benefit? I
guess there might be some benefit in cases where you expect requests
to be satisfied from the MDC frequently, i.e.
Since VM PAV support is still not in the cards, the technique does help
simulate the same effect, and thus is an improvement over fullpack
minidisks or dedicated volumes.
-- db
VM does not exploit PAV, but a guest can use it. You can create a volume
group with one base device number and some alias device numbers.
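A hedged sketch of what that could look like in the guest's CP user
directory (the user name and all device numbers here are invented for
illustration; the real base/alias pairing is defined in the storage
subsystem, and the exact setup depends on the hardware and VM level):

```
* Sketch only: dedicate a hypothetical PAV base device and two of its
* aliases to one Linux guest, whose own I/O layer then drives them.
USER LINUX01 XXXXXXXX 256M 1G G
DEDICATE 0200 0ABC
DEDICATE 0201 0ABD
DEDICATE 0202 0ABE
```

The point is that CP passes the base and alias subchannels through
untouched, so the guest, not VM, is the one exploiting PAV.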
> David Boyes wrote:
> > My original point was to break up the big minidisks or full
> > volumes into smaller pieces that you can move around to avoid the
> > problem of blocking on a single I/O to a device.
> > Since VM PAV support is still not in the cards, the technique does
> > help simulate the same effect, and
> But this does not simulate the same effect.

Field testing seems to indicate that there is a noticeable benefit to
this approach.
Is it equivalent to PAV? Maybe not. Perhaps I don't understand PAV well
enough. Can you recommend some further background reading?

-- db
I'm creating an 11-physical-volume VG for the datafiles of an Oracle
database. I'm curious as to which would serve me better: full-pack
minidisks, or dedicating the volumes to the guest. One VM expert
recommended dedicating the volumes to the guest, thereby bypassing any
VM overhead. At the same
Sent: Wednesday, March 26, 2003 1:00 PM
To: [EMAIL PROTECTED]
Subject: minidisk vs. dedicate
I'm sorry if I wasn't clear; they are LVMed together.
-----Original Message-----
From: David Boyes [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 26, 2003 1:35 PM
To: [EMAIL PROTECTED]
Subject: Re: minidisk vs. dedicate
> I'm creating an 11-physical-volume VG for the datafiles of an Oracle
Yeah, that's what I assumed. I'm suggesting breaking the full volumes
into several smaller parts (say, three chunks of 1000 cylinders each)
and aggregating the smaller chunks with LVM. You end up with more
effective spindles, which allow more I/Os to be in flight at the same
time for the same filesystem. Works
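Under those assumptions, the guest-side setup might look roughly like
this (device names, sizes, and the VG/LV names are invented; each dasd
device stands for one of the small minidisks from the CP directory):

```shell
# Sketch only: aggregate three hypothetical 1000-cylinder minidisks
# (seen by the guest as dasdb, dasdc, dasdd) into one striped LV.

# Low-level format each minidisk and auto-create one partition on it
for d in dasdb dasdc dasdd; do
    dasdfmt -b 4096 -y -f /dev/$d
    fdasd -a /dev/$d
done

# Turn the partitions into LVM physical volumes and one volume group
pvcreate /dev/dasdb1 /dev/dasdc1 /dev/dasdd1
vgcreate oradata /dev/dasdb1 /dev/dasdc1 /dev/dasdd1

# Stripe the logical volume across all three minidisks, so I/O for one
# filesystem can be in flight on three subchannels at once
lvcreate -i 3 -I 64 -L 1800M -n datafiles oradata
mkfs.ext3 /dev/oradata/datafiles
```

The same commands apply whether the pieces are minidisks on different
real volumes or full-pack ones; the win comes from spreading one
filesystem's queue over several device subchannels.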
-----Original Message-----
From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of
McKown, John
Sent: Wednesday, March 26, 2003 2:56 PM
To: [EMAIL PROTECTED]
Subject: Re: minidisk vs. dedicate
David,

One question/observation. Ignoring PAV on the ESS or equivalent, there
can still be only one physical I/O going to a physical volume at a
time. I always thought, and I may be wrong, that one of the main
advantages to having either a PAV or individual minidisks was that
instead of having one queue for the device, you now have many. If the
first I/O in the queue needs to do a physical I/O, then all the other
I/Os wait. If most of the
> I always thought, and I may be wrong, that one of the main
> advantages to having either a PAV or individual minidisks was
> that instead of having one queue for the device, you now
> have many.

That's kind of the idea here -- you get sort of a poor-man's PAV via CP
getting its hands on the I/O
Without PAV, having Linux issue 3 I/Os to the same physical volume
will NOT help performance. If one of those I/Os must be satisfied by
activity to a real disk, all of the remaining I/Os queued on the
device will wait, even those with data residing in cache that could
be satisfied immediately.
On Wed, Mar 26, 2003 at 04:57:11PM -0800, Barton Robinson wrote:
> Without PAV, having Linux issue 3 I/Os to the same physical
> volume will NOT help performance. If one of those I/Os must
> be satisfied by activity to a real disk, all of the remaining
> I/Os queued on the device will wait, even those