David Boyes wrote:

>Except in the case of really large databases where the 31-bit ceiling is
>significant, or when the system is running in a virtual environment that
>does not expose all its capabilities to the database engine. With
>64-bit clean code, this may be different, but I haven't gotten to beat
>on the 64-bit Oracle for Z code yet.

OK, I can see the benefit of increasing the total amount of memory
available for cache.  However, note that even without a 64-bit
Oracle, you could achieve an equivalent effect by running a 31-bit
Oracle under a 64-bit Linux kernel and giving that kernel lots of
memory (to be used as page cache).
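
To make that concrete, here's a minimal sketch (the file path is just a
placeholder) of why this works: a 31-bit process reading via ordinary
buffered read() gets its data served from the kernel page cache on the
second pass, and that cache can live in memory far beyond the 2GB the
process itself can address.

  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/time.h>
  #include <unistd.h>

  static double now(void)
  {
      struct timeval tv;
      gettimeofday(&tv, NULL);
      return tv.tv_sec + tv.tv_usec / 1e6;
  }

  /* One sequential pass over the file with plain buffered read();
   * every page read goes through (and stays in) the kernel page cache. */
  static double read_pass(const char *path)
  {
      char buf[64 * 1024];
      double t0 = now();
      int fd = open(path, O_RDONLY);
      if (fd < 0) { perror("open"); exit(1); }
      while (read(fd, buf, sizeof buf) > 0)
          ;
      close(fd);
      return now() - t0;
  }

  int main(void)
  {
      /* "/data/bigfile" is just a placeholder path */
      printf("cold pass: %.2fs\n", read_pass("/data/bigfile"));
      /* second pass is served from the (64-bit) kernel's page cache */
      printf("warm pass: %.2fs\n", read_pass("/data/bigfile"));
      return 0;
  }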

>The database level caching approach quickly runs into the process
>size/disk buffer utilization problems that grow the virtual machine WSS
>(even with raw I/O, you've just traded the system queuing the buffers
>for the application queuing the buffers), and tends to generate the
>problem of getting stuck in the bottom of E3 with nowhere to go;
>symptom: database goes non-responsive.

Well, this is kind of an unfair comparison.  To make it fair, you would
of course have to take the memory used for MDC cache in the one scenario
and give it in full to the guest, for use as page cache, in the other
scenario.

In any case, the original poster was running in LPAR, where this issue
doesn't even come up ...

>> In fact, Oracle was one of the folks who lobbied the most for getting
>> 'raw' (uncached by the Linux kernel) device access, and this did in
>> fact turn out to improve Oracle performance (on Linux/Intel).
>
>I would expect it to do so on Intel, where there's no effective system
>level disk cache. Since MDC allows Linux I/O to effectively return
>asynchronously (even without async I/O in the Linux driver, and it is
>active even during Linux raw disk I/O), you win some nice gains, esp.
>if the disk controllers also have the NVRAM DASDFW turned on.

How is the Linux page/buffer cache any less asynchronous (or any less an
'effective system level disk cache') than VM's MDC?
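
Just to illustrate the point, here's a minimal sketch (the file name
"testfile" is arbitrary): a plain buffered write() completes as soon as
the data sits in the page cache, and the physical I/O happens
asynchronously in the background.  Only an explicit fsync() -- or raw
/ O_DIRECT access, which bypasses the cache -- makes the caller wait
for the disk.

  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/time.h>
  #include <unistd.h>

  static double now(void)
  {
      struct timeval tv;
      gettimeofday(&tv, NULL);
      return tv.tv_sec + tv.tv_usec / 1e6;
  }

  int main(void)
  {
      char block[4096];
      double t0, t1, t2;
      int fd, i;

      memset(block, 'x', sizeof block);
      fd = open("testfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
      if (fd < 0) { perror("open"); return 1; }

      t0 = now();
      for (i = 0; i < 10000; i++)      /* ~40MB of buffered writes */
          if (write(fd, block, sizeof block) < 0) {
              perror("write");
              return 1;
          }
      t1 = now();   /* write()s already returned; data is in the page cache */

      fsync(fd);    /* only now do we wait for the physical disk I/O */
      t2 = now();

      printf("write()s took %.2fs, fsync() took %.2fs\n", t1 - t0, t2 - t1);
      close(fd);
      return 0;
  }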

>> Do you have performance data comparing the scenarios?
>
>Since it involves specific customers, we'll discuss that privately
>off-list, but yes, in at least one case so far.

Thanks for the reference.


In any case, the point I was trying to make is that while VM is of
course great for running many guests (and provides a lot of advantages
w.r.t. configuration, system administration, etc.), in the usual case
I would not expect a *single* Linux workload to perform *better* when
running under VM than when running under LPAR ...

If you can find examples that contradict this, I'd be very interested
to hear about them, as that would imply to me that there's either a
problem in the setup or a deficiency in the Linux kernel that we'd
need to fix.

Bye,
Ulrich

--
  Dr. Ulrich Weigand
  [EMAIL PROTECTED]
