2007/6/6, Döhr, Markus ICC-H <[EMAIL PROTECTED]>:
[...]
> > Well, not exactly: it's significantly faster on SQL Server *and* Oracle.
> That's a bold statement ;) Did you (or someone else) try to find out the
> reason for this?
Well, we tried, but it's difficult with the given tools. For example,
at times we experienced high startup times but were not able to detect
which SQL was responsible. IMHO that is actually one of MaxDB's weak
spots: with SQL Server I start the profiler and get timings and
execution plans; with Oracle I do a session or DB trace and examine it
with tkprof, look at the trace files directly, or even use AWR etc.
With MaxDB it is really hard to connect performance figures to SQL
statements.
> A standard MaxDB installation comes with a configuration that is suitable
> for "usual R/3 OLTP" workloads; this may not be the right one for your
> environment. If the differences are that significant, I'm sure it would be
> of interest why it behaves like this.
Certainly. But there is the crucial point about time below. I also had
a conversation with Joerg at the end of 2005; he seemed interested in
learning about our issues, but when I got back to him I did not get a
reply, so I stopped asking. Unfortunately (for you) I will soon be
leaving the company I currently work for, and thus I don't have the
resources to dig further into this. Also, the product we are talking
about is being phased out soon.
Basically the schema is pretty much a standard snowflake with only a
few additional indexes. Dimensions are filled via SP calls, and fact
tables are loaded through loadercli. Queries typically combine data
from larger time frames (say, a month) and sometimes involve inline
views that compute totals (to be able to report percentages). If you
are interested in more detail, please contact me off-list; maybe I can
connect you to someone else who might be able to provide you with a
demo license and further information so you can investigate in your
lab.
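To make that query shape concrete: here is a minimal sketch of the
inline-view totals pattern, using Python's built-in sqlite3 as a
stand-in for the real database. All table and column names
(fact_sales, sale_day, region, amount) are invented for illustration
and are not our actual schema:

```python
import sqlite3

# Tiny stand-in for a fact table; names and data are invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE fact_sales (sale_day TEXT, region TEXT, amount REAL);
    INSERT INTO fact_sales VALUES
        ('2007-05-03', 'EU', 100.0),
        ('2007-05-10', 'EU', 200.0),
        ('2007-05-17', 'US', 300.0),
        ('2007-05-24', 'US', 400.0);
""")

# The inline view computes the monthly grand total once; the outer
# query joins against it so each region's share can be reported as a
# percentage of the month.
rows = conn.execute("""
    SELECT f.region,
           SUM(f.amount)                    AS region_total,
           100.0 * SUM(f.amount) / t.total  AS pct_of_month
    FROM fact_sales f,
         (SELECT SUM(amount) AS total
          FROM fact_sales
          WHERE sale_day BETWEEN '2007-05-01' AND '2007-05-31') t
    WHERE f.sale_day BETWEEN '2007-05-01' AND '2007-05-31'
    GROUP BY f.region, t.total
    ORDER BY f.region
""").fetchall()

for region, region_total, pct in rows:
    print(region, region_total, pct)
```

The same shape applies on MaxDB, Oracle, or SQL Server; only the date
handling and the loading tooling differ.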
[...]
> > True, but there are scenarios where you do not want to lump everything
> > into one single SAN, because you want to reserve part of the I/O bandwidth
> > for particular tasks (parts of the schema). Or you want to be able to
> > take individual tablespaces offline and manipulate them independently.
> We actually have two SAN systems, but we put the production OLTP, the
> production BW, and the production CRM on one box - and still don't reach
> the critical I/O bandwidth where one system slows down the others. Even on
> usual working days, when all three systems run under their usual load, we
> don't reach that point (yet).
That's good for you. We, OTOH, have a large Oracle installation at a
customer site that uses a NetApp filer and is slow - so far the Oracle
experts have not been able to optimize it. So it's not all sunshine
with the other databases either (in this case it's Oracle 10g choosing
a suboptimal plan). We also have another customer with a really large
SQL Server 2000 database (3 TB, I believe), and they actually ran into
a bug in the database. I was amazed, because I had not expected that.
Along the way I also learned that SQL Server 2000 apparently does not
use native threads internally.
> > I understand the reasoning behind MaxDB's approach of distributing data
> > evenly across volumes, but this only works well if you have them on
> > physically separate disks. Even in that case you might suffer major
> > data loss if one volume is corrupt. I may be missing something, but I
> > am not aware of a way to extend a volume if you need more space. And
> > that seems a fairly common thing to do - especially with SANs, which
> > can grow seamlessly. Then again, you might say that multiple volumes on
> > a single SAN are not a performance issue because there is no
> > hotspot...
> You can't "grow" a volume, but you can add another one; the system then
> tries to distribute the data evenly across all volumes again. During
> periods of low I/O the DB also moves blocks to the new volume, so you will
> finally end up with a database whose data is evenly distributed across all
> volumes. This is true for usual OLTP databases; for OLAP this concept has
> changed with the implementation of the BI feature pack (as far as I
> understood the technical details).
But if all volumes reside on a single physical disk (no SAN, just a
local disk or RAID), then even distribution might actually hurt
performance.
[...]
> > The difference here is that you do not have to migrate from 8 to 10 as
> > often as you have to migrate from MaxDB 7.5 to newer versions.
> You usually migrate once, from 7.5 to 7.6 ;)
Well, yes. But I have the impression that with MaxDB it is more often
necessary to migrate to a new version to get rid of bugs than it is
with other RDBMSs.
> I think that one of the main reasons why companies decide to use Oracle is
> that Oracle is well known, widely accepted, and the de-facto standard
> database. If someone chooses a different database, the - I call it - ATV
> ("accepted tolerance value") is MUCH lower, although many problems, of
> whatever kind, could certainly be solved with an appropriate amount of
> time. If you have a problem with MaxDB, everyone will say "if only we had
> stayed with Oracle...". If Oracle has a big problem in whatever area, you
> just live with it - because it's Oracle. You take it into account, install
> dozens of patches on your systems, and hire additional DBAs, just because
> that's how it is, and it's accepted like this.
There is definitely truth in this. The critical part here is "...
with an appropriate amount of time". But I also have to say that we
never had issues like wrong query results with any of the Oracle and
SQL Server versions we worked with...
> I'm sure that if you invested the same amount of time in a MaxDB migration
> initially as you needed to get your Oracle databases installed, patched,
> and tuned, you would get almost similar results.
> Your points are really interesting; this is not about a flame war - it's
> interesting to learn why one chose not to use MaxDB.
I never read / meant it that way.
Kind regards
robert
--
MaxDB Discussion Mailing List
For list archives: http://lists.mysql.com/maxdb
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]