Just thinking laterally (since I don't have the experience to apply to
the analysis), a practical business solution is to throw hardware at
the problem - particularly for a legacy application on old hardware.
One site I worked at was having issues with the overnight posting of the
day's accounts taking upwards of 12 hours. The application had outgrown
its initial design. This was brought down to 15 minutes by upgrading
the RAM and moving to Serial Attached SCSI RAID 10. The other thing to
consider is the decreasing cost of solid-state storage (I've heard good
things about it but haven't used it myself yet).
http://www.mcobject.com/in_memory_database - non-Oracle but good overview
http://www.dba-oracle.com/t_ram_disk_performance.htm
http://www.ramsan.com/files/oracle_performance_tuning_with_ssd.pdf -
"The dramatic price/performance ratio of SSD is changing the way that
Oracle databases are tuned. Sub-optimal Oracle databases no longer have
to undergo expensive and time-consuming re-design. SSD technology is now
competing head-on with Oracle consulting services."
cheers, -ben
Tudor Girba wrote:
Hi,
On 11 Feb 2012, at 21:13, Philippe Marschall wrote:
On 11.02.2012 19:30, Tudor Girba wrote:
Hi,
Sorry for the late reply, and thanks everyone for the suggestions.
I did not provide many details because I am new to this domain and I wanted to
see from the reactions whether I was missing some relevant direction.
We are trying to measure how an Oracle database can cope with an increase in
usage (basically, there will be more users for the application).
Just more users or also more data? Will the users access the database through
one application or does each have its own application?
Both, but users is the more pressing problem. We already have enough data to
expose problems. It's one application: a legacy two-tier Delphi application
wrapped with a three-tier JEE layer.
We are basing our analysis on typical SQL statements coming from the
application. We are currently doing load testing by:
- recording SQL statements from some use cases that are considered to be
important
- generalizing them by replacing actual values with generic variables
That shouldn't be necessary; the queries should already contain bind variables
(unless your database layer is crap).
Well, indeed, this should not happen, but legacy is never clean :). Anyway,
there aren't that many cases.
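To make the generalization step concrete, here is a minimal JDBC-style sketch; the table, columns and values are made up for illustration, not taken from the actual application:

  import java.sql.Connection;
  import java.sql.PreparedStatement;
  import java.sql.SQLException;

  // Hypothetical example: a captured statement such as
  //   SELECT * FROM orders WHERE customer_id = 4711 AND status = 'OPEN'
  // is generalized so the literals become bind variables supplied at replay time.
  class GeneralizedStatement {
      static PreparedStatement prepare(Connection con, int customerId, String status)
              throws SQLException {
          PreparedStatement ps = con.prepareStatement(
                  "SELECT * FROM orders WHERE customer_id = ? AND status = ?");
          ps.setInt(1, customerId);    // meaningful value for the first variable
          ps.setString(2, status);     // meaningful value for the second variable
          return ps;
      }
  }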
- providing meaningful values for the generic variables
- replaying them against the database from several client machines
- consuming the first record from the responses
Why only the first?
Because we are not interested in the response. Only to check that something
gets returned. Is this a problem?
- reporting the timing of the statements
- recording the CPU, memory and I/O load of the server
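As a rough sketch of what one such replay client could look like in plain JDBC (the connection string, credentials and statements below are placeholders, not our actual setup):

  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.sql.PreparedStatement;
  import java.sql.ResultSet;
  import java.util.Arrays;
  import java.util.List;

  public class ReplayClient {
      public static void main(String[] args) throws Exception {
          // Placeholder connection string and credentials.
          try (Connection con = DriverManager.getConnection(
                  "jdbc:oracle:thin:@//dbhost:1521/ORCL", "loadtest", "secret")) {

              // Each entry: a generalized statement followed by its bind values (all made up).
              List<Object[]> replayList = Arrays.asList(
                  new Object[] { "SELECT * FROM orders WHERE customer_id = ?", 4711 },
                  new Object[] { "SELECT * FROM invoices WHERE due_date < ?",
                                 java.sql.Date.valueOf("2012-02-01") });

              for (Object[] entry : replayList) {
                  String sql = (String) entry[0];
                  long start = System.nanoTime();
                  try (PreparedStatement ps = con.prepareStatement(sql)) {
                      for (int i = 1; i < entry.length; i++) {
                          ps.setObject(i, entry[i]);   // bind the meaningful values
                      }
                      try (ResultSet rs = ps.executeQuery()) {
                          rs.next();                   // consume only the first record
                      }
                  }
                  long elapsedMs = (System.nanoTime() - start) / 1000000;
                  System.out.println(elapsedMs + " ms  " + sql);
              }
          }
      }
  }

One thing to keep in mind about consuming only the first record: for statements that return many rows, most of the result set is never fetched over the network, so the measured time may exclude a fetch cost that the real application does pay; whether that matters depends on how the application actually reads its results.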
Oracle already provides tools for many of these things.
However, I am interested in pitfalls, and in how people interpret the
results, given that it is hard to determine what typical usage looks like in
terms of which statements to trigger and at what delays.
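One common way to approximate typical usage is to give each simulated client a weighted statement mix and randomized think times between statements; a rough sketch, with the weights and delay range invented purely for illustration:

  import java.util.Random;

  // Rough pacing sketch: each simulated client picks statements according to a
  // weight and sleeps a randomized "think time" between them. The mix and delay
  // range below are invented; in practice they would be estimated from application logs.
  class Pacing {
      private static final Random RANDOM = new Random();

      static String pickStatement() {
          // 70% order lookups, 30% invoice lookups (illustrative mix).
          return RANDOM.nextInt(100) < 70
                  ? "SELECT * FROM orders WHERE customer_id = ?"
                  : "SELECT * FROM invoices WHERE due_date < ?";
      }

      static void thinkTime() throws InterruptedException {
          // 1-5 seconds between statements, uniformly distributed.
          Thread.sleep(1000 + RANDOM.nextInt(4000));
      }
  }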
Get yourself a competent Oracle DBA, and probably a sysadmin and storage guy as
well. No seriously, you wouldn't want to have GemStone benchmarked by someone
who has never used Smalltalk before, would you?
Thanks, we do have a competent Oracle specialist :). But, this being a tricky
job, I thought of asking around for other experiences.
Cheers,
Doru
Cheers
Philippe
--
www.tudorgirba.com
Innovation comes in least expected form.
That is, if it is expected, it already happened.