On 12/19/05 5:48 PM, Perrin Harkins wrote:
> On Mon, 2005-12-19 at 17:30 -0500, John Siracusa wrote:
>> Pass the "--time" and (optionally) the "--hi-res-time" flags to bench.pl to
>> see a wall clock comparison.
>
> Are you sure? I looked at Benchmark a while back trying to find a way
> to measure wall time with cmpthese() and couldn't find one. I don't
> think it does wall time comparisons in the iterations/second stats.
If the number of iterations is fixed, you get a usable wallclock time in the
results. Example:
# Simple: insert 1
Benchmark: timing 3000 iterations of DBIC, RDBO...
      DBIC:  8.26625 wallclock secs ( 3.31 usr + 0.49 sys = 3.80 CPU) @ 789.47/s (n=3000)
      RDBO:  4.74367 wallclock secs ( 1.61 usr + 0.26 sys = 1.87 CPU) @ 1604.28/s (n=3000)
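For reference, the fixed-count form corresponds to a timethese() call with a positive count, something like the sketch below (the worker subs here are throwaway placeholders, not the actual bench.pl tasks):

```perl
use strict;
use warnings;
use Benchmark qw(timethese);

# Placeholder workloads standing in for the real DBIC/RDBO insert code.
# timethese() prints the "N wallclock secs (...)" lines shown above and
# returns the Benchmark objects keyed by name.
my $results = timethese(3000, {
    DBIC => sub { my $x = 0; $x += $_ for 1 .. 50 },
    RDBO => sub { my $x = 0; $x += $_ for 1 .. 25 },
});
```

Since every sub runs exactly n=3000 times, the wallclock figures are directly comparable.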
If, however, the test is set to run for some minimum number of CPU seconds,
you have to look at the number of iterations and do some math of your own:
# Simple: search with limit and offset
Benchmark: running DBIC, RDBO for at least 5 CPU seconds...
      DBIC: 15.4103 wallclock secs ( 4.85 usr + 0.57 sys = 5.42 CPU) @ 103.69/s (n=562)
      RDBO: 14.6284 wallclock secs ( 4.94 usr + 0.47 sys = 5.41 CPU) @ 161.00/s (n=871)
Here RDBO ran for 871 iterations before meeting or exceeding the 5 CPU second
limit, while DBIC went through only 562. So the raw wallclock figures aren't
directly comparable; you have to normalize them by the iteration count
(wallclock secs divided by n) before comparing.
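The math is trivial; using the numbers from the run above:

```perl
use strict;
use warnings;

# Wallclock seconds and iteration counts from the timed run above
my %runs = (
    DBIC => { wall => 15.4103, n => 562 },
    RDBO => { wall => 14.6284, n => 871 },
);

# Per-iteration wallclock time is what's actually comparable here
for my $orm (sort keys %runs) {
    my $per_iter = $runs{$orm}{wall} / $runs{$orm}{n};
    printf "%s: %.5f wallclock secs/iteration\n", $orm, $per_iter;
}
```

That works out to roughly 0.02742 secs/iteration for DBIC versus 0.01679 for RDBO.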
>> Anyway, the way the bench is designed, the I/O wait and other db overhead
>> should be exactly equal for each ORM. They're all using the same db, and
>> they all have their own private subset of rows, in an attempt to defeat OS
>> and db caching effects across ORMs.
>
> The thing I had in mind here is how some of them make more fetches than
> they ideally should for complex queries, which is a major source of
> performance drain in Class::DBI. I suppose it's hard to tell what
> should be measured here, since complex queries in Class::DBI typically
> mean hand-coded SQL or else manipulation in Perl, and anyone concerned
> about performance would probably choose the former.
Well, using an ORM is supposed to save you from writing your own SQL. Once
you open that door, where do you draw the line?
Anyway, the fact that working around the feature/performance deficiencies of
a particular ORM is "common practice" is hardly a convincing endorsement :)
In the particular case of fetching related objects, many ORMs do include
this feature, and users like it and use it. This is what people are using
an ORM for: abstraction.
That said, if you're not interested in a particular test because you don't
use the ORM to do that particular task (or "barely" use the ORM, à la
writing custom SQL), just ignore those results. Each test is meant to
highlight a certain performance characteristic. There's no "overall score"
intentionally. The bench.pl script is merely a set of individual tests.
The developer must decide how relevant each one is to his app.
-John
_______________________________________________
Rose-db-object mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/rose-db-object