On 12/19/05 2:36 AM, Sam Vilain wrote:

> After some investigation, I'm afraid Tangram does not operate in the
> manner that this test script requires. It's not really a tool for
> mapping an existing database as the test expects.
Yeah, that's what I thought too, but some comments on IRC seemed to
indicate otherwise.

> I couldn't see any difference between the "complex" and "simple" cases,
> apart from the use of "inflate" and "deflate", so the "Complex" schema
> for the Tangram use case only uses the schema.

Yes, the "complex" case is meant to test the overhead of inflate/deflate
in various contexts.

> Is the intent just to get all the categories and codenames for products
> that have a product name like "Product 200%"? Why not do it with a
> single query?

In some cases, it is doing it with a single query. The code you see
below the query ("foreach my $p (@$ps) ...") is there to confirm that
the category data has been fetched. For ORMs that do not support doing
this in a single query, that clause will actually fetch the data. For
other ORMs, the data will already be available and there'll be no return
trip to the database. The code looks the same for all ORMs to make sure
the overhead is identical, and to accurately simulate the task of
fetching and using this data. (Rough sketches of the ORM and raw-DBI
versions of this task appear at the end of this message.)

> Obviously this new version is likely to be more efficient, depending on
> the size of the data set, because a complex search could be reduced from
> several queries to one.
>
> This is one of the major reasons why I've never bothered with optimising
> performance for this situation - by coding in the correct manner, the
> number of very small database hits is minimised. Then it doesn't really
> matter how long they take.

I think you misunderstand what the tests are doing. They're meant to
highlight exactly the features that you're describing. Fetching objects
and some of their associated objects in other tables is a common task.
Many of the tests revolve around this task (with various twists). There
are other, earlier tests that measure the performance of single-row
loads, and so on.

> In my experience, the database has always been the bottleneck, apart
> from for loader type scripts with large amounts of input.

Yes, the db itself will usually be the biggest performance drain. But
the point of this benchmark suite is to isolate the overhead caused by
the ORM itself. As you can see from the results, there's a wide
performance range. And if you look at the results that include raw
DBI's performance, the range gets even bigger.

http://rose.sourceforge.net/wiki/index.php/RDBO/Benchmark/DBI

Anyway, I'll take a look at what you've submitted and I'll see if I can
use it to make some sort of reasonable comparison. I'm going to have to
change the inflate/deflate stuff to use DateTime instead of Date::Manip
because it's important for all the ORMs to use the same kind of object
in order to properly isolate the overhead imposed by the ORM itself
(instead of muddying the waters by highlighting the differences between
the various date modules, which is fodder for another benchmark suite
entirely :)

-John
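A minimal sketch of the single-query style described above, using
Rose::DB::Object::Manager. The class, relationship, and column names here
(MyTest::Product, "category", "name") are made up for illustration and are
not the actual benchmark code; the real suite defines its own classes for
each ORM:

    use strict;
    use warnings;

    use Rose::DB::Object::Manager;
    use MyTest::Product;  # hypothetical RDBO class with a "category" relationship

    # Fetch the matching products *and* their categories in one query.
    my $ps = Rose::DB::Object::Manager->get_objects(
      object_class => 'MyTest::Product',
      query        => [ name => { like => 'Product 200%' } ],
      with_objects => [ 'category' ]);

    # Use the related data.  With the prefetch above, these are plain
    # method calls on in-memory objects; an ORM that can't prefetch
    # would hit the database again inside this loop.
    foreach my $p (@$ps)
    {
      my $cat_name = $p->category->name;
    }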
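And roughly the same task in raw DBI, which is the kind of baseline the
wiki page above compares against. The connection string, table, and
column names are assumptions, not the benchmark's actual schema:

    use strict;
    use warnings;

    use DBI;

    my $dbh = DBI->connect('dbi:Pg:dbname=bench', 'user', 'pass',
                           { RaiseError => 1 });

    my $sth = $dbh->prepare(
      'SELECT p.name, c.name FROM products p ' .
      'JOIN categories c ON (c.id = p.category_id) ' .
      'WHERE p.name LIKE ?');

    $sth->execute('Product 200%');

    while (my ($product_name, $category_name) = $sth->fetchrow_array)
    {
      # "Use" the data, as in the ORM version, but with no object
      # construction or column inflation in the way.
    }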
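On the inflate/deflate point: the idea is only that every ORM should hand
back the same class of object. A minimal sketch of a DateTime-based
conversion, assuming a "YYYY-MM-DD HH:MM:SS" column format; the sub names
and format are made up, and each ORM would wire this up through its own
inflate/deflate hooks:

    use strict;
    use warnings;

    use DateTime;

    # Database string -> DateTime object
    sub inflate_datetime
    {
      my ($str) = @_;
      my ($y, $mo, $d, $h, $mi, $s) =
        $str =~ /^(\d{4})-(\d{2})-(\d{2})[ T](\d{2}):(\d{2}):(\d{2})/;
      return DateTime->new(year => $y, month  => $mo, day    => $d,
                           hour => $h, minute => $mi, second => $s);
    }

    # DateTime object -> database string
    sub deflate_datetime
    {
      my ($dt) = @_;
      return $dt->strftime('%Y-%m-%d %H:%M:%S');
    }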