On 2 March 2010 08:30, Darren Duncan <[email protected]> wrote:
> S.A. Kiehn wrote:
>>
>> I do not see many posts regarding uses of KiokuDB within Catalyst, so I
>> was curious about the opinion of the community in regards to its usage.
>> Is it still too early within development?
>>
> Well, I happen to be strongly opinionated on this topic, so here goes ...
>
> While these other DBMSs have their uses, I believe that anyone is
> misguided who figures they are superior solutions for most uses of
> relational databases.
I think you may be right. However, I've spent barely 3 or 4 hours looking
at how to port an existing application to Moose/KiokuDB, and I think it may
be an interesting thing to do.

Our current logic uses DB tables, with DBIx::Class and some shims like
::DynamicSubclass. It's powerful, flexible, fast, and very sophisticated
for reporting. But really, the more I think about it, our business objects
would benefit from living in a treelike structure: a "module" might contain
submodules with different business rules, etc. We can cope with the
existing logic, but the roadmap of new features contains a few things that
would seem to require massive re-engineering of our tables...

... and though I'm sure all this business logic /can/ be represented in a
relational DB, it might just require some deep thought. With Moose, you
just create your objects and the rules that tie them together; then you
plug it into KiokuDB and the job is done. And when you change your object
structure, you don't need to think about how to model it, or how that will
affect various other relationships and queries. I added the bulk of one of
the "complicated" features I was worrying about representing with DBIC in
around 30 minutes.

(I'm sure I'm missing various frustrations, and there may be pitfalls. On
the other hand, Yuval and co. at iinteractive are Very Clever (TM), so
perhaps I'm not?)

So my first impression (after a few hours) is that:

* Using Moose + KiokuDB will be a fantastic way to rapid-prototype some
  complex business requirements.

* I'll hold judgement on how well it scales until I've done a test with a
  large sample database. I suspect that starting from known nodes and
  walking the tree to the data I need will be reasonably fast. (That's the
  major use case.) I know KiokuDB is reported to be "fast enough" for
  small-to-medium datasets; I don't know if anyone's got good benchmarks
  for large-to-stupidly-large datasets, but it would be reassuring to have
  such data!
* On the other hand, I suspect that reporting/random-access search will be
  slower. That's a lesser use case, but still important. But once the
  object structure is solidified enough to know *what* I need to
  search/report on, I can add some code to generate summary tables etc. to
  query with good ol' DBI

--
osf'
_______________________________________________
List: [email protected]
Listinfo: http://lists.scsys.co.uk/cgi-bin/mailman/listinfo/catalyst
Searchable archive: http://www.mail-archive.com/[email protected]/
Dev site: http://dev.catalyst.perl.org/
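P.S. Since I keep saying "you just create your objects and plug them into
KiokuDB", here's a minimal sketch of what I mean. The class and attribute
names (My::Module, submodules) are invented for illustration; the KiokuDB
calls (connect, new_scope, store, lookup) are the actual API, using the
in-memory "hash" backend, which is handy for prototyping:

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# Hypothetical business class -- the names are invented for illustration.
package My::Module;
use Moose;

has name       => ( is => 'ro', isa => 'Str', required => 1 );
has submodules => (
    is      => 'rw',
    isa     => 'ArrayRef[My::Module]',   # a module may contain submodules
    default => sub { [] },
);

__PACKAGE__->meta->make_immutable;

package main;
use KiokuDB;

# "hash" is KiokuDB's in-memory backend, fine for prototyping; swap in
# e.g. a BDB or DBI backend DSN for real persistence.
my $dir   = KiokuDB->connect("hash");
my $scope = $dir->new_scope;    # keeps the live object graph linked

my $root = My::Module->new(
    name       => 'billing',
    submodules => [ My::Module->new( name => 'invoicing' ) ],
);

my $id = $dir->store($root);    # the whole tree is serialised

# Start from a known node and walk the tree -- just follow Perl refs.
my $again = $dir->lookup($id);
print $again->submodules->[0]->name, "\n";   # prints "invoicing"
```

No schema, no mapping layer: changing the object structure is just editing
the Moose class.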
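P.P.S. For the reporting side, the summary-table idea might look like the
sketch below. The schema (module_summary) and the hard-coded rows are
invented stand-ins for a real walk of the KiokuDB tree; the DBI calls
themselves are standard:

```perl
use strict;
use warnings;
use DBI;

# In-memory SQLite keeps the sketch self-contained; point this at your
# real reporting database in practice.
my $dbh = DBI->connect( 'dbi:SQLite:dbname=:memory:', '', '',
    { RaiseError => 1 } );

$dbh->do(q{
    CREATE TABLE module_summary (
        kioku_id     TEXT PRIMARY KEY,  -- the KiokuDB object ID
        name         TEXT,
        n_submodules INTEGER
    )
});

my $ins = $dbh->prepare('INSERT INTO module_summary VALUES (?, ?, ?)');

# Real code would walk the object tree and insert one row per module;
# these two rows are stand-ins for that.
$ins->execute( 'uuid-1', 'billing',   1 );
$ins->execute( 'uuid-2', 'invoicing', 0 );

# Random-access reporting is now a plain SQL query.
my ($total) = $dbh->selectrow_array(
    'SELECT SUM(n_submodules) FROM module_summary' );
print "total submodules: $total\n";   # prints "total submodules: 1"
```

Regenerating the table on a schedule (or on write) keeps the reporting
path fast without forcing the live objects back into relational shape.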
