On 11 Aug 2007, at 02:10, Martin Flack wrote:

Kieren, thanks for these comments.

The bulk of our work is on citation/import/export, speed optimization, user features, etc., so I am not at all averse to having some of the structural points re-examined, and you're probably correct that we haven't questioned them in a long time. If this is an issue for newcomers I'd like to be aware of it.

I had a good long look at Connotea about a year ago, but with my limited time it was just too hard for me to get into. I would really like to use it again for a new project now, but for the reasons I gave before I can't make full use of it yet.

I still really want to persuade you to refactor to DBIx::Class. I think that process may even be quicker than refactoring the CDBI code on its own, and it is certainly more developer/community friendly. Bear in mind that many DBIx::Class developers are former CDBI developers, and its design is informed by a knowledge of CDBI's limitations. In Connotea the model is the largest software problem to be solved, and while you've solved it well in many ways, it's very hard for casual developers to pick it up and run with it. I think DBIx::Class would provide much of a solution to this - the biggest feature in my book being that DBIC_TRACE=1 myscript.pl prints all the SQL used to STDERR.
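To illustrate (a sketch - it applies to any script that uses a DBIx::Class schema; 'myscript.pl' is just a placeholder name):

DBIC_TRACE=1 perl myscript.pl    # every generated SQL statement goes to STDERR

or, equivalently, from inside the script once you have a schema object:

$schema->storage->debug(1);      # same effect, toggled in code

which makes it trivial to see exactly what SQL any piece of model code produces.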

I've attached a tarball of some DBIC/Catalyst::Model code from the LinkMine (social bookmarking) project which I referred to in an earlier email. It's not working code as such, and I'm not completely convinced that it follows best practice in all cases, but it shows the general idea. I've also grepped for all the instances where the model is called in the controller code. I had to do something slightly exotic to get a MySQL database schema: I took the DBIx::Class schema declared in LinkMSchema (which was originally written for Postgres) and ran the following script:

#!/usr/bin/perl
use warnings; use strict;
use LinkMSchema;   # the DBIx::Class schema class

# DBD::SQLite creates the (empty) file 'db' on connect, so no
# pre-existing database is needed - connect_info just wants a DSN
my $schema = LinkMSchema->connect('dbi:SQLite:db');

# emit the CREATE TABLE etc. statements for MySQL into the current dir
$schema->create_ddl_dir(['MySQL'], undef, './');

which provides me with the SQL for a MySQL schema. I did lose a couple of triggers, but that's not important for illustrative purposes.

I found the horrible query you mentioned in Bibliotech::DB.pm. You don't have to work around the limitations of DBIx::Class to implement this - it knows its own limitations. Here's the perldoc that sums up the Connotea situation perfectly: http://search.cpan.org/~mstrout/DBIx-Class-0.08005/lib/DBIx/Class/Manual/Cookbook.pod#Arbitrary_SQL_through_a_custom_ResultSource
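For the record, the recipe from that Cookbook entry looks roughly like this (a sketch - the class, table and column names are hypothetical, not from Connotea):

package LinkMSchema::HardQuery;    # hypothetical result class
use base qw/DBIx::Class/;
__PACKAGE__->load_components('Core');
__PACKAGE__->table('hard_query');  # placeholder name, never sent to the db
__PACKAGE__->add_columns(qw/user_id bookmark_count/);

# the arbitrary SQL, parenthesised so it can act as a subselect
__PACKAGE__->result_source_instance->name(\
    '( SELECT user_id, COUNT(*) AS bookmark_count
         FROM user_bookmark GROUP BY user_id )');

After that you query it like any other resultset, and it composes with search(), ordering and paging as normal.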

Back in the day I didn't use CDBI much because I found it such hard going, with uninformative errors and documentation that I found difficult to follow. It also has a habit of deep recursion - you can demonstrate this with the Connotea code base by adding Sub::WrapPackages in an appropriate place for debugging (I put it in Bibliotech::Apache). Then DBIx::Class came along, which has learned from the mistakes of CDBI. Bear in mind my database usage is generally pretty limited, with little or no need for optimisation on my side, so a lot of the DBIC code I write looks like this (in-memory loading of the schema):

#!/usr/bin/perl -w
use strict;
use DBIx::Class::Schema::Loader qw/make_schema_at/;

# build the schema classes in memory straight from the live database
make_schema_at( 'MyDB',
                { relationships => 1, debug => 0, auto_update => 1 },
                [ 'dbi:mysql:database=mydb', 'user', 'pass' ] );

my $schema = MyDB->connect('dbi:mysql:database=mydb', 'user', 'pass');
my $rs     = $schema->resultset('Table')->search({});
while (my $rec = $rs->next) {
    # do stuff
}
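Once the schema is loaded, more involved queries stay just as declarative (a sketch - the resultset name, columns and 'author' relationship are hypothetical):

# conditions, ordering, paging and eager loading in one call;
# DBIC turns this into a single SQL statement with a JOIN
my $recent = $schema->resultset('Table')->search(
    { created  => { '>' => '2007-01-01' } },
    { order_by => 'created DESC',
      rows     => 10,
      prefetch => 'author' },
);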


2. Take your dispatcher code and get it working independently of Apache (basically factoring $r out into utility functions). Next up, delegate the dispatching logic and content generation logic below it and bend it into Catalyst logic. I really think this wouldn't be hard for someone familiar with the codebase, although it would be tricky for me. What this buys you is a dispatch process that's easily understood by a large and active developer community.

$r is basically only handled in Apache.pm. You can write a test script that creates a Bibliotech::Command or uses Bibliotech::Parser to create one. Bibliotech::Fake was created for this purpose.

Having said that, you are correct that it would be good to add more abstraction to make this easier, and to make command-line uses easier.


Indeed. Also, any example command-line scripts would be much appreciated; this would also begin to address the mod_perl dependency, which is a problem for widespread use of your code. The mod_perl dependency is a show-stopper for me - if I get a working standalone model with example code, I'm much more likely to hack at it (although CDBI still leaves me inclined to avoid it unless there's plenty of clear example code).


With DBIC, things like using a cache engine, i.e. memcached, should be transparent and trivial to remove for users who don't need that feature (cf. the tight coupling and real maintenance headache of the equivalent CDBI code).

It is abstracted in Bibliotech::Cache, so maybe just a Bibliotech::Cache::None or Bibliotech::Cache::Memory gets added to the project? A little of the caching is useful even within the same request.

If you're going to do that, make life easy on your developers and aim for total transparency - so a null cache, then a simple in-memory cache if that proves necessary, then memcached.
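The null backend is tiny - a sketch of what a hypothetical Bibliotech::Cache::None might look like, assuming a simple get/set interface:

package Bibliotech::Cache::None;
use strict;
use warnings;

sub new    { my $class = shift; bless {}, $class }
sub get    { undef }   # never a hit - callers always recompute
sub set    { 1 }       # accept the value and throw it away
sub delete { 1 }

1;

Because every lookup misses, the rest of the code can call the cache unconditionally and stays completely unaware of which backend is plugged in.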


Do you happen to know of any public Catalyst projects that are required to use server-wide caching to handle their traffic load, so we could see their programming model for the cache interface? That would be interesting.


There are a few Catalyst plugins on CPAN that provide simple caching, and they can of course be modified for more complex use cases. These include component caching (see Catalyst::Plugin::Cache, which has a bare-bones Catalyst app as part of its test suite) and page caching (Catalyst::Plugin::PageCache - again with good app-based test coverage). Catalyst::Plugin::Cache::Memcached unfortunately lacks this. The core of caching with DBIx::Class (with the swappable backends) is DBIx::Class::Cursor::Cached. TT also does some memory caching of its own.
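To show the shape of the DBIC side (a sketch - the DSN and resultset name are placeholders, and cache_object takes anything with a Cache::Cache-style get/set interface):

my $schema = LinkMSchema->connect(
    'dbi:mysql:database=mydb', 'user', 'pass',
    { cursor_class => 'DBIx::Class::Cursor::Cached' },
);

my $rs = $schema->resultset('Table')->search({},
    { cache_for    => 300,        # seconds to keep the cached resultset
      cache_object => $cache });  # e.g. a Cache::FileCache or memcached object

Swapping memcached for a file or in-memory cache is then a one-line change to how $cache is constructed.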

Something else that Catalyst buys you is the ready-rolled ability to distribute your app as a CPAN dist. This makes installation pretty easy, assuming you have your Makefile.PL set out properly:

$ perl Makefile.PL
$ make installdeps # or sudo make installdeps
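For reference, the Makefile.PL that catalyst.pl generates uses Module::Install and looks roughly like this (the dist name and dependency list here are placeholders):

use inc::Module::Install;

name 'LinkMine';                 # placeholder dist name
all_from 'lib/LinkMine.pm';

requires 'Catalyst';
requires 'DBIx::Class';
# ...one requires line per CPAN dependency...

catalyst;                        # adds the Catalyst-specific make targets
install_script glob('script/*.pl');
auto_install;                    # this is what provides the installdeps target
WriteAll;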

Also, as far as Catalyst goes, there's a well-developed set of (again multiple-backend) authentication/authorisation code, and the obvious data store for your auth/authz is the MySQL database you already have.
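As a sketch of the Catalyst side (the plugin names are the current auth stack; the model class and column names are hypothetical):

use Catalyst qw/
    Authentication
    Authentication::Store::DBIC
    Authentication::Credential::Password
/;

__PACKAGE__->config->{authentication}{dbic} = {
    user_class     => 'LinkMDB::User',  # hypothetical DBIC model class
    user_field     => 'username',
    password_field => 'password',
    password_type  => 'hashed',
};

# then, in a controller: $c->login($user, $pass) and $c->user thereafter

The store is swappable, so the same controller code works whether the auth data lives in MySQL, a flat file or LDAP.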

(Mail cc'd to your personal address in case the tarball gets stripped by SourceForge.)

Kieren


_______________________________________________
Connotea-code-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/connotea-code-devel
