http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=13019
David Cook <[email protected]> changed:

           What    |Removed |Added
----------------------------------------------------------------------------
                CC |        |[email protected]

--- Comment #48 from David Cook <[email protected]> ---
Looking over this again... I don't know whether we're implementing Koha::Objects in the best way possible. After all, every Koha::Objects subclass is basically just going to consist of the following:

sub type {
    return 'Borrower';
}

sub object_class {
    return 'Koha::Borrower';
}

That seems like unnecessary duplication to me. Why not follow the DBIC model and go with something like the following?

my $Borrowers = Koha::Objects->new('Borrower');
my @objects   = $Borrowers->search({ surname => 'Jones' });

I suppose the limitation of that is that you can't have custom methods for the group of objects, but would you need them? I imagine all the custom methods are going to be at the individual object level.

--

I must admit, after looking through DBIC more and more, I'm less sure about the encapsulation approach. I wonder how much functionality we'll actually use by encapsulating the DBIC objects in Koha objects. Yet DBIC objects on their own don't have enough functionality to be all we need. Or do they? Maybe we should keep DBIC objects as data objects, and then have other modules/objects for applying business logic to those objects. In terms of CRUD, DBIC is fine the way it is.

But let's think about a Koha::Biblio object... it could have methods for importing, exporting, filtering, and extracting data from a Koha::Schema::Biblio object or a Koha::Schema::Deletedbiblio object.
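To make the factory idea above concrete, here is a minimal sketch of a single generic Koha::Objects class. The constructor arguments and the Stub::* classes are hypothetical stand-ins so the sketch runs on its own; the real thing would delegate search() to a connected DBIC schema.

```perl
#!/usr/bin/perl
# Sketch only: one generic Koha::Objects factory instead of a near-empty
# subclass per table. Stub::Schema/Stub::ResultSet are fake stand-ins for
# a connected DBIC schema and are not real Koha classes.
use strict;
use warnings;

package Koha::Objects;

sub new {
    my ( $class, $type, $schema ) = @_;
    # $type is the result class name (e.g. 'Borrower');
    # $schema is whatever hands out resultsets (DBIC in real life).
    return bless { type => $type, schema => $schema }, $class;
}

sub type         { return $_[0]->{type} }
sub object_class { return 'Koha::' . $_[0]->{type} }

sub search {
    my ( $self, $cond ) = @_;
    # Delegate straight to the underlying resultset for this type.
    return $self->{schema}->resultset( $self->{type} )->search($cond);
}

# Tiny stand-in for a DBIC schema so the sketch is self-contained.
package Stub::Schema;
sub new       { return bless {}, shift }
sub resultset { return Stub::ResultSet->new( $_[1] ) }

package Stub::ResultSet;
sub new    { return bless { type => $_[1] }, $_[0] }
sub search { my ( $self, $cond ) = @_; $self->{cond} = $cond; return $self }

package main;

my $Borrowers = Koha::Objects->new( 'Borrower', Stub::Schema->new );
print $Borrowers->type, "\n";          # prints "Borrower"
print $Borrowers->object_class, "\n";  # prints "Koha::Borrower"
```

The point is that type() and object_class() fall out of the one constructor argument, so there is nothing left for a per-table subclass to do unless it genuinely has custom collection-level behaviour.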
Koha::Biblio->new($biblionumber);
# or maybe:
Koha::Biblio->new($dbic_object);
# Not dissimilar to using inheritance, except that you can use either
# Biblio or Deletedbiblio in this case.

my $output = $biblio->format("opac", "marcxml");     # detail view
my $output = $biblio->format("opac", "isomarc");     # exporting as ISO MARC
my $output = $biblio->format("intranet", "marcxml"); # detail view
my $output = $biblio->format("intranet", "dc");      # detail view as Dublin Core

--

In my latest OAI stuff, I have DBIC objects for OAI records and OAI repositories. I then have a Harvester.pm (which has methods for getting active repositories, querying the repositories, queueing the records, etc.) and an Importer.pm (which gets the queued records, analyzes them, transforms the metadata, and calls the internal C4::Biblio API for adding records to the database).

Biblio is actually pretty interesting, because you can get biblios in all sorts of ways: from Zebra, from MySQL, from Z39.50, from OAI, from importing through other tools. You should be able to feed ISO MARC or MARCXML into Koha::Biblio and have it take care of everything thereafter. Actually, Koha::Biblio might not even be the most appropriate name in that case. Maybe it should be Koha::Record, and that becomes the one and only access point for that record.

I suppose, though... if you were doing a DBIC search, you'd have to feed your result into Koha::Record before you could work on it, and you might be tempted to avoid Koha::Record. Whereas if you're using Koha::Records or Koha::Biblios and that returns the DBIC object wrapped in a Koha object, then you're forced to use it (unless you bypass Koha::Biblios and just use DBIC on its own).

But what is a "biblio"... when do we use it?

1) We add biblios to the catalogue. We might do this by hand, by Z39.50, from OAI, or by some other means. But basically, we have a metadata record in MARC format.

2) We search for biblios in the catalogue. We query Zebra and it returns biblio records.
We see a list of search results.

3) We examine individual records. We click on the results and look at the records.

4) We edit the metadata records. We load the MARCXML into a form that can be edited by humans, and then save it back into the relational database (where it gets stored in biblioitems.marcxml and processed into the biblio and biblioitems tables).

5) We export records (to Zebra, to browsers for download, via OAI, via Z39.50, etc.). We output the MARCXML to other destinations.

--

I'm sure there are other use cases for biblios, but the thing I keep noticing is that most of these probably wouldn't need the DBIC object. Adding and editing would, since the MARCXML needs to be stored in biblioitems.marcxml and parsed into the other database fields. Most of the `biblio` and `biblioitems` fields are calculated fields rather than data in their own right.

--

That all being said... I like the idea of the code being more DRY. So maybe it does make sense to have a Koha::Object base class that defines methods for adding, modifying, and deleting database data.

Hmm... but I'm not sure that can work either. Thinking about "Biblio" or "Borrower": when you delete one of these, you don't want to just nuke it from the database. You want to move it to "Deletedbiblio" or "Deletedborrower" and then nuke it. I suppose you could always have standard methods and then override them in these cases. Likewise for moving issues into old_issues, reserves into old_reserves, etc. We'd probably also want a single Acquisitions::Budgets object even though there are multiple tables.

--

Ultimately... I don't think the approach of wrapping DBIC objects with Koha objects will work. I think it would work in some cases, but it wouldn't be flexible enough to accommodate all of them. They also get a bit repetitive and automagical at times, which might be confusing. That said, maybe we should try them out a bit in the wild and see how it goes.
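The "standard methods, overridden where a Deleted* table exists" idea can be sketched like this. This is a toy, with in-memory hashes standing in for the real tables and a hardcoded borrowernumber key; the real code would go through DBIC, but the override shape is the point.

```perl
#!/usr/bin/perl
# Sketch only: a Koha::Object base class with a plain delete(), and a
# subclass that overrides it to archive the row into a Deleted* table
# first. Storage is faked with in-memory hashes so this runs stand-alone.
use strict;
use warnings;

# Fake "tables", keyed by primary key (borrowernumber here).
our %borrowers         = ( 1 => { borrowernumber => 1, surname => 'Jones' } );
our %deleted_borrowers = ();

package Koha::Object;

sub new {
    my ( $class, $row ) = @_;
    return bless { row => $row }, $class;
}

# Default behaviour: just remove the row. (Toy version: assumes the
# primary key is called borrowernumber.)
sub delete {
    my ($self) = @_;
    delete $self->table->{ $self->{row}{borrowernumber} };
    return 1;
}

sub table { die 'subclass must implement table()' }

package Koha::Borrower;
our @ISA = ('Koha::Object');

sub table { return \%main::borrowers }

# Override: copy into deletedborrowers before removing, mirroring how
# Koha moves biblio -> deletedbiblio and issues -> old_issues.
sub delete {
    my ($self) = @_;
    $main::deleted_borrowers{ $self->{row}{borrowernumber} } = $self->{row};
    return $self->SUPER::delete();
}

package main;

my $patron = Koha::Borrower->new( $borrowers{1} );
$patron->delete;
print scalar( keys %borrowers ), "\n";         # prints "0"
print $deleted_borrowers{1}{surname}, "\n";    # prints "Jones"
```

Classes without an archive table would just inherit the base delete() unchanged, which keeps the common case DRY while still allowing the Deletedbiblio/old_issues-style special cases.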
--
You are receiving this mail because:
You are watching all bug changes.
_______________________________________________
Koha-bugs mailing list
[email protected]
http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-bugs
website : http://www.koha-community.org/
git : http://git.koha-community.org/
bugs : http://bugs.koha-community.org/
