--- Comment #112 from David Cook <dc...@prosentient.com.au> ---
(In reply to Andreas Hedström Mace from comment #111)
> Yes, I definitely think this will be good enough for now!!! Getting a better
> matching and/or storing the incoming data to me sounds like future
I spent some time today on these enhancements as they make the import much
faster and more robust*, and I would've needed to do them soon for the RDFXML
OAI-PMH downloads anyway**.
I've ditched the import batches and I'm doing adds/updates/deletes more
directly. If we need to track changes to bibliographic metadata records in
Koha, I think it would make more sense to look at new functionality than to
rely on the existing batch system, which has issues.
In any case, I'll be working more on this early next week, and hopefully
posting the API code here then.
*most matching will be done based on OAI-PMH identifier URI and repository URI
in the database, so we won't need to worry about Zebra issues. However, in the
event that there is no matching OAI-PMH identifier and repository URI, the API
can be given an optional matcher code, and Koha's Zebra-based matcher will be
used to find a match.
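To make the intended flow concrete, here is a minimal sketch of that matching logic in Python. The function and parameter names are purely illustrative, not Koha's actual API: the database lookup keys on the (OAI-PMH identifier URI, repository URI) pair, and the Zebra-based matcher is only consulted as a fallback when an optional matcher is supplied.

```python
# Hypothetical sketch of the matching flow; names are illustrative,
# not Koha's real code.
def find_matching_record(db, record, matcher=None):
    """Match an incoming record against existing bibliographic records.

    First try a direct lookup on the (OAI-PMH identifier URI,
    repository URI) pair; only if that fails, and an optional matcher
    was supplied, fall back to a Zebra-style search.
    """
    key = (record.get("oai_identifier"), record.get("repository_uri"))
    if all(key):
        match = db.get(key)
        if match is not None:
            return match
    # No database hit: defer to the optional Zebra-based matcher, if any.
    if matcher is not None:
        return matcher(record)
    return None


# Toy usage with a dict standing in for the database table.
db = {("oai:repo:1", "http://repo.example.org"): "biblio-42"}
rec = {"oai_identifier": "oai:repo:1",
       "repository_uri": "http://repo.example.org"}
print(find_matching_record(db, rec))  # the stored match
```

The point of keying on the URI pair first is exactly what the footnote says: the common case never touches Zebra at all.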
**The RDFXML OAI-PMH downloads will make use of this database-based matching,
although if there's no OAI-PMH identifier URI and repository URI for that
downloaded record, it'll be an error state, since only MARCXML can be used with
the Zebra-based matcher. We may have to talk more about that at some point,
although I suppose it's out of the scope of this bug report anyway.
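The error-state rule for the RDFXML downloads could be guarded roughly like this. Again, this is a hedged Python sketch with invented names, not Koha code: a database match succeeds regardless of format, but the Zebra fallback is only reachable for MARCXML, so an RDFXML record with no URI pair is an error.

```python
# Hypothetical sketch of the format guard described above.
def resolve_match(db_match, record_format):
    """Return the database match if there is one; otherwise only
    MARCXML records may proceed to Zebra-based matching."""
    if db_match is not None:
        return db_match
    if record_format != "marcxml":
        raise ValueError(
            "no OAI-PMH identifier/repository URI match, and "
            "Zebra-based matching is MARCXML-only"
        )
    return "try-zebra"
```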
Koha-bugs mailing list
website : http://www.koha-community.org/
git : http://git.koha-community.org/
bugs : http://bugs.koha-community.org/