https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=27365

--- Comment #37 from David Cook <[email protected]> ---
(In reply to Thomas Klausner from comment #36)
> (In reply to Martin Renvoize from comment #35)
> > I suppose it's not as clear cut as we're making out here is it? 
> > MARC::Record allows dealing with ISO 2709 MARC and MARCXML, and those differ
> > in their ability to handle large records.
> 
> Yes, so I think that MARC::File::USMARC should properly handle too large
> data (which MARC::File::XML can handle) via providing an option to truncate
> too large data or die when trying to encode it.

+1

> Koha itself should switch to use MARCXML to store the whole record in the ES
> index (unless we don't need the whole record in the index at all)

Personally, I don't think we need the whole record in the index at all, but
that's arguably a separate issue. 

If we use MARCXML instead of ISO MARC, I realized there will be a side effect:
unless we base64-encode it, the whole record becomes queryable via a keyword
search. (I recently noted that we don't have an "Any" index in ES like we do
with Zebra.)
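To illustrate the idea (a Python sketch for easy running, not Koha code; the assumption is that ES full-text analyzers tokenize whatever string we store, so an opaque base64 blob won't match keyword queries while the XML text would):

```python
import base64

# A trivial stand-in for a stored MARCXML blob.
marcxml = '<record><leader>00000nam a2200000 a 4500</leader></record>'

# Base64-encoding before indexing makes the stored value opaque to
# text analyzers, so record contents don't leak into keyword searches.
encoded = base64.b64encode(marcxml.encode("utf-8")).decode("ascii")

# The original record is still fully recoverable when we need it back.
decoded = base64.b64decode(encoded).decode("utf-8")
assert decoded == marcxml
```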

(In reply to Martin Renvoize from comment #35)
> I suppose it's not as clear cut as we're making out here is it? 
> MARC::Record allows dealing with ISO 2709 MARC and MARCXML, and those differ
> in their ability to handle large records.  If we fix the ISO representation
> to truncate it throw exceptions won't that also affect the MARCXML round
> tripping?   Can we definitely have it work one way for one and another for
> the other?   I suppose we would basically need to ensure the core store is
> in MARCXML and the ISO output is the but that truncated or errors..  but is
> that inside MARC::Record or should that be handled at the Koha layer.. 
> certainly the iso parts should do better at highlighting the issues however,
> and certainly the encoding fun you identified needs fixing.

So MARCXML->MARC->MARCXML record roundtripping for large records is already a
problem, because the process generates invalid MARC records, which can't be
read back into MARCXML. That's the issue we're trying to address. 
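The structural limits behind this are fixed by ISO 2709 itself: the leader stores the total record length in 5 ASCII digits and each directory entry stores a field's length in 4 digits, so oversized data simply cannot be encoded faithfully. A minimal Python sketch of the constraint (not MARC::Record's actual logic):

```python
# ISO 2709 limits: 5-digit record length in the leader (positions 0-4),
# 4-digit "length of field" in each directory entry.
MAX_RECORD_LEN = 99999
MAX_FIELD_LEN = 9999

def field_fits(data: str) -> bool:
    """True if this field's data can be represented in a directory entry."""
    return len(data.encode("utf-8")) <= MAX_FIELD_LEN

big_field = "x" * 12000
print(field_fits(big_field))  # False: a writer that emits it anyway
                              # produces a record that can't round-trip
```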

If MARC::Record threw an exception for MARCXML->MARC conversions where a valid
record can't be generated, we would never have wound up in the mess we're
currently in, haha.

Of course, MARC::Record isn't the only one guilty of this. I'm trying to get
MarcEdit fixed, because I have a different scenario where it produces invalid
MARC records too. 

So in terms of MARC::Record vs Koha... I think MARC::Record should throw an
exception, and Koha should handle that exception. Personally, the only time we
should ever be using ISO MARC is as an export. And in that situation we can say
"Sorry! Unable to export this record as an ISO MARC file due to field size
limitations! Please try another format!" or something like that.
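A sketch of that division of labour (Python for illustration only; the function and exception names are hypothetical, not Koha's or MARC::Record's actual API):

```python
# Hypothetical: the serializer raises instead of emitting invalid ISO 2709,
# and the application layer catches it and tells the user to pick another
# export format.
MAX_RECORD_LEN = 99999  # 5-digit record length in the ISO 2709 leader

class MarcTooLargeError(Exception):
    """Raised when a record can't be validly serialized as ISO 2709."""

def export_iso2709(marc_bytes: bytes) -> bytes:
    if len(marc_bytes) > MAX_RECORD_LEN:
        raise MarcTooLargeError("record exceeds ISO 2709 size limits")
    return marc_bytes

try:
    export_iso2709(b"x" * 120000)
except MarcTooLargeError:
    print("Sorry! Unable to export this record as an ISO MARC file "
          "due to field size limitations! Please try another format!")
```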

-- 
You are receiving this mail because:
You are the assignee for the bug.
You are watching all bug changes.
_______________________________________________
Koha-bugs mailing list
[email protected]
https://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-bugs
website : http://www.koha-community.org/
git : http://git.koha-community.org/
bugs : http://bugs.koha-community.org/
