Thank you very much for all the suggestions!

As we do not use nginx (we use Apache), we didn't try Jason's first suggestion and proceeded directly to the second one - and it seems to have resolved the issue :-)!

A more detailed report:

We are on Debian 10 without any advanced security settings. It is a one-brick installation, and Vandelay's importer directory was originally pointed at the system /tmp directory. This didn't work. However, after changing it to a newly created directory, /openils/temp, which is fully owned by the opensrf user, it has started working :-).
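
For anyone hitting the same problem, this is roughly what we did (a sketch based on the description above; adjust paths and ownership to your installation):

    # create a temp directory owned by the opensrf user
    sudo mkdir -p /openils/temp
    sudo chown opensrf:opensrf /openils/temp

We then pointed open-ils.vandelay -> app_settings -> importer in /openils/conf/opensrf.xml at the new directory and restarted the OpenSRF services.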

Thank you once again!

Linda

On 2/4/21 7:12 PM, Daniel Wells wrote:
Jason's suggestions are very good.  To add to that, I'd be curious whether the unreadable file is there or not, and what the permissions might be.  Since the deletion of that file happens after the point where it errors out for you, the file should still be sitting there if you only have a read problem. If it isn't there, then you probably have a write problem (or a non-global "importer" dir), and your issue is actually earlier.

Naturally, you would want to test this relatively soon after the error occurs.
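
For example, something like this (using the file name from the error log quoted below; check soon after the error occurs):

    # is the temp file still there, and what are its permissions?
    ls -l /tmp/56feb3dd5296fc1f8e579bdc8b29dca7.mrc

    # can the opensrf user actually read it?
    sudo -u opensrf head -c 100 /tmp/56feb3dd5296fc1f8e579bdc8b29dca7.mrc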

Sincerely,
Dan


On Thu, Feb 4, 2021 at 12:54 PM Jason Stephenson <ja...@sigio.com> wrote:

    Hello, all.

    Here are a few things to check:

    1. If you're using nginx as a proxy, and the upload routinely fails on
    large files, check the client_max_body_size parameter in
    /etc/nginx/nginx.conf. Ours is set to 25m for 25 megabytes.
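
    For example, in the http block of nginx.conf (25m is just what we
    use; size it to your largest expected upload):

        http {
            # allow request bodies up to 25 MB
            client_max_body_size 25m;
        }

    Reload nginx after changing it (e.g. nginx -t && systemctl reload
    nginx).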

    2. In /openils/conf/opensrf.xml, look for
    open-ils.vandelay->app_settings->importer (see the snippet after
    this list). Make sure that

    a. the directory exists,
    b. it is hosted on shared storage, and
    c. it is accessible on all of your brick heads.
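
    In opensrf.xml the setting looks roughly like this (trimmed to the
    relevant element; a stock install ships with it pointing at /tmp):

        <open-ils.vandelay>
            ...
            <app_settings>
                ...
                <importer>/tmp</importer>
            </app_settings>
        </open-ils.vandelay>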

    It wouldn't hurt to make sure that it is not full. If it is, you
    can delete old files, and you may want to set up a cron job to
    remove old files routinely.
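
    For example (using the /openils/temp directory mentioned earlier in
    this thread as a stand-in for whatever your opensrf.xml points at):

        # on each brick head: does the directory exist, and is it
        # writable by the opensrf user? is the filesystem full?
        sudo -u opensrf touch /openils/temp/.write_test && echo OK
        df -h /openils/temp

        # example cron entry to purge import files older than 7 days:
        0 3 * * * find /openils/temp -type f -mtime +7 -delete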

    3. Check the max_stanza_size in /etc/ejabberd/ejabberd.yml. While
    it is supposed to no longer be necessary to change this setting
    because of chunking and bundling, there are still some places where
    chunking and bundling support is incomplete. If the above two
    options do not work, you may want to try setting this to 2097152 or
    higher. You will need to restart ejabberd, all OpenSRF servers, and
    Apache after making this change.
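
    In ejabberd.yml that would look something like this (the listener
    layout varies between configs, so treat it as a sketch):

        listen:
          -
            port: 5222
            module: ejabberd_c2s
            # raise the stanza size limit to 2 MB
            max_stanza_size: 2097152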

    HtH,
    Jason

    On 2/4/21 11:51 AM, Tina Ji wrote:
    >
    > I encountered similar issues. They happen randomly with files big
    > or small. I have not been able to figure out a pattern yet. It does
    > not appear to be a data issue.
    >
    >
    > 1. Uploading stalled. Sometimes it may finish when trying again
    > with a different MatchSet, but not always. Sometimes I had to break
    > the file into smaller chunks or recompile the file (MARCEdit).
    > Eventually the records were uploaded and imported.
    >
    > 2. Uploading appears to complete (100%), but some records are not
    > loaded into the queue. I had to count the records in the file and
    > compare that with the number of records in the queue every time.
    >
    > 3. Importing shows as complete, but some records are not imported.
    > Trying to import those records again within the queue never
    > succeeds. (I encountered another issue here: exporting non-imported
    > records stopped working for me. I am not sure yet whether it is a
    > local issue. TOO_MANY_REDIRECT.) Uploading and importing those few
    > records in a separate queue is successful.
    >
    > We were on 3.5, then recently rolled up to 3.5.1ish.
    >
    >
    >
    > Quoting Trevor Johnson <tjohn...@apls.state.al.us>:
    >
    >> Not sure why, but this issue happened for us when we upgraded our
    >> server to Ubuntu 18.04 (bionic). We're still trying to figure it
    >> out.
    >>
    >> From: Evergreen-general
    >> <evergreen-general-boun...@list.evergreen-ils.org> on behalf of
    >> Linda Jansová <linda.jans...@gmail.com>
    >> Reply-To: Evergreen Discussion Group
    >> <evergreen-general@list.evergreen-ils.org>
    >> Date: Thursday, February 4, 2021 at 2:00 AM
    >> To: "evergreen-general@list.evergreen-ils.org"
    >> <evergreen-general@list.evergreen-ils.org>
    >> Subject: [Evergreen-general] MARC batch import not processing
    >> uploaded files (in 3.5.1)
    >>
    >>
    >> Dear all,
    >>
    >> We seem to have an issue with the MARC batch import feature in
    >> Evergreen 3.5.1.
    >>
    >> To make sure it is not caused by incorrectly formatted data, we
    >> have – for testing purposes – used a MARCXML file exported from the
    >> same Evergreen instance where we encountered the issue (the bib
    >> record was exported using the web client MARC batch export
    >> feature). This XML file is attached.
    >>
    >> We have tried to import the file using admin accounts on the
    >> following community/test servers:
    >>
    >>   * https://demo.evergreencatalog.com/eg/staff/home (3.6.0) –
    >>     works okay
    >>   * https://bugsquash.mobiusconsortium.org/eg/staff (3.5.0) –
    >>     works okay
    >>   * our community server
    >>     https://evergreendemo.jabok.cuni.cz/eg/staff/ (3.5.1) – does
    >>     not work
    >>   * another of our test servers (3.5.1) – does not work
    >>
    >> When it works, the file is processed and the progress bars for
    >> all three stages (upload, enqueue, and import) reach 100 %; on our
    >> 3.5.1 instances we never get past the first progress bar. I'm also
    >> attaching screenshots from the Mobius community test server and
    >> from ours to clarify what I mean.
    >>
    >> Using the opensrf log, we have been able to find out the cause
    >> of the issue, as we were getting error messages like this one:
    >>
    >> [2021-02-01 21:34:29] open-ils.vandelay [ERR
    >> :7692:Vandelay.pm:272:1612211178912619] unable to read MARC file
    >> /tmp/56feb3dd5296fc1f8e579bdc8b29dca7.mrc
    >>
    >> These point to line 272 of Vandelay.pm
    >> (https://git.evergreen-ils.org/?p=Evergreen.git;a=blob;f=Open-ILS/src/perlmods/lib/OpenILS/Application/Vandelay.pm;hb=43a802ae2c56c9342bfd3a6a4556292d00760d3e).
    >>
    >>
    >> We are unsure whether it may be a permissions issue (if so, does
    >> anyone happen to know a complete list of Evergreen permissions
    >> needed for the MARC batch import to run and go through all the
    >> stages of data processing?) or something else entirely.
    >>
    >> Thank you in advance for any advice!
    >>
    >> Linda
    >
    >




_______________________________________________
Evergreen-general mailing list
Evergreen-general@list.evergreen-ils.org
http://list.evergreen-ils.org/cgi-bin/mailman/listinfo/evergreen-general
