Thanks, Jason.
Here is the information about our server, provided by Jeff:
1. We set client_max_body_size to 10M in nginx config.
2. The upload folder is /srv/openils/var/tmp, which is shared and
accessible on all app servers. It is starting to fill up but still has
plenty of room available.
3. We set max_stanza_size to 2000000 in ejabberd config.
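In case it helps anyone checking the same things, here is a rough way to
confirm those values from a shell on each app server (the paths are the
stock Debian/Ubuntu locations and may differ on your install):

  grep -R "client_max_body_size" /etc/nginx/
  grep "max_stanza_size" /etc/ejabberd/ejabberd.yml
  df -h /srv/openils/var/tmp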
Quoting Jason Stephenson <[email protected]>:
Hello, all.
Here are a few things to check:
1. If you're using nginx as a proxy, and the upload routinely fails on
large files, check the client_max_body_size parameter in
/etc/nginx/nginx.conf. Ours is set to 25m for 25 megabytes.
2. In /openils/conf/opensrf.xml, look for
open-ils.vandelay->app_settings->importer. Make sure that
a. the directory exists,
b. it is hosted on shared storage, and
c. it is accessible on all of your brick heads.
It wouldn't hurt to make sure that it is not full. If it is, you can
delete old files, and you may want to consider setting up a cron job to
routinely remove old files (see the sketch after this list).
3. Check the max_stanza_size in /etc/ejabberd/ejabberd.yml. While it is
supposed to no longer be necessary to change this setting because of
chunking and bundling, there are still some places where chunking and
bundling support is incomplete. If the above two options do not work,
you may want to try setting this to 2097152 or higher. You will need to
restart ejabberd, all OpenSRF servers, and Apache after making this change.
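For reference, the three settings above look roughly like this; the values
and the importer path are only examples, so substitute whatever your
installation actually uses:

  # /etc/nginx/nginx.conf (inside the http block)
  client_max_body_size 25m;

  # /openils/conf/opensrf.xml (under open-ils.vandelay -> app_settings)
  <importer>/srv/openils/var/tmp</importer>

  # /etc/ejabberd/ejabberd.yml
  max_stanza_size: 2097152

And a cron entry along these lines would handle the cleanup; the path and
the seven-day cutoff are examples only:

  # remove importer spool files older than 7 days
  0 3 * * * find /srv/openils/var/tmp -type f -mtime +7 -delete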
HtH,
Jason
On 2/4/21 11:51 AM, Tina Ji wrote:
I encountered similar issues. They happen randomly with files big or
small. I could not figure out a pattern yet. It does not appear to be a
data issue.
1. Uploading stalled. Sometimes it finished when trying again with a
different MatchSet, but not always. Sometimes I had to break the file into
smaller chunks or recompile the file (MARCEdit). Eventually the records
were uploaded and imported.
2. Uploading appears to be complete (100%), but some records are not
loaded into the queue. I had to count the records in the file and compare
that with the number of records in the queue every time (a quick way to
count is sketched after this list).
3. Importing shows as complete, but some records are not imported.
Trying to import those records again within the queue never succeeds. (I
encountered another issue here: exporting non-imported records stopped
working for me. I am not sure yet whether it's a local issue.
T00_MANY_REDIRECT). Uploading and importing those few records in a
separate queue is successful.
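For what it's worth, this is roughly how I count the records before
comparing against the queue (the file names are just examples; the binary
MARC count relies on the 0x1D record terminator):

  # MARCXML: count <record> elements
  grep -o "<record" file.xml | wc -l

  # binary MARC: count record terminators (octal 035 = 0x1D)
  tr -cd '\035' < file.mrc | wc -c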
We were on 3.5, then recently rolled up to 3.5.1ish.
Quoting Trevor Johnson <[email protected]>:
Not sure why, but this issue happened with us when we upgraded our
server to Ubuntu 18.04 (bionic). We’re still trying to figure it out.
From: Evergreen-general
<[email protected]> on behalf of Linda
Jansová <[email protected]>
Reply-To: Evergreen Discussion Group
<[email protected]>
Date: Thursday, February 4, 2021 at 2:00 AM
To: "[email protected]"
<[email protected]>
Subject: [Evergreen-general] MARC batch import not processing uploaded
files (in 3.5.1)
Dear all,
We seem to have an issue with the MARC batch import feature in
Evergreen 3.5.1.
To make sure it is not caused by data in an incorrect format, we have –
for testing purposes – used a MARCXML file exported from the same
Evergreen instance where we have encountered the issue (the bib record
was exported using the web client MARC batch export feature).
This XML file is attached.
We have tried to import the file using admin accounts to the following
community/test servers:
* https://demo.evergreencatalog.com/eg/staff/home (3.6.0) – works
okay
* https://bugsquash.mobiusconsortium.org/eg/staff (3.5.0) – works
okay
* our community server
https://evergreendemo.jabok.cuni.cz/eg/staff/ (3.5.1) – does not work
* another of our test servers (3.5.1) – does not work
When it works, the file is processed and the progress bars for all three
stages (upload, enqueue and import) reach 100%; in our 3.5.1 instances,
however, we get stuck at the first progress bar. I'm also attaching
screenshots from the Mobius community test server and from ours to
clarify what I mean.
Using the opensrf log, we have been able to narrow down where the issue
occurs, as we were getting error messages like this one:
[2021-02-01 21:34:29] open-ils.vandelay [ERR
:7692:Vandelay.pm:272:1612211178912619] unable to read MARC file
/tmp/56feb3dd5296fc1f8e579bdc8b29dca7.mrc
These point to line 272 of Vandelay.pm
(https://git.evergreen-ils.org/?p=Evergreen.git;a=blob;f=Open-ILS/src/perlmods/lib/OpenILS/Application/Vandelay.pm;hb=43a802ae2c56c9342bfd3a6a4556292d00760d3e).
We are unsure whether it may be a permissions issue (if so, does
anyone happen to know a complete list of Evergreen permissions needed
for the MARC batch import to run and go through all the stages of data
processing?) or something else entirely.
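If it helps with diagnosis, this is the kind of check we plan to run next,
assuming the stock install paths and that the services run as the opensrf
user (both of which are assumptions on my part):

  # which directory is Vandelay configured to spool uploads into?
  grep -n "importer" /openils/conf/opensrf.xml

  # can the service user write to that directory? (replace /tmp with
  # whatever the importer setting actually points to)
  sudo -u opensrf touch /tmp/vandelay_write_test && echo writable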
Thank you in advance for any advice!
Linda
--
Tina Ji
1-888-848-9250 ext 1014
Support Specialist
BC Libraries Co-operative
_______________________________________________
Evergreen-general mailing list
[email protected]
http://list.evergreen-ils.org/cgi-bin/mailman/listinfo/evergreen-general