If you want to load EAD via the backend API you can use the /jobs_with_files/ endpoint. You send it both JSON (specifying which job to run, with its parameters) and the EAD file(s) for the job to process.

Here is an example of a cURL command that starts a job to import an EAD file called *myeadfile.xml*, in the current local directory, into an ArchivesSpace system running on a machine called *myhost*:


curl -H "X-ArchivesSpace-Session: *MYTOKEN*" -F "job={\"jsonmodel_type\": \"job\", \"status\": \"queued\", \"job_type\": \"import_job\", \"job\": {\"jsonmodel_type\": \"import_job\", \"filenames\": [\"*myeadfile.xml*\"], \"import_type\": \"ead_xml\"}}" -F "files[]=@*myeadfile.xml*" http://*myhost*:*8089*/repositories/*2*/jobs_with_files
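For readability, the escaped "job" form parameter in that command is the following JSON (with the same placeholder filename):

```json
{
  "jsonmodel_type": "job",
  "status": "queued",
  "job_type": "import_job",
  "job": {
    "jsonmodel_type": "import_job",
    "filenames": ["myeadfile.xml"],
    "import_type": "ead_xml"
  }
}
```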


That assumes the backend is set up to listen on port 8089 (and isn't blocked by a firewall between you and it), and that you want any records created to belong to repository number 2. Change those as appropriate.


*MYTOKEN* is a session token, which you can get from the response to this:


read -p "Username: " USER; read -s -p "Password: " PASS; curl -Fpassword=$PASS http://*myhost*:*8089*/users/$USER/login
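The login response is JSON, and on a successful login the token is in its "session" field. A minimal sketch of extracting it — the response string here is made up for illustration:

```shell
# Hedged sketch: pull the "session" field out of a login response.
# RESPONSE stands in for the JSON the login call above returns.
RESPONSE='{"session": "abc123"}'
TOKEN=$(printf '%s' "$RESPONSE" \
  | python3 -c 'import sys, json; print(json.load(sys.stdin)["session"])')
echo "$TOKEN"
```

In practice you would pipe the output of the curl login command straight into the extraction step and pass $TOKEN in the X-ArchivesSpace-Session header.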


You could probably specify multiple EAD files, but it is safer to stick to one file per import job.


The HTTP response gives only the status of the job, usually "Created". It doesn't tell you whether the import succeeded, nor the ID(s) of any records created. I've only ever used this to populate test systems, where a few failures didn't matter. The jobs and their logs can be viewed in the staff interface.
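If you do want to check progress programmatically rather than in the staff interface, you can fetch the job record back from the backend and look at its "status" field. This is only a sketch under assumptions: the job URI (/repositories/2/jobs/123) is hypothetical, and the exact status values your jobs report should be confirmed against your own system.

```shell
# Hedged sketch: read the "status" field from a fetched job record.
# The helper name is made up; "completed"/"failed" values are assumptions.
job_status() {
  printf '%s' "$1" \
    | python3 -c 'import sys, json; print(json.load(sys.stdin)["status"])'
}

# Polling loop (server details assumed, not runnable as-is):
# while :; do
#   S=$(job_status "$(curl -s -H "X-ArchivesSpace-Session: $TOKEN" \
#         http://myhost:8089/repositories/2/jobs/123)")
#   case "$S" in completed|failed) break ;; esac
#   sleep 5
# done
job_status '{"status": "queued"}'
```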


Andrew.




On 07/05/2020 15:44, James R Griffin III wrote:
I am terribly sorry, I have now found that the payload is a JSON serialization of an array of resource objects: https://github.com/archivesspace/archivesspace/blob/master/backend/spec/controller_batch_import_spec.rb#L41

Would one then model the payload as one of these arrays of JSON objects, each with a URI internal to the EAD file?
------------------------------------------------------------------------
*From:* James R Griffin III
*Sent:* Thursday, May 7, 2020 10:30 AM
*To:* Archivesspace Users Group <archivesspace_users_group@lyralists.lyrasis.org>
*Subject:* Question Regarding the REST API batch_imports Operation
Hello Everyone,

I have recently been reviewing the documentation for the REST API, and was looking to explore the possible usage of https://archivesspace.github.io/archivesspace/api/#import-a-batch-of-records

Please forgive my ignorance, but does the body of the POST request contain a payload of EAD XML? Would this be a string of concatenated EAD documents?

Additionally, my understanding is that the response from this request contains an identifier for the created object. As the job for importing the records would be asynchronous, should one poll for the status of this new object by repeatedly transmitting a GET request against https://archivesspace.github.io/archivesspace/api/#find-resources-by-their-identifiers until it has been fully imported?

Thank you for your patience and assistance.

Sincerely,
James

--

my.pronoun.is/he

James R. Griffin III

Digital Infrastructure Developer

Princeton University Library <https://library.princeton.edu/>

Princeton, NJ 08544



_______________________________________________
Archivesspace_Users_Group mailing list
Archivesspace_Users_Group@lyralists.lyrasis.org
http://lyralists.lyrasis.org/mailman/listinfo/archivesspace_users_group