es R Griffin III
Reply-To: Archivesspace Users Group
Date: Thursday, May 7, 2020 at 10:44 AM
To: Archivesspace Users Group
Subject: Re: [Archivesspace_Users_Group] Question Regarding the REST API batch_imports Operation
If you want to load EAD via the backend API, you can use the
/jobs_with_files endpoint. You send it both JSON (to specify which job
to run and give it the necessary parameters) and the EAD file for the
job to process.
Here is an example of a cURL command to start a job to import an EAD file:
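(A sketch, not the original example from the thread: the endpoint path and job JSON are assumed from the ArchivesSpace backend API documentation; the host, repository id, session token, and file name are placeholders, and the exact shape of the `job` JSON can vary between ArchivesSpace versions.)

```shell
# Log in first to obtain a session token, e.g.:
#   curl -s -F password="admin" "http://localhost:8089/users/admin/login"
SESSION="your-session-token"   # placeholder

curl -s \
  -H "X-ArchivesSpace-Session: $SESSION" \
  -F 'job={"jsonmodel_type":"job","job_type":"import_job","job":{"jsonmodel_type":"import_job","import_type":"ead_xml","filenames":["my_finding_aid.xml"]}}' \
  -F "files[]=@my_finding_aid.xml" \
  "http://localhost:8089/repositories/2/jobs_with_files"
```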
(In the backend:)
EADConverter takes an EAD file (.xml) and produces a JSON file of records:
archival_objects, a parent resource record, and possibly global subjects and
agents. I'm not sure if there is a backend API call to do just this initial
step, but you can run it from pry/irb with:
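(Another sketch: the converter method names below are assumed from the converter code under backend/app/converters; the file path is a placeholder.)

```ruby
# In a pry/irb session inside the ArchivesSpace backend:
converter = EADConverter.new("/path/to/finding_aid.xml")  # placeholder path
converter.run
# The converted JSON batch is written to a temporary file:
puts File.read(converter.get_output_path)
```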
I am terribly sorry, I have now found that the payload is a JSON serialization
of an array of resource objects:
https://github.com/archivesspace/archivesspace/blob/master/backend/spec/controller_batch_import_spec.rb#L41
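For orientation, a minimal sketch of what one element of that array might look like (field names assumed from the resource JSONModel schema; the URI, identifier, and values are placeholders):

```json
[
  {
    "jsonmodel_type": "resource",
    "uri": "/repositories/2/resources/import_1",
    "title": "Example Papers",
    "id_0": "MS-001",
    "level": "collection",
    "extents": [
      { "jsonmodel_type": "extent", "portion": "whole",
        "number": "1", "extent_type": "linear_feet" }
    ]
  }
]
```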
Would one then model the payload as one of these arrays of JSON objects?
Hello Everyone,
I have recently been reviewing the documentation for the REST API, and was
looking to explore the possible usage of
https://archivesspace.github.io/archivesspace/api/#import-a-batch-of-records
Please forgive my ignorance, but does the body of the POST request contain a