Re: [Archivesspace_Users_Group] Checking for Broken URLs in Resources

2021-02-10 Thread David W. Hodges
Corey,

The process Kevin mentioned is in our repo here:

https://github.com/cul/rbml-archivesspace/tree/master/ead_link_checker

I think this is a few steps short of what you have in mind but maybe it
will give you some code snippets to adapt. As Kevin mentioned, we have our
entire corpus exported to EAD daily to back our publishing platform, so it
makes an easy source to mine data via XSLT. Given a folder of EAD, the XSLT
pulls out the xlink info, and the Python script does status checks and
reports the results to a spreadsheet. As we have many links to DOIs and
resolvers, the redirect locations and status checks were useful. We did
this as a one-time audit rather than a continuous monitor, but I could see
how one might automate a link checker to report problems as they come up.
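
If it helps, the checking step boils down to something like the sketch below. This is not the code in our repo; the file names and CSV columns are just illustrative:

# Minimal sketch of a status/redirect audit; file names and columns are
# illustrative, not the ones in our repository.
import csv
import requests
from urllib.parse import urljoin

with open("urls.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

with open("link_report.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["url", "status", "redirect_location", "redirect_status"])
    for url in urls:
        try:
            # Don't chase redirects automatically; record where they point.
            r = requests.head(url, allow_redirects=False, timeout=10)
            location = r.headers.get("Location", "")
            redirect_status = ""
            if location:
                # Location may be relative, so resolve it against the original.
                location = urljoin(url, location)
                redirect_status = requests.head(
                    location, allow_redirects=True, timeout=10).status_code
            writer.writerow([url, r.status_code, location, redirect_status])
        except requests.RequestException as err:
            writer.writerow([url, "error", str(err), ""])

HEAD requests keep the audit fast, though a few servers mishandle HEAD; swapping in requests.get(url, stream=True) is the usual fallback.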

I'd be happy to discuss further if it is helpful. Best of luck!

David

On Wed, Feb 10, 2021 at 10:44 AM Corey Schmidt 
wrote:

> Kevin,
>
> I'd be very interested to get your code, especially with 301 and 302
> redirects. My initial runs have resulted in redirects stalling the status
> code response, preventing my program from moving forward. Getting the
> status code of the link being redirected to would be a big help.
>
> I should also mention that any solution I make needs to be replicable to
> other faculty/staff in our library - so access to the database may not be
> consistent in the future. I'm thinking of a scenario where 5 years from
> now, some people may want to run this report again but I may not be working
> for the library anymore. If that's the case, working to export from
> ArchivesSpace might be the better long-term, no-fuss solution.
>
> Corey
> --
> *From:* archivesspace_users_group-boun...@lyralists.lyrasis.org <
> archivesspace_users_group-boun...@lyralists.lyrasis.org> on behalf of
> Kevin W. Schlottmann 
> *Sent:* Wednesday, February 10, 2021 10:27 AM
> *To:* Archivesspace Users Group <
> archivesspace_users_group@lyralists.lyrasis.org>
> *Subject:* Re: [Archivesspace_Users_Group] Checking for Broken URLs in
> Resources
>
>
> Hi Corey,
>
> Earlier this year we did something similar. We started by extracting all
> xlink:href from the EAD corpus using XSLT. (EAD is our go-to data source,
> since we keep a constantly updated set that we use to automatically
> publish our finding aids.) We did not do an additional regex
> search for non-encoded URLs, but that's not a bad idea.  The result was
> over 15,000 links.  Each was recorded with bibid, container id (if any),
> and link text and title (if any). To check them, my colleague ran them
> through a Python script to get the response code, and if the response was
> 301/302, retrieve the redirect location and secondary response. This
> produced some interesting results, and resulted in a fair amount of
> remediation work to do.
>
> If this sounds on point, I can try to find and share the code we used.
>
> Kevin
>
> On Wed, Feb 10, 2021 at 9:34 AM Corey Schmidt 
> wrote:
>
> Nancy,
>
> I have access to our staging database, but not production. I'm not sure
> our sysadmins will allow me to play around in the prod database, unless
> they can assign me read-only access, maybe? Pulling the file_uri values
> from the file_version table would be much more efficient. However, I'm not
> just looking to check digital object links, but also any links found within
> collection and archival object level notes, either copied straight into the
> text of the notes or linked using the  tag. I could probably query the
> database for that info too.
>
> Corey
> --
> *From:* archivesspace_users_group-boun...@lyralists.lyrasis.org <
> archivesspace_users_group-boun...@lyralists.lyrasis.org> on behalf of
> Kennedy, Nancy 
> *Sent:* Wednesday, February 10, 2021 9:18 AM
> *To:* Archivesspace Users Group <
> archivesspace_users_group@lyralists.lyrasis.org>
> *Subject:* Re: [Archivesspace_Users_Group] Checking for Broken URLs in
> Resources
>
>
> Hi Corey –
>
> Do you have access to query the database, as a starting point, instead of
> EAD?  We were able to pull the file_uri values from the file_version table
> in the database.  Our sysadmin then checked the response codes for that
> list of URI, and we referred issues out to staff working on those
> collections.  Some corrections can be made directly by staff, or for long
> lists, you could include the digital_object id and post updates that way.
>
>
>
> Nancy
>
>
>
>
>
> *From:* archivesspace_users_group-boun...@lyralists.lyrasis.org <
> archivesspace_users_group-boun...@lyralists.lyrasis.org> *On Behalf Of *Corey
> Schmidt
> *Sent:* Wednesday, February 10, 2021 8:45 AM
> *To:* archivesspace_users_group@lyralists.lyrasis.org
> *Subject:* [Archivesspace_Users_Group] Checking for Broken URLs in
> Resources
>

Re: [Archivesspace_Users_Group] Duplicated fields in resources bug?

2021-02-10 Thread Kennedy, Nancy
Hi all,
We’ve just encountered this issue where dates/extents are repeating after 
saving a Resource.  User was editing Notes, and then the dates and extents 
copied 4 times each.  The user was able to manually remove the extra copies.

Does anyone know what causes this, or if there is a solution in 2.7.1?

Thanks,
Nancy



From: archivesspace_users_group-boun...@lyralists.lyrasis.org 
 On Behalf Of Kevin W. 
Schlottmann
Sent: Friday, October 16, 2020 10:41 AM
To: Archivesspace Users Group 
Subject: Re: [Archivesspace_Users_Group] Duplicated fields in resources bug?

Hi Mark,

We are on 2.8.0, and haven't seen this error recur.

Kevin

On Fri, Oct 16, 2020 at 10:03 AM Custer, Mark <mark.cus...@yale.edu> wrote:
All,

Has anyone diagnosed this issue yet and/or determined if it's present in 2.8.x?

We recently upgraded to 2.7.1, and we've since been greeted by this bug at 
least twice (though I suspect it's visited unnoticed more frequently).  In each 
case, the following subrecord types are duplicated when the resource record is 
saved:  language, date, and extent.  Picture attached, which shows three 
duplicate pairs for the dates and three duplicates for the extent.

I haven't tried to ferret out additional affected records in our database just 
yet, so I'm curious if anyone has noticed this happening on other record types, 
like archival objects and digital objects, or if this has been isolated to 
resource records.

Mark




From: archivesspace_users_group-boun...@lyralists.lyrasis.org on behalf of Kevin W. Schlottmann <kws2...@columbia.edu>
Sent: Wednesday, June 3, 2020 9:44 AM
To: Archivesspace Users Group <archivesspace_users_group@lyralists.lyrasis.org>
Subject: Re: [Archivesspace_Users_Group] Duplicated fields in resources bug?

Thanks Andrew, that could be a useful lead.

On Wed, Jun 3, 2020 at 3:57 AM Andrew Morrison <andrew.morri...@bodleian.ox.ac.uk> wrote:

Something else that gets duplicated is ARKs, if you have them turned on:

https://archivesspace.atlassian.net/browse/ANW-1060

But they're generated for you, so this could be an unrelated bug.

Andrew.


On 02/06/2020 17:48, Kevin W. Schlottmann wrote:
Thanks Dan.  I'm glad (I guess?) to hear that it's not unique to our instance.

On Mon, Jun 1, 2020 at 2:52 PM Daniel Michelson <dmichel...@smith.edu> wrote:
Hi Kevin,

I'm sure this isn't much help, but we've also encountered this bug on one 
resource record in 2.7.1.  We could not determine what might have caused it.

Dan

On Mon, Jun 1, 2020 at 2:27 PM Kevin W. Schlottmann <kws2...@columbia.edu> wrote:
Hi all,

We are chasing an odd bug where certain fields are being inadvertently 
duplicated by AS when some other edit action is taken on the resource record.  
The fields being duplicated are language notes, extent notes, and date notes at 
the resource-level.

We are unable to replicate the bug reliably, on any given record, or for a 
record on a Test server where we see the behavior on Production. We are running 
2.7.0, hosted by Lyrasis, with limited customizations.

Has anyone come across this or a similar bug?

Kevin

--
Kevin Schlottmann
Head of Archives Processing
Rare Book & Manuscript Library
Butler Library, Room 801
Columbia University
535 W. 114th St., New York, NY  10027
(212) 854-8483


--
Dan Michelson
Project Manager Archivist
Smith College Special Collections


The Special Collections reading room is closed. As of March 18th, all Special 
Collections staff will be working remotely. Smith course support is our primary 
responsibility at this time.


Minimal reference services will be managed remotely by staff.

Re: [Archivesspace_Users_Group] Checking for Broken URLs in Resources

2021-02-10 Thread Kennedy, Nancy
Hi Corey -
Do you have access to query the database, as a starting point, instead of EAD?  
We were able to pull the file_uri values from the file_version table in the 
database.  Our sysadmin then checked the response codes for that list of URI, 
and we referred issues out to staff working on those collections.  Some 
corrections can be made directly by staff, or for long lists, you could include 
the digital_object id and post updates that way.
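
The pull itself is a short query. A rough sketch, assuming read-only MySQL access; the connection details and the join are illustrative, not our sysadmin's exact script:

# List each digital object identifier with its file_uri.
import pymysql

conn = pymysql.connect(host="localhost", user="as_readonly",
                       password="...", database="archivesspace")
with conn.cursor() as cur:
    cur.execute("""
        SELECT d.digital_object_id, fv.file_uri
        FROM file_version fv
        JOIN digital_object d ON fv.digital_object_id = d.id
    """)
    for digital_object_id, file_uri in cur.fetchall():
        print(digital_object_id, file_uri)
conn.close()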

Nancy


From: archivesspace_users_group-boun...@lyralists.lyrasis.org 
 On Behalf Of Corey 
Schmidt
Sent: Wednesday, February 10, 2021 8:45 AM
To: archivesspace_users_group@lyralists.lyrasis.org
Subject: [Archivesspace_Users_Group] Checking for Broken URLs in Resources

Dear all,

Hello, this is Corey Schmidt, ArchivesSpace PM at the University of Georgia. I 
hope everyone is doing well and staying safe and healthy.

Would anyone know of any script, plugin, or tool to check for invalid URLs 
within resources? We are investigating how to grab URLs from exported EAD.xml 
files and check them to determine if they throw back any sort of error (404s 
mostly, but also any others). My thinking is to build a small app that will 
export EAD.xml files from ArchivesSpace, then sift through the raw xml using 
python's lxml package to catch any URLs using regex. After capturing the URL, 
it would then use the requests library to check the status code of the URL and 
if it returns an error, log that error in a .CSV output file to act as a 
"report" of all the broken links within that resource.

The problems with this method are: 1. Exporting 1000s of resources takes a lot 
of time and some processing power, as well as a moderate amount of local 
storage space. 2. Even checking the raw xml file takes a considerable amount of 
time. The app I'm working on takes overnight to export and check all the xml 
files. I was considering pinging the API for different parts of a resource, but 
I figured that would take as much time as just exporting an EAD.xml and would 
be even more complex to write. I've checked Awesome ArchivesSpace, this 
listserv, and a few script libraries from institutions, but haven't found 
exactly what I am looking for.

Any info or advice would be greatly appreciated! Thanks!

Sincerely,

Corey

Corey Schmidt
ArchivesSpace Project Manager
University of Georgia Special Collections Libraries
Email: corey.schm...@uga.edu


Re: [Archivesspace_Users_Group] Checking for Broken URLs in Resources

2021-02-10 Thread Corey Schmidt
Kevin,

I'd be very interested to get your code, especially with 301 and 302 redirects. 
My initial runs have resulted in redirects stalling the status code response, 
preventing my program from moving forward. Getting the status code of the link 
being redirected to would be a big help.
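
For reference, the pattern I've been sketching to get around the stall looks roughly like this; it's untested placeholder code, not my working script:

# Sketch: get the 301/302 status itself instead of chasing it.
import requests

def check_once(url):
    # A (connect, read) timeout stops a slow host from hanging the run, and
    # allow_redirects=False returns the redirect status code immediately.
    resp = requests.get(url, allow_redirects=False, timeout=(5, 15), stream=True)
    status = resp.status_code
    target = resp.headers.get("Location")  # where the redirect points, if any
    resp.close()
    return status, target

The Location value could then be checked with a second, fully-followed request, which sounds like what your script already does.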

I should also mention that any solution I make needs to be replicable to other 
faculty/staff in our library - so access to the database may not be consistent 
in the future. I'm thinking of a scenario where 5 years from now, some people 
may want to run this report again but I may not be working for the library 
anymore. If that's the case, working to export from ArchivesSpace might be the 
better long-term, no-fuss solution.

Corey

From: archivesspace_users_group-boun...@lyralists.lyrasis.org 
 on behalf of Kevin W. 
Schlottmann 
Sent: Wednesday, February 10, 2021 10:27 AM
To: Archivesspace Users Group 
Subject: Re: [Archivesspace_Users_Group] Checking for Broken URLs in Resources

Hi Corey,

Earlier this year we did something similar. We started by extracting all 
xlink:href from the EAD corpus using XSLT. (EAD is our go-to data source, 
since we keep a constantly updated set that we use to automatically publish 
our finding aids.) We did not do an additional regex search for
non-encoded URLs, but that's not a bad idea.  The result was over 15,000 links. 
 Each was recorded with bibid, container id (if any), and link text and title 
(if any). To check them, my colleague ran them through a Python script to get 
the response code, and if the response was 301/302, retrieve the redirect 
location and secondary response. This produced some interesting results, and 
resulted in a fair amount of remediation work to do.

If this sounds on point, I can try to find and share the code we used.

Kevin

On Wed, Feb 10, 2021 at 9:34 AM Corey Schmidt <corey.schm...@uga.edu> wrote:
Nancy,

I have access to our staging database, but not production. I'm not sure our 
sysadmins will allow me to play around in the prod database, unless they can 
assign me read-only access, maybe? Pulling the file_uri values from the 
file_version table would be much more efficient. However, I'm not just looking 
to check digital object links, but also any links found within collection and 
archival object level notes, either copied straight into the text of the notes 
or linked using the  tag. I could probably query the database for that info too.

Corey

From: archivesspace_users_group-boun...@lyralists.lyrasis.org on behalf of Kennedy, Nancy <kenne...@si.edu>
Sent: Wednesday, February 10, 2021 9:18 AM
To: Archivesspace Users Group <archivesspace_users_group@lyralists.lyrasis.org>
Subject: Re: [Archivesspace_Users_Group] Checking for Broken URLs in Resources


Hi Corey –

Do you have access to query the database, as a starting point, instead of EAD?  
We were able to pull the file_uri values from the file_version table in the 
database.  Our sysadmin then checked the response codes for that list of URI, 
and we referred issues out to staff working on those collections.  Some 
corrections can be made directly by staff, or for long lists, you could include 
the digital_object id and post updates that way.



Nancy





From: archivesspace_users_group-boun...@lyralists.lyrasis.org On Behalf Of Corey Schmidt
Sent: Wednesday, February 10, 2021 8:45 AM
To: archivesspace_users_group@lyralists.lyrasis.org
Subject: [Archivesspace_Users_Group] Checking for Broken URLs in Resources



Dear all,

Hello, this is Corey Schmidt, ArchivesSpace PM at the University of Georgia. I 
hope everyone is doing well and staying safe and healthy.

Would anyone know of any script, plugin, or tool to check for invalid URLs 
within resources? We are investigating how to grab URLs from exported EAD.xml 
files and check them to determine if they throw back any sort of error (404s 
mostly, but also any others). My thinking is to build a small app that will 
export EAD.xml files from ArchivesSpace, then sift through the raw xml using 
python's lxml package to catch any URLs using regex. After capturing the URL, 
it would then use the requests library to check the status code of the URL and 
if it returns an error, log that error in a .CSV output file to act as a 
"report" of all the broken links within that resource.


Re: [Archivesspace_Users_Group] Checking for Broken URLs in Resources

2021-02-10 Thread Kevin W. Schlottmann
Hi Corey,

Earlier this year we did something similar. We started by extracting all
xlink:href from the EAD corpus using XSLT. (EAD is our go-to data source,
since we keep a constantly updated set that we use to automatically
publish our finding aids.) We did not do an additional regex
search for non-encoded URLs, but that's not a bad idea.  The result was
over 15,000 links.  Each was recorded with bibid, container id (if any),
and link text and title (if any). To check them, my colleague ran them
through a Python script to get the response code, and if the response was
301/302, retrieve the redirect location and secondary response. This
produced some interesting results, and resulted in a fair amount of
remediation work to do.
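
For anyone who'd rather skip the XSLT step, the extraction can be roughed out in Python with lxml instead. A sketch only, not the script we used; the folder name is arbitrary and it records generic tag context rather than our bibid/container fields:

# Pull every xlink:href out of a folder of EAD files.
import glob
from lxml import etree

XLINK_HREF = "{http://www.w3.org/1999/xlink}href"

for path in glob.glob("ead/*.xml"):
    tree = etree.parse(path)
    for el in tree.iter():
        href = el.get(XLINK_HREF)
        if href:
            # Element name plus any text gives minimal context for a report.
            tag = etree.QName(el).localname
            text = (el.text or "").strip()
            print("\t".join([path, tag, text, href]))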

If this sounds on point, I can try to find and share the code we used.

Kevin

On Wed, Feb 10, 2021 at 9:34 AM Corey Schmidt  wrote:

> Nancy,
>
> I have access to our staging database, but not production. I'm not sure
> our sysadmins will allow me to play around in the prod database, unless
> they can assign me read-only access, maybe? Pulling the file_uri values
> from the file_version table would be much more efficient. However, I'm not
> just looking to check digital object links, but also any links found within
> collection and archival object level notes, either copied straight into the
> text of the notes or linked using the  tag. I could probably query the
> database for that info too.
>
> Corey
> --
> *From:* archivesspace_users_group-boun...@lyralists.lyrasis.org <
> archivesspace_users_group-boun...@lyralists.lyrasis.org> on behalf of
> Kennedy, Nancy 
> *Sent:* Wednesday, February 10, 2021 9:18 AM
> *To:* Archivesspace Users Group <
> archivesspace_users_group@lyralists.lyrasis.org>
> *Subject:* Re: [Archivesspace_Users_Group] Checking for Broken URLs in
> Resources
>
>
> Hi Corey –
>
> Do you have access to query the database, as a starting point, instead of
> EAD?  We were able to pull the file_uri values from the file_version table
> in the database.  Our sysadmin then checked the response codes for that
> list of URI, and we referred issues out to staff working on those
> collections.  Some corrections can be made directly by staff, or for long
> lists, you could include the digital_object id and post updates that way.
>
>
>
> Nancy
>
>
>
>
>
> *From:* archivesspace_users_group-boun...@lyralists.lyrasis.org <
> archivesspace_users_group-boun...@lyralists.lyrasis.org> *On Behalf Of *Corey
> Schmidt
> *Sent:* Wednesday, February 10, 2021 8:45 AM
> *To:* archivesspace_users_group@lyralists.lyrasis.org
> *Subject:* [Archivesspace_Users_Group] Checking for Broken URLs in
> Resources
>
>
>
>
> Dear all,
>
>
> Hello, this is Corey Schmidt, ArchivesSpace PM at the University of
> Georgia. I hope everyone is doing well and staying safe and healthy.
>
> Would anyone know of any script, plugin, or tool to check for invalid URLs
> within resources? We are investigating how to grab URLs from exported
> EAD.xml files and check them to determine if they throw back any sort of
> error (404s mostly, but also any others). My thinking is to build a small
> app that will export EAD.xml files from ArchivesSpace, then sift through
> the raw xml using python's lxml package to catch any URLs using regex.
> After capturing the URL, it would then use the requests library to check
> the status code of the URL and if it returns an error, log that error in a
> .CSV output file to act as a "report" of all the broken links within that
> resource.
>
> The problems with this method are: 1. Exporting 1000s of resources takes a
> lot of time and some processing power, as well as a moderate amount of
> local storage space. 2. Even checking the raw xml file takes a considerable
> amount of time. The app I'm working on takes overnight to export and check
> all the xml files. I was considering pinging the API for different parts of
> a resource, but I figured that would take as much time as just exporting an
> EAD.xml and would be even more complex to write. I've checked Awesome
> ArchivesSpace, this listserv, and a few script libraries from institutions,
> but haven't found exactly what I am looking for.
>
> Any info or advice would be greatly appreciated! Thanks!
>
> Sincerely,
>
> Corey
>
>
>
> Corey Schmidt
>
> ArchivesSpace Project Manager
>
> University of Georgia Special Collections Libraries
>
> *Email:* corey.schm...@uga.edu
>


-- 
Kevin Schlottmann
Interim Director and Head of Archives Processing
Rare Book & Manuscript Library
Butler Library, Room 801
Columbia University
535 W. 114th St., New York, NY  10027
(212) 854-8483

Re: [Archivesspace_Users_Group] Checking for Broken URLs in Resources

2021-02-10 Thread Corey Schmidt
Nancy,

I have access to our staging database, but not production. I'm not sure our 
sysadmins will allow me to play around in the prod database, unless they can 
assign me read-only access, maybe? Pulling the file_uri values from the 
file_version table would be much more efficient. However, I'm not just looking 
to check digital object links, but also any links found within collection and 
archival object level notes, either copied straight into the text of the notes 
or linked using the  tag. I could probably query the database for that info too.
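
(If I do get read access, I imagine the note text could be scanned in one pass with something like the sketch below. It assumes the note table keeps each note's JSON in a notes blob, which I would need to verify against the actual schema.)

# Scan note JSON in the database for URLs; schema assumptions noted above.
import re
import pymysql

URL_RE = re.compile(rb'https?://[^\s"<>\']+')

conn = pymysql.connect(host="localhost", user="as_readonly",
                       password="...", database="archivesspace")
with conn.cursor() as cur:
    cur.execute("SELECT id, notes FROM note WHERE notes LIKE '%http%'")
    for note_id, blob in cur.fetchall():
        for url in sorted(set(URL_RE.findall(blob or b""))):
            print(note_id, url.decode("utf-8", "replace"))
conn.close()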

Corey

From: archivesspace_users_group-boun...@lyralists.lyrasis.org 
 on behalf of Kennedy, 
Nancy 
Sent: Wednesday, February 10, 2021 9:18 AM
To: Archivesspace Users Group 
Subject: Re: [Archivesspace_Users_Group] Checking for Broken URLs in Resources


Hi Corey –

Do you have access to query the database, as a starting point, instead of EAD?  
We were able to pull the file_uri values from the file_version table in the 
database.  Our sysadmin then checked the response codes for that list of URI, 
and we referred issues out to staff working on those collections.  Some 
corrections can be made directly by staff, or for long lists, you could include 
the digital_object id and post updates that way.



Nancy





From: archivesspace_users_group-boun...@lyralists.lyrasis.org 
 On Behalf Of Corey 
Schmidt
Sent: Wednesday, February 10, 2021 8:45 AM
To: archivesspace_users_group@lyralists.lyrasis.org
Subject: [Archivesspace_Users_Group] Checking for Broken URLs in Resources



Dear all,

Hello, this is Corey Schmidt, ArchivesSpace PM at the University of Georgia. I 
hope everyone is doing well and staying safe and healthy.

Would anyone know of any script, plugin, or tool to check for invalid URLs 
within resources? We are investigating how to grab URLs from exported EAD.xml 
files and check them to determine if they throw back any sort of error (404s 
mostly, but also any others). My thinking is to build a small app that will 
export EAD.xml files from ArchivesSpace, then sift through the raw xml using 
python's lxml package to catch any URLs using regex. After capturing the URL, 
it would then use the requests library to check the status code of the URL and 
if it returns an error, log that error in a .CSV output file to act as a 
"report" of all the broken links within that resource.

The problems with this method are: 1. Exporting 1000s of resources takes a lot 
of time and some processing power, as well as a moderate amount of local 
storage space. 2. Even checking the raw xml file takes a considerable amount of 
time. The app I'm working on takes overnight to export and check all the xml 
files. I was considering pinging the API for different parts of a resource, but 
I figured that would take as much time as just exporting an EAD.xml and would 
be even more complex to write. I've checked Awesome ArchivesSpace, this 
listserv, and a few script libraries from institutions, but haven't found 
exactly what I am looking for.

Any info or advice would be greatly appreciated! Thanks!

Sincerely,

Corey



Corey Schmidt

ArchivesSpace Project Manager

University of Georgia Special Collections Libraries

Email: corey.schm...@uga.edu


[Archivesspace_Users_Group] Checking for Broken URLs in Resources

2021-02-10 Thread Corey Schmidt
Dear all,

Hello, this is Corey Schmidt, ArchivesSpace PM at the University of Georgia. I 
hope everyone is doing well and staying safe and healthy.

Would anyone know of any script, plugin, or tool to check for invalid URLs 
within resources? We are investigating how to grab URLs from exported EAD.xml 
files and check them to determine if they throw back any sort of error (404s 
mostly, but also any others). My thinking is to build a small app that will 
export EAD.xml files from ArchivesSpace, then sift through the raw xml using 
python's lxml package to catch any URLs using regex. After capturing the URL, 
it would then use the requests library to check the status code of the URL and 
if it returns an error, log that error in a .CSV output file to act as a 
"report" of all the broken links within that resource.

The problems with this method are: 1. Exporting 1000s of resources takes a lot 
of time and some processing power, as well as a moderate amount of local 
storage space. 2. Even checking the raw xml file takes a considerable amount of 
time. The app I'm working on takes overnight to export and check all the xml 
files. I was considering pinging the API for different parts of a resource, but 
I figured that would take as much time as just exporting an EAD.xml and would 
be even more complex to write. I've checked Awesome ArchivesSpace, this 
listserv, and a few script libraries from institutions, but haven't found 
exactly what I am looking for.
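
For the export step I've been looking at the API's EAD endpoint, roughly along the lines below; the host, repository id, resource ids, and credentials are all placeholders:

# Export EAD via the ArchivesSpace API instead of the staff interface.
import os
import requests

API = "http://localhost:8089"  # backend URL; placeholder

# Log in; the session token goes in a header on every later request.
login = requests.post(f"{API}/users/admin/login", params={"password": "admin"})
session = login.json()["session"]
headers = {"X-ArchivesSpace-Session": session}

os.makedirs("exports", exist_ok=True)
for resource_id in [1, 2, 3]:  # placeholder ids
    resp = requests.get(
        f"{API}/repositories/2/resource_descriptions/{resource_id}.xml",
        headers=headers, stream=True, timeout=120)
    with open(f"exports/resource_{resource_id}.xml", "wb") as f:
        for chunk in resp.iter_content(chunk_size=65536):
            f.write(chunk)

Streaming the response at least keeps memory flat on very large finding aids, though it doesn't solve the overall runtime.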

Any info or advice would be greatly appreciated! Thanks!

Sincerely,

Corey

Corey Schmidt
ArchivesSpace Project Manager
University of Georgia Special Collections Libraries
Email: corey.schm...@uga.edu


Re: [Archivesspace_Users_Group] System upgrade from v1.5.2 to v2.5.2 and finally to v2.8.0

2021-02-10 Thread Blake Carver
For this one I'd do a full reindex:

1. Shut down ArchivesSpace.
2. Delete everything in /data/. Yes, every single thing in the /data/ directory, but not the directory itself; you need it, just empty.
3. Start ArchivesSpace.

It'll take a while to get everything reindexed.

From: archivesspace_users_group-boun...@lyralists.lyrasis.org 
 on behalf of Corey 
Schmidt 
Sent: Wednesday, February 10, 2021 8:27 AM
To: archivesspace_users_group@lyralists.lyrasis.org 

Subject: Re: [Archivesspace_Users_Group] System upgrade from v1.5.2 to v2.5.2 
and finally to v2.8.0

Jimmy,

After the upgrade, did you run a re-index? I know after we upgraded from 2.6 to 
2.8 we had data that was all in the wrong places in the Staff Interface, due to 
the Solr index being out of sync with the database.

Corey

Corey Schmidt
ArchivesSpace Project Manager
University of Georgia Special Collections Libraries
Email: corey.schm...@uga.edu

From: archivesspace_users_group-boun...@lyralists.lyrasis.org 
 on behalf of Jimmy 
SUNG 
Sent: Wednesday, February 10, 2021 3:44 AM
To: archivesspace_users_group@lyralists.lyrasis.org 

Subject: [Archivesspace_Users_Group] System upgrade from v1.5.2 to v2.5.2 and 
finally to v2.8.0


Hi,



I encountered an issue after a system upgrade. The information in the “Dates” 
field of an Accession has been mixed up with other data, and the create_time is 
not correct. The record below should have been created on 2015-01-21. I would 
be grateful for any advice!



Before upgrade (version v.1.5.2), the “Dates” field: -

Created by ccsung

2015-01-21 15:33:35 +0800

Last Modified by ccsung

2015-06-05 13:28:34 +0800



After upgrade (version v2.8.0), the “Dates” field: -

{"lock_version"=>0, "begin"=>"2007-03-02", "end"=>"2009-06-05", 
"created_by"=>"ccsung", "last_modified_by"=>"ccsung", 
"create_time"=>"2015-06-05T05:28:34Z", "system_mtime"=>"2015-06-05T05:28:34Z", 
"user_mtime"=>"2015-06-05T05:28:34Z", "date_type"=>"inclusive", 
"label"=>"creation", "jsonmodel_type"=>"date"}



Best Regards,

Jimmy SUNG

Technology Support Services

The University of Hong Kong Libraries




Re: [Archivesspace_Users_Group] System upgrade from v1.5.2 to v2.5.2 and finally to v2.8.0

2021-02-10 Thread Corey Schmidt
Jimmy,

After the upgrade, did you run a re-index? I know after we upgraded from 2.6 to 
2.8 we had data that was all in the wrong places in the Staff Interface, due to 
the Solr index being out of sync with the database.

Corey

Corey Schmidt
ArchivesSpace Project Manager
University of Georgia Special Collections Libraries
Email: corey.schm...@uga.edu

From: archivesspace_users_group-boun...@lyralists.lyrasis.org 
 on behalf of Jimmy 
SUNG 
Sent: Wednesday, February 10, 2021 3:44 AM
To: archivesspace_users_group@lyralists.lyrasis.org 

Subject: [Archivesspace_Users_Group] System upgrade from v1.5.2 to v2.5.2 and 
finally to v2.8.0


Hi,



I encountered an issue after a system upgrade. The information in the “Dates” 
field of an Accession has been mixed up with other data, and the create_time is 
not correct. The record below should have been created on 2015-01-21. I would 
be grateful for any advice!



Before upgrade (version v.1.5.2), the “Dates” field: -

Created by ccsung

2015-01-21 15:33:35 +0800

Last Modified by ccsung

2015-06-05 13:28:34 +0800



After upgrade (version v2.8.0), the “Dates” field: -

{"lock_version"=>0, "begin"=>"2007-03-02", "end"=>"2009-06-05", 
"created_by"=>"ccsung", "last_modified_by"=>"ccsung", 
"create_time"=>"2015-06-05T05:28:34Z", "system_mtime"=>"2015-06-05T05:28:34Z", 
"user_mtime"=>"2015-06-05T05:28:34Z", "date_type"=>"inclusive", 
"label"=>"creation", "jsonmodel_type"=>"date"}



Best Regards,

Jimmy SUNG

Technology Support Services

The University of Hong Kong Libraries




[Archivesspace_Users_Group] System upgrade from v1.5.2 to v2.5.2 and finally to v2.8.0

2021-02-10 Thread Jimmy SUNG
Hi,

I encountered an issue after a system upgrade. The information in the "Dates" 
field of an Accession has been mixed up with other data, and the create_time is 
not correct. The record below should have been created on 2015-01-21. I would 
be grateful for any advice!

Before upgrade (version v.1.5.2), the "Dates" field: -
Created by ccsung
2015-01-21 15:33:35 +0800
Last Modified by ccsung
2015-06-05 13:28:34 +0800

After upgrade (version v2.8.0), the "Dates" field: -
{"lock_version"=>0, "begin"=>"2007-03-02", "end"=>"2009-06-05", 
"created_by"=>"ccsung", "last_modified_by"=>"ccsung", 
"create_time"=>"2015-06-05T05:28:34Z", "system_mtime"=>"2015-06-05T05:28:34Z", 
"user_mtime"=>"2015-06-05T05:28:34Z", "date_type"=>"inclusive", 
"label"=>"creation", "jsonmodel_type"=>"date"}

Best Regards,
Jimmy SUNG
Technology Support Services
The University of Hong Kong Libraries

___
Archivesspace_Users_Group mailing list
Archivesspace_Users_Group@lyralists.lyrasis.org
http://lyralists.lyrasis.org/mailman/listinfo/archivesspace_users_group