That could be useful thanks Jason.  Cheers — James

> On Nov 16, 2016, at 1:10 PM, Jason Loeffler <j...@minorscience.com> wrote:
> 
> 
> Thanks, James. Just wondering if there's a load test battery for a large 
> dataset. I can generate a large dataset if you think it is useful for your 
> work. 
> 
> 
> 
> 
> On Tue, Nov 15, 2016 at 6:56 PM -0500, "James Bullen" <ja...@hudmol.com 
> <mailto:ja...@hudmol.com>> wrote:
> 
> 
> Hi Jason - I’m not aware of a script like this.  Cheers — James
> 
> 
>> On Nov 16, 2016, at 10:28 AM, Jason Loeffler <j...@minorscience.com 
>> <mailto:j...@minorscience.com>> wrote:
>> 
>> Thanks so much, James. Can you tell me if there's a script available for 
>> generating an arbitrary number of test records? Something similar to 
>> Drupal's devel generate <https://api.drupal.org/api/devel/functions/8.x-1.x> 
>> hooks?
>> 
>> Jason Loeffler
>> Technology Consultant | The American Academy in Rome
>> Minor Science | Application Development & Metadata Strategy
>> Brooklyn, New York
>> ja...@minorscience.com <mailto:ja...@minorscience.com>
>> (347) 405-0826
>> minorscience (Skype)
>> 
>> 
>> 
>> On Tue, Nov 15, 2016 at 6:17 PM, James Bullen <ja...@hudmol.com 
>> <mailto:ja...@hudmol.com>> wrote:
>> 
>> Hi all,
>> 
>> As part of our new role as ArchivesSpace development partner, we will be 
>> addressing issues like this as a priority.
>> 
>> It is still early days and we’re working with the folks at the ArchivesSpace 
>> program on a release schedule, so we can’t make any definitive statements 
>> yet, but please know that we are aware of this issue and will address it as 
>> soon as we are able.
>> 
>> 
>> Cheers,
>> James
>> 
>> 
>> —
>> James Bullen
>> Hudson Molonglo
>> 
>> 
>> 
>>> On Nov 16, 2016, at 7:46 AM, Joshua D. Shaw <joshua.d.s...@dartmouth.edu 
>>> <mailto:joshua.d.s...@dartmouth.edu>> wrote:
>>> 
>>> Hi all –
>>>  
>>> We at Dartmouth have experienced similar issues. We have some large 
>>> resources as well (one has 60K+ objects in the tree) and anything that 
>>> involves a save or rearrangement (moving a file around, etc) can take a 
>>> *lot* of time (many minutes) and may cause an error – typically of the 
>>> “another user is modifying this record” type.
>>>  
>>> If we have to do any modifications to a resource of that size, we a) budget 
>>> a lot of time and b) do things in small increments – ie don’t move more 
>>> than a couple of files around at a time. It’s not a great solution, but it 
>>> does minimize some of the headache.
>>>  
>>> I *think* (but haven’t had the time to really dig into this) that one 
>>> reason the error comes about is because the indexer steps on/collides with 
>>> the process that the save/arrangement kicked off. We are still running 1.3 
>>> and hope that some of our issues will be mitigated when we move to 1.5.1, 
>>> though we know that not all of them have been resolved yet.
>>>  
>>> One other data point is that we’ve got a plugin that runs as a background 
>>> job doing a bunch of importing. This background job touches some of the 
>>> larger resources, but does *not* cause the errors and long save times, 
>>> which leads me to believe that a lot of the problem is in the frontend – 
>>> perhaps with the way the tree is populated - as Jason pointed out.
>>>  
>>> Best,
>>> Joshua
>>>  
>>> From: <archivesspace_users_group-boun...@lyralists.lyrasis.org 
>>> <mailto:archivesspace_users_group-boun...@lyralists.lyrasis.org>> on behalf 
>>> of Jason Loeffler <j...@minorscience.com <mailto:j...@minorscience.com>>
>>> Reply-To: Archivesspace Users Group 
>>> <archivesspace_users_group@lyralists.lyrasis.org 
>>> <mailto:archivesspace_users_group@lyralists.lyrasis.org>>
>>> Date: Tuesday, November 15, 2016 at 3:25 PM
>>> To: Archivesspace Users Group 
>>> <archivesspace_users_group@lyralists.lyrasis.org 
>>> <mailto:archivesspace_users_group@lyralists.lyrasis.org>>
>>> Cc: "archivessp...@googlegroups.com 
>>> <mailto:archivessp...@googlegroups.com>" <archivessp...@googlegroups.com 
>>> <mailto:archivessp...@googlegroups.com>>
>>> Subject: Re: [Archivesspace_Users_Group] Problems working with archival 
>>> object with large number of direct children
>>>  
>>> Hi Sally,
>>>  
>>> Definitely, yes. We have many resources with 5,000 or more archival object 
>>> records. We've deployed on some pretty decent Amazon EC2 boxes (16GB 
>>> memory, burstable CPU, etc.) with negligible improvement. I have a feeling 
>>> that this is not a resource allocation issue. Looking at the web inspector, 
>>> most of the time is spent negotiating jstree <http://jstree.com/> and/or 
>>> loading all JSON objects associated with a resource into the browser. Maybe 
>>> an ASpace dev can weigh in.
>>> 
>>> 
>>> From the sysadmin side, Maureen Callahan at Yale commissioned Percona to 
>>> evaluate ArchivesSpace and MySQL performance. I've attached the report. Let 
>>> me know if you need any help interpreting the report.
>>>  
>>> At some point, and quite apart from this thread, I hope we can collectively 
>>> revisit the staff interface architecture and recommend improvements. 
>>>  
>>> JL
>>>  
>>> On Tue, Nov 15, 2016 at 2:37 PM, Sally Vermaaten <sally.vermaa...@nyu.edu 
>>> <mailto:sally.vermaa...@nyu.edu>> wrote:
>>>> Hi everyone,
>>>>  
>>>> We're running into an issue with a large resource record in ArchivesSpace 
>>>> and wonder if anyone has experienced a similar issue. In one resource 
>>>> record, we have a series/archival object with around 19,000 direct 
>>>> children/archival objects. We've found that:  
>>>> ·         it takes several minutes to open the series in the 'tree' 
>>>> navigation view and then, once opened scrolling through series is very 
>>>> slow / laggy
>>>> ·         it takes a couple of minutes to open any archival object in the 
>>>> series in edit mode and 
>>>> ·         it takes a couple of minutes to save changes to any archival 
>>>> object within the series
>>>> Does anyone else have a similarly large archival object in a resource 
>>>> record? If so, have you observed the same long load/save time when editing 
>>>> the component records? 
>>>>  
>>>> The slow load time does not seem to be affected by memory allocation; 
>>>> we've tried increasing the speed / size of the server and it seemed to 
>>>> have no effect. We'd definitely appreciate any other suggestions for how 
>>>> we might fix or work around the problem.
>>>>  
>>>> We also wonder if this performance issue is essentially caused by the 
>>>> queries being run to generate the UI view - i.e. perhaps in generating the 
>>>> resource 'tree' view, all data for the whole series (all 19k archival 
>>>> objects) is being retrieved and stored in memory? If so, we wondered if it 
>>>> would be possible and would make sense to change the queries running 
>>>> during tree generation, etc. to only retrieve some batches at a time, lazy 
>>>> loading style? 
>>>>  
>>>> Thanks,
>>>> Weatherly and Sally
>>>>  
>>>> -- 
>>>> Sally Vermaaten
>>>> Project Manager, Archival Systems
>>>> New York University Libraries
>>>> 1-212-992-6259 <tel:1-212-992-6259>
>>>> 
>>>> _______________________________________________
>>>> Archivesspace_Users_Group mailing list
>>>> Archivesspace_Users_Group@lyralists.lyrasis.org 
>>>> <mailto:Archivesspace_Users_Group@lyralists.lyrasis.org>
>>>> http://lyralists.lyrasis.org/mailman/listinfo/archivesspace_users_group 
>>>> <http://lyralists.lyrasis.org/mailman/listinfo/archivesspace_users_group>
>>>  
>>> wbr>582b7444314351074817778! _______________________________________________
>>> Archivesspace_Users_Group mailing list
>>> Archivesspace_Users_Group@lyralists.lyrasis.org 
>>> <mailto:Archivesspace_Users_Group@lyralists.lyrasis.org>
>>> http://lyralists.lyrasis.org/mailman/listinfo/archivesspace_users_group 
>>> <http://lyralists.lyrasis.org/mailman/listinfo/archivesspace_users_group>
>>> 
>>> 
>>> wbr class="">582b7444314351074817778!
>> 
>> 
>> _______________________________________________
>> Archivesspace_Users_Group mailing list
>> Archivesspace_Users_Group@lyralists.lyrasis.org 
>> <mailto:Archivesspace_Users_Group@lyralists.lyrasis.org>
>> http://lyralists.lyrasis.org/mailman/listinfo/archivesspace_users_group 
>> <http://lyralists.lyrasis.org/mailman/listinfo/archivesspace_users_group>
>> 
>> 
>> !DSPAM:582b9a4b138833087816299! 
>> _______________________________________________
>> Archivesspace_Users_Group mailing list
>> Archivesspace_Users_Group@lyralists.lyrasis.org 
>> <mailto:Archivesspace_Users_Group@lyralists.lyrasis.org>
>> http://lyralists.lyrasis.org/mailman/listinfo/archivesspace_users_group 
>> <http://lyralists.lyrasis.org/mailman/listinfo/archivesspace_users_group>
>> 
>> 
>> !DSPAM:582b9a4b138833087816299!
> 
> !DSPAM:582bc03b277649809715286!
> _______________________________________________
> Archivesspace_Users_Group mailing list
> Archivesspace_Users_Group@lyralists.lyrasis.org
> http://lyralists.lyrasis.org/mailman/listinfo/archivesspace_users_group
> 
> 
> !DSPAM:582bc03b277649809715286!

_______________________________________________
Archivesspace_Users_Group mailing list
Archivesspace_Users_Group@lyralists.lyrasis.org
http://lyralists.lyrasis.org/mailman/listinfo/archivesspace_users_group

Reply via email to