Hey there.  I am definitely not the most qualified person on this listserv to 
answer your question, but I can share what we've done in our instances and 
what I've learned over the years.

First, the number of indexer threads should be keyed to the number of CPU cores 
you have on a given machine.  You list 7; is that how many cores you have?
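If you're not sure of the core count, you can check it directly on a Linux 
server; this is plain coreutils, nothing ASpace-specific:

```shell
# Print the number of CPU cores available to this process (Linux, coreutils).
nproc

# For more detail (sockets, cores per socket, threads per core):
lscpu
```

If `nproc` reports fewer cores than the thread count in your config, the extra 
indexer threads are just contending with each other.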

ASPACE_JAVA_XMX sets the maximum Java heap size.  Your value seems quite high, 
unless you actually have that much RAM available for the application.  You can 
check on your server by running the command “free”, or check with your IT 
department.
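For example, `free -h` prints totals in human-readable units, so you can 
compare physical RAM against the heap you've configured:

```shell
# Show total, used, and free physical memory and swap in human-readable units.
free -h
```

As a rule of thumb, the heap (-Xmx) plus what MySQL, Solr, and the OS need 
should fit comfortably within physical RAM, or the server will start swapping 
and everything slows down.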

Another key thing to note if you are running version 2.7.1: there is a bug in 
older versions of ASpace where the application does not respect the *solr* log 
level set in your config file.  I had to work around this by changing the log 
level at the server level, in the Java settings for Solr.  This isn't an issue 
in recent releases, since Solr is no longer bundled with ASpace, but if you 
don't alter the server settings, the effect of the bug is that the logs get 
really noisy with Solr output, and that can impact performance.  If you think 
this may be an issue for you, reach out to me and I'll send you a document 
describing how to change the Solr log level at the server level.
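For a rough sense of what that server-level change looks like (the exact file 
location and logger names depend on how your Solr and log4j are set up, so 
treat this as an illustration rather than the exact fix; the document I 
mentioned has the specifics), it amounts to raising the threshold on Solr's 
logger in a log4j properties file:

```
# Illustrative log4j setting: only log warnings and errors from Solr,
# instead of logging every request at INFO level.
log4j.logger.org.apache.solr=WARN
```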

Hope things improve for you!

From: archivesspace_users_group-boun...@lyralists.lyrasis.org 
<archivesspace_users_group-boun...@lyralists.lyrasis.org> On Behalf Of Cowing 
[he], Jared
Sent: Tuesday, September 13, 2022 12:33 PM
To: archivesspace_users_group@lyralists.lyrasis.org
Subject: [External] [Archivesspace_Users_Group] Performance tuning

Hi all,

I'm currently working with our campus IT to identify the cause(s) of a 
degradation in performance that we've been seeing. I'm relatively new to our 
library and to ArchivesSpace generally, but I've been told it's been slowing 
bit by bit for a few years, and the problem has escalated just in the past few 
weeks. Neither full re-indexes nor our nightly re-starts seem to help.

I'm aware of this page on 
tuning<https://archivesspace.github.io/tech-docs/provisioning/tuning.html>, 
which has been quite helpful in addition to suggestions already posted to this 
list. We're hopeful that moving to external Solr with our next upgrade will 
also help (currently on 2.7.1), but are still trying other measures just in 
case it doesn't. While we look more into general best practices for tuning our 
hosting environment, I'd also like to check with all of you to see if there are 
common issues that are more specific to ArchivesSpace that we have overlooked 
and should focus our attention on.

Here are a few of our key settings. Our Java memory variables are below. I get 
the sense that they are higher than average; is that so?
ASPACE_JAVA_XMX="-Xmx16144m"
ASPACE_JAVA_XSS="-Xss8m"

Indexer settings from config.rb:
AppConfig[:indexer_records_per_thread] = 25
AppConfig[:indexer_thread_count] = 7
AppConfig[:indexer_solr_timeout_seconds] = 300


And copied from our innodb settings:
innodb_file_per_table
innodb_flush_method=O_DIRECT
innodb_log_file_size=1G
innodb_buffer_pool_size=8G
symbolic-links=0
max_allowed_packet=128M
open_files_limit=9182

I appreciate any tips on what we ought to be looking for. I know it's hard to 
give advice from afar when each institution's situation is different, but 
thought it worth asking in case anything jumped out before we turn to technical 
support.

Thanks,
--

Jared Cowing | Systems Librarian | he/him
WILLIAMS COLLEGE LIBRARIES<https://library.williams.edu/>  | Williamstown, MA | 
(413)597-3061


_______________________________________________
Archivesspace_Users_Group mailing list
Archivesspace_Users_Group@lyralists.lyrasis.org
http://lyralists.lyrasis.org/mailman/listinfo/archivesspace_users_group
