Hi Manuela,

Are you still using the old, deprecated Database Browse tables (i.e. the "bi_*" tables) and the Lucene search index? I notice you mentioned an error about an old "bi_1_dmap" table (which is part of the old Database Browse system), and that you are running into memory problems with "index-lucene-update" (which is the old Lucene search system).
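For reference, "index-lucene-update" is the old Lucene re-index command; its Solr/Discovery replacement is "index-discovery". A quick sketch (the /dspace path below is only an assumption -- substitute your own [dspace] installation directory):

```shell
# Old, deprecated Lucene full re-index (removed entirely in DSpace 6):
#   /dspace/bin/dspace index-lucene-update

# Solr/Discovery full re-index; -b deletes and rebuilds the whole index:
/dspace/bin/dspace index-discovery -b

# For routine incremental updates, run it without -b:
#   /dspace/bin/dspace index-discovery
```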
As of DSpace 4, we've changed the default search/browse to use Solr/Discovery: https://wiki.duraspace.org/display/DSDOC5x/Discovery

In fact, as of DSpace 6 Solr/Discovery is the only option, so I'd recommend switching to it soon anyway (it may also provide better performance overall). If you are trying to use Solr/Discovery, then you should be using its "index-discovery" script: https://wiki.duraspace.org/display/DSDOC5x/Discovery#Discovery-DiscoverySolrIndexMaintenance

If Solr/Discovery is being used, you can also completely *delete* the old "bi_*" tables, as they are not needed: https://wiki.duraspace.org/display/DSDOC5x/Discovery#Discovery-RemovingLegacyBrowseTables(bi_*)fromyourDatabase

Solr/Discovery should be enabled by default now (as of DSpace 4). But, if somehow you *don't* have it enabled, you can check the old 3.x instructions for how to enable Discovery in either the XMLUI or JSPUI: https://wiki.duraspace.org/display/DSDOC3x/Discovery#Discovery-EnablingDiscovery

- Tim

On Wed, Oct 31, 2018 at 4:17 PM Manuela Ferreira <[email protected]> wrote:
> Thank you for your response.
>
> We have 14GB of memory and 12 cores, with
> JAVA_OPTS="-Xmx8G -Xms6G -XX:MaxMetaspaceSize=1G -server -Dfile_encoding=UTF-8"
>
> We increased memory to 18GB, and changed to
> JAVA_OPTS="-Xmx10G -Xms6G -XX:MaxMetaspaceSize=1G -server -Dfile_encoding=UTF-8"
>
> OpenFiles = 150000, and we increased nprocs to 62754.
>
> Upgrading DSpace will be useful only if we also upgrade PostgreSQL from 9.5 to 10, and this will take time, so if I can, I will avoid it.
>
> Looking for ERROR in dspace.log:
>
> org.postgresql.util.PSQLException: ERRO: relação "bi_1_dmap" não existe
> (ERROR: relation "bi_1_dmap" does not exist)
>   Position: 93
>   at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2103)
>   at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1836)
>   at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:257)
>
> And when running index-lucene-update:
>
> index-lucene-update
> Started: 1540965805844
> Ended: 1540967115704
> Elapsed time: 1309 secs (1309860 msecs)
> Exception: GC overhead limit exceeded
> java.lang.OutOfMemoryError: GC overhead limit exceeded
>   at java.lang.ref.Finalizer.register(Finalizer.java:87)
>   at java.lang.Object.<init>(Object.java:37)
>   at org.dspace.storage.rdbms.TableRowIterator.<init>(TableRowIterator.java:86)
>   at org.dspace.storage.rdbms.TableRowIterator.<init>(TableRowIterator.java:81)
>   at org.dspace.storage.rdbms.TableRowIterator.<init>(TableRowIterator.java:67)
>   at org.dspace.storage.rdbms.DatabaseManager.query(DatabaseManager.java:280)
>   at org.dspace.browse.BrowseCreateDAOPostgres.getDistinctID(BrowseCreateDAOPostgres.java:539)
>   at org.dspace.browse.IndexBrowse.indexItem(IndexBrowse.java:478)
>   at org.dspace.browse.IndexBrowse.createIndex(IndexBrowse.java:1138)
>   at org.dspace.browse.IndexBrowse.main(IndexBrowse.java:682)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.dspace.app.launcher.ScriptLauncher.runOneCommand(ScriptLauncher.java:226)
>   at org.dspace.app.launcher.ScriptLauncher.main(ScriptLauncher.java:78)
>
> We tried the workaround cited at
> https://wiki.duraspace.org/display/DSPACE/Idle+In+Transaction+Problem.
>
> Nothing is working; the "idle in transaction" count increases to about 260 and then DSpace stops responding.
>
> Thank you, we are still trying.
>
> Manuela Klanovicz Ferreira
>
> On Wed, Oct 31, 2018 at 1:49 PM Seth Robbins <[email protected]> wrote:
>
>> Hi all,
>> We've run into similar issues in the past. This only applies to Linux, but the biggest help has been to increase the ulimit for open files and processes per user. The defaults for these are not high enough for a DSpace production server (although they might be fine if you're running a clustered, load-balanced instance). Ours are set to 16384 open files and 62754 user processes, and that seems to have eliminated the problem.
>>
>> It's also important to make sure your JVM is tuned so that the -Xmx parameter (maximum allowed memory usage) is high enough.
>> Thanks,
>> Seth Robbins
>>
>> On Wed, Oct 31, 2018 at 11:28 AM Tim Donohue <[email protected]> wrote:
>>
>>> Hi Manuela,
>>>
>>> Have you checked for any errors in your DSpace logs from over the weekend? It's possible that the thumbnail generation process could be running out of memory (or similar) and causing general instability issues. So, it'd be worth looking at the logs during the time of the thumbnail generation process to see if it's hitting major errors. (As a side note, generating 20,000 thumbnails at once is a *very large number* of thumbnails, so it does make me wonder whether the memory settings on your server are properly optimized.)
>>>
>>> I'd also recommend considering an upgrade to DSpace 5.10 (the latest 5.x version), as in 5.9 we upgraded the PostgreSQL JDBC database driver. DSpace 5.8 uses a very old version of the PostgreSQL database driver, and it's possible you are hitting a problem/bug in how that driver communicates with your PostgreSQL database.
>>> Here's more information on that JDBC driver update: https://jira.duraspace.org/browse/DS-3854
>>>
>>> Please respond on this list with any extra information you may find.
>>>
>>> - Tim
>>>
>>> On Wed, Oct 31, 2018 at 1:53 AM Manuela Ferreira <[email protected]> wrote:
>>>
>>>> Hello!
>>>>
>>>> Since this Monday, our repository, which uses DSpace 5.8, has been very unstable. Suddenly it stops responding to HTTP requests, the dspace and cocoon logs freeze, and about 250 "idle in transaction" connections accumulate on PostgreSQL. Most of these "idle in transaction" queries look like this (see attached screenshot):
>>>> SELECT * FROM MetadataValue WHERE resource_id = $1 AND resource_type_id = $2 ORDER BY metadata_field_id, place
>>>>
>>>> In dspace.cfg our DB configuration is:
>>>>
>>>> # Maximum number of DB connections in pool
>>>> # we have tried 250, but it is still unstable
>>>> db.maxconnections = 150
>>>>
>>>> # Maximum time to wait before giving up if all connections in pool are busy (milliseconds)
>>>> db.maxwait = 10000
>>>>
>>>> # Maximum number of idle connections in pool (-1 = unlimited)
>>>> # we tried 30, but it is still unstable
>>>> db.maxidle = 50
>>>>
>>>> # Determine if prepared statement should be cached. (default is true)
>>>> db.statementpool = true
>>>>
>>>> # Specify a name for the connection pool (useful if you have multiple applications sharing Tomcat's dbcp)
>>>> # If not specified, defaults to 'dspacepool'
>>>> db.poolname = dspacepool
>>>>
>>>> In postgresql.conf, for PostgreSQL 9.5:
>>>> max_connections = 1000
>>>>
>>>> Since Monday, each time we restart Tomcat the situation normalizes for about 10 minutes on average, and then the problem returns. During the night, probably due to the small number of HTTP requests, the repository remains stable for longer.
>>>>
>>>> The only thing we have been doing in the last week is generating about 20,000 pdf.jpg thumbnails every morning, until all 190,000 pdf.jpg are generated.
>>>> Before that, we generated only jpg.jpg thumbnails. This Monday morning we had almost finished generating all the pdf.jpg, but in the afternoon DSpace became unstable. Could the pdf.jpg files generated in the last few days be the cause of the instability? How can I delete these pdf.jpg files?
>>>>
>>>> We increased the level of the dspace and postgresql logs, attached.
>>>>
>>>> Please, help. Our repository has been almost offline since Monday and we can't find out why.
>>>>
>>>> Manuela Klanovicz Ferreira
>>>> dspace_log_aux.txt <https://drive.google.com/file/d/1JREWDTcI1VnK9M5A2weifqFoCXP4mphT/view?usp=drive_web>
>>>> postgresql_log_aux.txt <https://drive.google.com/file/d/13CVjMUw8o3S57m7NxH146CEQSr54sLAa/view?usp=drive_web>
>>>>
>>>> --
>>>> All messages to this mailing list should adhere to the DuraSpace Code of Conduct: https://duraspace.org/about/policies/code-of-conduct/
>>>> ---
>>>> You received this message because you are subscribed to the Google Groups "DSpace Technical Support" group.
>>>> To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
>>>> To post to this group, send email to [email protected].
>>>> Visit this group at https://groups.google.com/group/dspace-tech.
>>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>> --
>>> Tim Donohue
>>> Technical Lead for DSpace & DSpaceDirect
>>> DuraSpace.org | DSpace.org | DSpaceDirect.org
--
Tim Donohue
Technical Lead for DSpace & DSpaceDirect
DuraSpace.org | DSpace.org | DSpaceDirect.org
