Re: [dspace-tech] Re: IOException in yearly Solr sharding

2018-01-10 Thread Alan Orth
Hi, Christian. I just tried again with 4096m heap size and the error is the
same. I think the problem is indeed related to the Solr optimistic
concurrency version conflicts that are addressed in DSpace 5.7 and DSpace
6.1.

- 5.7: https://wiki.duraspace.org/display/DSPACE/DSpace+Release+5.7+Status
- 6.1: https://wiki.duraspace.org/display/DSPACE/DSpace+Release+6.1+Status
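
For reference, Solr's optimistic concurrency works through the _version_ field:
an update that carries a _version_ which no longer matches the stored document
is rejected with HTTP 409. A minimal illustration (hypothetical core, document
and version values, not the exact request DSpace makes):

  curl 'http://localhost:8080/solr/statistics/update?commit=true' \
    -H 'Content-Type: application/json' \
    -d '[{"id":"example-doc","type":2,"_version_":1234567890123}]'
  # returns HTTP 409 (version conflict) if 1234567890123 is not the document's
  # current _version_; omitting _version_ skips the check entirely.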

Thanks,

On Thu, Jan 11, 2018 at 9:13 AM Christian Scheible <
christian.schei...@uni-konstanz.de> wrote:

> Hi Alan,
>
> have you tried increasing the Java Heap Space?
> On my local development machine (DSpace 6.2) the command did not run when
> Heap Space was only 1 GB but did work with 4GB.
>
> Like this:
> sudo -u tomcat7 JAVA_OPTS="-Xmx4024M -Xms1024M -Dfile.encoding=UTF-8"
> /opt/dspace-kops/bin/dspace stats-util -s
>
> Regards
>
> Christian
>
>
> On 11.01.2018 at 08:04, Alan Orth wrote:
>
> @Mark, I looked in DSpace's solr.log and see that the new Solr core is
> created, then it does some stuff and eventually closes it without an error.
> Neither Tomcat's catalina.out nor localhost.log have any errors around the
> time I attempted to shard on my local development machine. There might be a
> hint here in Tomcat's localhost_access_log, though:
>
> 127.0.0.1 - - [10/Jan/2018:10:51:19 +0200] "GET
> /solr/statistics/select?q=type%3A2+AND+id%3A1=javabin=2
> HTTP/1.1" 200 107
> 127.0.0.1 - - [10/Jan/2018:10:51:19 +0200] "GET
> /solr/statistics/select?q=*%3A*=0=true=time=NOW%2FYEAR-18YEARS=NOW%2FYEAR%2B0YEARS=%2B1YEAR=1=javabin=2
> HTTP/1.1" 200 447
> 127.0.0.1 - - [10/Jan/2018:10:51:19 +0200] "GET
> /solr/admin/cores?action=STATUS=statistics-2016=true=javabin=2
> HTTP/1.1" 200 76
> 127.0.0.1 - - [10/Jan/2018:10:51:19 +0200] "GET
> /solr/admin/cores?action=CREATE=statistics-2016=statistics=%2FUsers%2Faorth%2Fdspace%2Fsolr%2Fstatistics-2016%2Fdata=javabin=2
> HTTP/1.1" 200 63
> 127.0.0.1 - - [10/Jan/2018:10:51:19 +0200] "GET
> /solr/statistics/select?csv.mv.separator=%7C=*%3A*=time%3A%28%5B2016%5C-01%5C-01T00%5C%3A00%5C%3A00Z+TO+2017%5C-01%5C-01T00%5C%3A00%5C%3A00Z%5D+NOT+2017%5C-01%5C-01T00%5C%3A00%5C%3A00Z%29=1=csv
> HTTP/1.1" 200 2137630
> 127.0.0.1 - - [10/Jan/2018:10:51:19 +0200] "GET
> /solr/statistics/admin/luke?show=schema=javabin=2 HTTP/1.1" 200
> 16253
> 127.0.0.1 - - [10/Jan/2018:10:51:19 +0200] "POST
> /solr//statistics-2016/update/csv?commit=true=false=true=true=%7C=%22=true=%7C=%22=true=%7C=%22=true=%7C=%22=true=%7C=%22_communities_map.split=true_communities_map.separator=%7C_communities_map.encapsulator=%22_query_search.split=true_query_search.separator=%7C_query_search.encapsulator=%22=true=%7C=%22=true=%7C=%22=true=%7C=%22=true=%7C=%22_query_search.split=true_query_search.separator=%7C_query_search.encapsulator=%22=true=%7C=%22=true=%7C=%22=true=%7C=%22_group_map.split=true_group_map.separator=%7C_group_map.encapsulator=%22=true=%7C=%22=true=%7C=%22_name.split=true_name.separator=%7C_name.encapsulator=%22_communities_name.split=true_communities_name.separator=%7C_communities_name.encapsulator=%22=true=%7C=%22=true=%7C=%22=true=%7C=%22_query_search.split=true_query_search.separator=%7C_query_search.encapsulator=%22_communities_id.split=true_communities_id.separator=%7C_communities_id.encapsulator=%22=true=%7C=%22_id.split=true_id.separator=%7C_id.encapsulator=%22=true=%7C=%22_simplequery_search.split=true_simplequery_search.separator=%7C_simplequery_search.encapsulator=%22_map.split=true_map.separator=%7C_map.encapsulator=%22=true=%7C=%22_group_id.split=true_group_id.separator=%7C_group_id.encapsulator=%22_group_name.split=true_group_name.separator=%7C_group_name.encapsulator=%22=javabin=2
> HTTP/1.1" 409 156
>
> A new core is created, then DSpace GETs a CSV from Solr, tries to POST it
> to the new core and is greeted with an HTTP 409 error. I just Googled for
> "HTTP 409 solr" and found some mentions of optimistic concurrency and
> version conflicts. Interesting! This indeed sounds a lot like what I've
> read in some Jira issues. Could this be the problem fixed in DSpace 5.7,
> Terry?
>
> Our Solr statistics core has something like 80 million documents so I'm
> really hoping to be able to shard it eventually!
>
> Thank you,
>
> On Wed, Jan 10, 2018 at 7:04 PM Terry Brady 
> wrote:
>
>> Alan,
>>
>> There were some bug fixes to the Solr Sharding process in DSpace 5.7.
>> See
>> https://wiki.duraspace.org/display/~terrywbrady/Statistics+Import+Export+Issues
>> for details.
>>
>> I am running DSpace 5.8 and I was able to shard successfully.
>> https://wiki.duraspace.org/display/DSDOC5x/SOLR+Statistics+Maintenance#SOLRStatisticsMaintenance-SolrShardingByYear
>>
>> Terry
>>
>> On Wed, Jan 10, 2018 at 6:07 AM, Mark H. Wood 
>> wrote:
>>
>>> Does the server log anything interesting?  It seems to be dropping the
>>> connection.  I suspect a timeout of some sort, on the server side.
>>> --
>>> 

Re: [dspace-tech] Re: Notes on PostgreSQL connection pooling with a Tomcat JNDI resource

2018-01-10 Thread Alan Orth
To continue the discussion on a slightly related note: I've just finished
dealing with the fallout caused by some new bot — the only fingerprint of
which is its unique-but-normal-looking user agent — hitting our XMLUI with
450,000 requests from six different IPs over just a few hours. This
generated a ridiculous amount of load on the server, including 160
PostgreSQL connections and 52,000 Tomcat sessions before I was able to
mitigate it. Surprisingly, since I had increased our pool size to 300 after
my last message, we never got pool timeout or database connection errors in
dspace.log, but the site was very unresponsive — and this is on a beefy
server with SSDs, plenty of RAM, large PostgreSQL buffer cache, etc! I
ended up having to rate limit this user agent in our frontend nginx web
server using the limit_req_zone module[0].
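
For anyone curious, the limit_req_zone approach looks roughly like this (a
sketch, not necessarily the exact config in the commit linked below; the
user-agent pattern, zone name and rates are placeholders):

  # http {} context: key requests by client IP, but only when the UA matches the bot
  map $http_user_agent $bot_limit_key {
      default            "";                    # empty key = request is not rate limited
      "~*examplebadbot"  $binary_remote_addr;   # placeholder pattern for the offending UA
  }
  limit_req_zone $bot_limit_key zone=badbots:10m rate=1r/s;

  # in the server/location block that proxies to Tomcat
  limit_req zone=badbots burst=5 nodelay;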

So a bit of a mixed success and frustration here. No amount of pool
tweaking will fix this type of issue, because there's always another
bigger, stupider bot that comes along eventually and doesn't match the
"bot" user agent. I will definitely look into implementing separate pools
as Tom had suggested, though, to limit the damage caused by high load to
certain DSpace web applications. Keep sharing your experiences! This is
very valuable and interesting to me.

[0]
https://github.com/ilri/rmg-ansible-public/commit/368faaa99028c8e0c8a99de3f6c253a228d5f63b

Cheers!

On Thu, Jan 4, 2018 at 7:31 AM Alan Orth  wrote:

> That's a cool idea to use a separate pool for each web application, Tom!
> I'd much rather have my OAI fail to establish a database connection than my
> XMLUI. ;)
>
> Since I wrote the original mailing list message two weeks ago I've had
> DSpace fail to establish a database connection a few thousand times and
> I've increased my pool's max active from 50 to 75 and then 125 — our site
> gets about four million hits per month (from looking at nginx logs), so I'm
> still trying to find the "sweet spot" for the pool settings. Anything's
> better than setting the pool in dspace.cfg, though.
>
> I wish other people would share their pool settings and experiences.
>
> On Wed, Jan 3, 2018 at 2:40 PM Hardy Pottinger 
> wrote:
>
>> Hi, please do create this wiki page, I'd love to read it. Thanks!
>>
>> --Hardy
>>
>> On Wed, Jan 3, 2018 at 4:10 PM, Tom Desair  wrote:
>>
>>> I just wanted to add a small note that having a single DB pool for all
>>> Tomcat webapps can lead (and has led) to problems. Your current pool size is
>>> 50. This means that if you have (malicious) crawlers hitting your OAI
>>> endpoint, they can deplete the database connections available for
>>> the web UI (XMLUI or JSPUI). The other way around can also happen.
>>>
>>> But using JNDI DB pools also gives you more fine-grained control over how
>>> connections are distributed across the different web apps. For example, a default
>>> PostgreSQL installation comes with a max connection limit of 100. This
>>> means you can safely use around 70 connections (from experience). You can
>>> then divide these connections with individual JNDI pools like this (see the
>>> context.xml sketch after the list):
>>>
>>> - OAI: 15 connections
>>> - REST: 15 connections
>>> - WEB UI: 40 connections
>>>
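>>> For illustration, each webapp's pool can be declared in that webapp's
>>> META-INF/context.xml roughly like this (a sketch using the Tomcat JDBC pool;
>>> the resource name has to match db.jndi in dspace.cfg, and the sizes and
>>> credentials here are only placeholders):
>>>
>>> <Context>
>>>   <Resource name="jdbc/dspace" auth="Container" type="javax.sql.DataSource"
>>>             factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
>>>             driverClassName="org.postgresql.Driver"
>>>             url="jdbc:postgresql://localhost:5432/dspace"
>>>             username="dspace" password="dspace"
>>>             maxActive="15" maxIdle="5" maxWait="10000"
>>>             validationQuery="SELECT 1" testOnBorrow="true" />
>>> </Context>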
>>>
>>> Let me know if you've created a JNDI DB pool wiki page. I'll then try to
>>> add some useful information on JDBC interceptors (
>>> https://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html#Configuring_JDBC_interceptors
>>> ).
>>>
>>>
>>> Tom Desair
>>> 250-B Suite 3A, Lucius Gordon Drive, West Henrietta, NY 14586
>>> Gaston Geenslaan 14, Leuven 3001, Belgium
>>> www.atmire.com
>>>
>>> 2018-01-03 22:36 GMT+01:00 Tim Donohue :
>>>
 Hi Alan & Mark,

 These notes look like the start to some enhanced documentation around
 setting up DSpace + Tomcat JNDI (hint, hint).

 I'm wondering (out loud) if we should take these concepts/ideas and
 turn them into official documentation in the "Installing DSpace" section
 (maybe somewhere under "Advanced Installation"?):
 https://wiki.duraspace.org/display/DSDOC6x/Installing+DSpace

 Thanks though for sharing the notes and Q here. I think this will be
 very helpful for others who wish to go this route.

 - Tim


 On Wed, Jan 3, 2018 at 3:17 PM Mark H. Wood 
 wrote:

> Thanks for posting these notes.  I'm sure they will be helpful.
> You've shown some tools that I didn't know about.
>
> A pool instantiated by DSpace is probably effectively invisible to
> other webapps, even in the same JVM.  The Servlet spec. tries very hard to
> create the 

Re: [dspace-tech] Re: IOException in yearly Solr sharding

2018-01-10 Thread Christian Scheible

Hi Alan,

have you tried increasing the Java Heap Space?
On my local development machine (DSpace 6.2) the command did not run 
when Heap Space was only 1 GB but did work with 4GB.


Like this:
sudo -u tomcat7 JAVA_OPTS="-Xmx4024M -Xms1024M -Dfile.encoding=UTF-8" 
/opt/dspace-kops/bin/dspace stats-util -s


Regards
Christian

On 11.01.2018 at 08:04, Alan Orth wrote:
@Mark, I looked in DSpace's solr.log and see that the new Solr core is 
created, then it does some stuff and eventually closes it without an 
error. Neither Tomcat's catalina.out nor localhost.log have any errors 
around the time I attempted to shard on my local development machine. 
There might be a hint here in Tomcat's localhost_access_log, though:


127.0.0.1 - - [10/Jan/2018:10:51:19 +0200] "GET 
/solr/statistics/select?q=type%3A2+AND+id%3A1=javabin=2 
HTTP/1.1" 200 107
127.0.0.1 - - [10/Jan/2018:10:51:19 +0200] "GET 
/solr/statistics/select?q=*%3A*=0=true=time=NOW%2FYEAR-18YEARS=NOW%2FYEAR%2B0YEARS=%2B1YEAR=1=javabin=2 
HTTP/1.1" 200 447
127.0.0.1 - - [10/Jan/2018:10:51:19 +0200] "GET 
/solr/admin/cores?action=STATUS=statistics-2016=true=javabin=2 
HTTP/1.1" 200 76
127.0.0.1 - - [10/Jan/2018:10:51:19 +0200] "GET 
/solr/admin/cores?action=CREATE=statistics-2016=statistics=%2FUsers%2Faorth%2Fdspace%2Fsolr%2Fstatistics-2016%2Fdata=javabin=2 
HTTP/1.1" 200 63
127.0.0.1 - - [10/Jan/2018:10:51:19 +0200] "GET 
/solr/statistics/select?csv.mv.separator=%7C=*%3A*=time%3A%28%5B2016%5C-01%5C-01T00%5C%3A00%5C%3A00Z+TO+2017%5C-01%5C-01T00%5C%3A00%5C%3A00Z%5D+NOT+2017%5C-01%5C-01T00%5C%3A00%5C%3A00Z%29=1=csv 
HTTP/1.1" 200 2137630
127.0.0.1 - - [10/Jan/2018:10:51:19 +0200] "GET 
/solr/statistics/admin/luke?show=schema=javabin=2 HTTP/1.1" 
200 16253
127.0.0.1 - - [10/Jan/2018:10:51:19 +0200] "POST 
/solr//statistics-2016/update/csv?commit=true=false=true=true=%7C=%22=true=%7C=%22=true=%7C=%22=true=%7C=%22=true=%7C=%22_communities_map.split=true_communities_map.separator=%7C_communities_map.encapsulator=%22_query_search.split=true_query_search.separator=%7C_query_search.encapsulator=%22=true=%7C=%22=true=%7C=%22=true=%7C=%22=true=%7C=%22_query_search.split=true_query_search.separator=%7C_query_search.encapsulator=%22=true=%7C=%22=true=%7C=%22=true=%7C=%22_group_map.split=true_group_map.separator=%7C_group_map.encapsulator=%22=true=%7C=%22=true=%7C=%22_name.split=true_name.separator=%7C_name.encapsulator=%22_communities_name.split=true_communities_name.separator=%7C_communities_name.encapsulator=%22=true=%7C=%22=true=%7C=%22=true=%7C=%22_query_search.split=true_query_search.separator=%7C_query_search.encapsulator=%22_communities_id.split=true_communities_id.separator=%7C_communities_id.encapsulator=%22=true=%7C=%22_id.split=true_id.separator=%7C_id.encapsulator=%22=true=%7C=%22_simplequery_search.split=true_simplequery_search.separator=%7C_simplequery_search.encapsulator=%22_map.split=true_map.separator=%7C_map.encapsulator=%22=true=%7C=%22_group_id.split=true_group_id.separator=%7C_group_id.encapsulator=%22_group_name.split=true_group_name.separator=%7C_group_name.encapsulator=%22=javabin=2 
HTTP/1.1" 409 156


A new core is created, then DSpace GETs a CSV from Solr, tries to POST 
it to the new core and is greeted with an HTTP 409 error. I just 
Googled for "HTTP 409 solr" and found some mentions of optimistic 
concurrency and version conflicts. Interesting! This indeed sounds a 
lot like what I've read in some Jira issues. Could this be the problem 
fixed in DSpace 5.7, Terry?


Our Solr statistics core has something like 80 million documents so 
I'm really hoping to be able to shard it eventually!


Thank you,

On Wed, Jan 10, 2018 at 7:04 PM Terry Brady 
> wrote:


Alan,

There were some bug fixes to the Solr Sharding process in DSpace
5.7.  See

https://wiki.duraspace.org/display/~terrywbrady/Statistics+Import+Export+Issues


for details.

I am running DSpace 5.8 and I was able to shard successfully.

https://wiki.duraspace.org/display/DSDOC5x/SOLR+Statistics+Maintenance#SOLRStatisticsMaintenance-SolrShardingByYear

Terry

On Wed, Jan 10, 2018 at 6:07 AM, Mark H. Wood
> wrote:

Does the server log anything interesting?  It seems to be
dropping the connection. I suspect a timeout of some sort, on
the server side.

Re: [dspace-tech] Re: IOException in yearly Solr sharding

2018-01-10 Thread Alan Orth
@Mark, I looked in DSpace's solr.log and see that the new Solr core is
created, then it does some stuff and eventually closes it without an error.
Neither Tomcat's catalina.out nor localhost.log have any errors around the
time I attempted to shard on my local development machine. There might be a
hint here in Tomcat's localhost_access_log, though:

127.0.0.1 - - [10/Jan/2018:10:51:19 +0200] "GET
/solr/statistics/select?q=type%3A2+AND+id%3A1=javabin=2
HTTP/1.1" 200 107
127.0.0.1 - - [10/Jan/2018:10:51:19 +0200] "GET
/solr/statistics/select?q=*%3A*=0=true=time=NOW%2FYEAR-18YEARS=NOW%2FYEAR%2B0YEARS=%2B1YEAR=1=javabin=2
HTTP/1.1" 200 447
127.0.0.1 - - [10/Jan/2018:10:51:19 +0200] "GET
/solr/admin/cores?action=STATUS=statistics-2016=true=javabin=2
HTTP/1.1" 200 76
127.0.0.1 - - [10/Jan/2018:10:51:19 +0200] "GET
/solr/admin/cores?action=CREATE=statistics-2016=statistics=%2FUsers%2Faorth%2Fdspace%2Fsolr%2Fstatistics-2016%2Fdata=javabin=2
HTTP/1.1" 200 63
127.0.0.1 - - [10/Jan/2018:10:51:19 +0200] "GET
/solr/statistics/select?csv.mv.separator=%7C=*%3A*=time%3A%28%5B2016%5C-01%5C-01T00%5C%3A00%5C%3A00Z+TO+2017%5C-01%5C-01T00%5C%3A00%5C%3A00Z%5D+NOT+2017%5C-01%5C-01T00%5C%3A00%5C%3A00Z%29=1=csv
HTTP/1.1" 200 2137630
127.0.0.1 - - [10/Jan/2018:10:51:19 +0200] "GET
/solr/statistics/admin/luke?show=schema=javabin=2 HTTP/1.1" 200
16253
127.0.0.1 - - [10/Jan/2018:10:51:19 +0200] "POST
/solr//statistics-2016/update/csv?commit=true=false=true=true=%7C=%22=true=%7C=%22=true=%7C=%22=true=%7C=%22=true=%7C=%22_communities_map.split=true_communities_map.separator=%7C_communities_map.encapsulator=%22_query_search.split=true_query_search.separator=%7C_query_search.encapsulator=%22=true=%7C=%22=true=%7C=%22=true=%7C=%22=true=%7C=%22_query_search.split=true_query_search.separator=%7C_query_search.encapsulator=%22=true=%7C=%22=true=%7C=%22=true=%7C=%22_group_map.split=true_group_map.separator=%7C_group_map.encapsulator=%22=true=%7C=%22=true=%7C=%22_name.split=true_name.separator=%7C_name.encapsulator=%22_communities_name.split=true_communities_name.separator=%7C_communities_name.encapsulator=%22=true=%7C=%22=true=%7C=%22=true=%7C=%22_query_search.split=true_query_search.separator=%7C_query_search.encapsulator=%22_communities_id.split=true_communities_id.separator=%7C_communities_id.encapsulator=%22=true=%7C=%22_id.split=true_id.separator=%7C_id.encapsulator=%22=true=%7C=%22_simplequery_search.split=true_simplequery_search.separator=%7C_simplequery_search.encapsulator=%22_map.split=true_map.separator=%7C_map.encapsulator=%22=true=%7C=%22_group_id.split=true_group_id.separator=%7C_group_id.encapsulator=%22_group_name.split=true_group_name.separator=%7C_group_name.encapsulator=%22=javabin=2
HTTP/1.1" 409 156

A new core is created, then DSpace GETs a CSV from Solr, tries to POST it
to the new core and is greeted with an HTTP 409 error. I just Googled for
"HTTP 409 solr" and found some mentions of optimistic concurrency and
version conflicts. Interesting! This indeed sounds a lot like what I've
read in some Jira issues. Could this be the problem fixed in DSpace 5.7,
Terry?

Our Solr statistics core has something like 80 million documents so I'm
really hoping to be able to shard it eventually!

Thank you,

On Wed, Jan 10, 2018 at 7:04 PM Terry Brady 
wrote:

> Alan,
>
> There were some bug fixes to the Solr Sharding process in DSpace 5.7.  See
> https://wiki.duraspace.org/display/~terrywbrady/Statistics+Import+Export+Issues
> for details.
>
> I am running DSpace 5.8 and I was able to shard successfully.
> https://wiki.duraspace.org/display/DSDOC5x/SOLR+Statistics+Maintenance#SOLRStatisticsMaintenance-SolrShardingByYear
>
> Terry
>
> On Wed, Jan 10, 2018 at 6:07 AM, Mark H. Wood 
> wrote:
>
>> Does the server log anything interesting?  It seems to be dropping the
>> connection.  I suspect a timeout of some sort, on the server side.
>>
>
>
>
> --
> Terry Brady
> Applications Programmer Analyst
> Georgetown University Library Information Technology
> https://github.com/terrywbrady/info
> 425-298-5498 (Seattle, WA)
>

[dspace-tech] After a database migration from 1.8 to 5.8, post-migration discovery reindex takes 5 days - we have 190,000 items

2018-01-10 Thread Manuela Ferreira
Hello!

In the 5.x DSpace manual,
https://wiki.duraspace.org/display/DSDOC5x/Upgrading+DSpace, we see:
"For large repository instances, this process could take some time to
complete."

For us, this post-migration Discovery reindex takes 5 days,
while an "index-discovery -b" takes 10 hours.
Our repository has 190,000 items.

My questions:
* Is it normal for the post-migration reindex to take 5 days?
* Why does the post-migration reindex take more time than "index-discovery
-b"?

Thanks in advance
Manuela Klanovicz Ferreira



Re: [dspace-tech] Dspace 5 Startup .xsl code

2018-01-10 Thread Terry Brady
Tom,

Do you see any errors in your JavaScript console?

Try replacing "$" with "jQuery" and see if that helps.  When adding jQuery
to my DSpace instance, I have found that jQuery is not always initialized
as I expect.
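
If "$" is the problem, another option is to keep the code as-is but let the
ready handler bind "$" locally, since jQuery passes itself to that callback (a
sketch of the idea, assuming the jquery.cookie plugin is already loaded):

jQuery(document).ready(function($) {
    // inside this handler "$" is jQuery, even if the global "$" alias is taken
    if (!$.cookie('alert')) {
        $('.popup-overlay').show();
    } else {
        $('.popup-overlay').hide();
    }
});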

Terry

On Tue, Jan 2, 2018 at 6:49 AM, Tom Avino  wrote:

> Terry,
> I have had no luck getting this to work.
> I am using a cookie, but it is not being created.  I have the screen as a
> popup-overlay below.  I have all of this in page-structure.xsl. Any idea what I
> am doing wrong?
> <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery-cookie/1.4.1/jquery.cookie.js"></script>
> <script>
> $(document).ready(function() {
>     if (!$.cookie('alert')) {
>         $('.popup-overlay').show();
>         var date = new Date();
>         date.setTime(date.getTime() + 24 * 60 * 60 * 1000);
>         $.cookie('alert', true, {
>             expires: date
>         });
>     } else {
>         $('.popup-overlay').hide();
>     }
> });
> </script>
>
> WARNING!!! By accessing and using this information system,
> you acknowledge and consent to the following:
> You are accessing a U.S. Government information
> system, which includes: (1) this computer; (2) this computer network; (3)
> all computers connected to this network including end user systems; (4) all
> devices and storage media attached to this network or to any computer on
> this network; and (5) cloud and remote information services. This
> information system is provided for U.S. Government-authorized use only. You
> have no reasonable expectation of privacy regarding any communication
> transmitted through or data stored on this information system.  At any
> time, and for any lawful purpose, the U.S. Government may monitor,
> intercept, search, and seize any communication or data transiting, stored
> on, or traveling to or from this information system.  You are NOT
> authorized to process classified information on this information system.
> Unauthorized or improper use of this system may result in suspension or
> loss of access privileges, disciplinary action, and civil and/or criminal
> penalties.
>
> <a data-ix="hide-popup-overly" href="#">OK</a>
>
>
> On Wednesday, November 22, 2017 at 2:09:21 PM UTC-5, Terry Brady wrote:
>>
>> I recommend adding a link to a custom javascript file here.
>>
>> https://github.com/DSpace/DSpace/blob/dspace-5_x/dspace-xmlui-mirage2/src/main/webapp/xsl/core/page-structure.xsl#L130
>>
>> Mirage2 compiles all of the javascript into a single file for
>> efficiency.  Since my custom javascript is small, I load it as a page on
>> its own.
>>
>> Your javascript could use a cookie to display the message only once, or
>> it could detect when it is on the home page and display it on that page.
>>
>> Mirage2 should already have jQuery loaded although the "$" variable may
>> not be active.
>>
>> The following code could detect if you are on the home page.
>>
>>> jQuery(document).ready(function(){
>>>     if (jQuery("#aspect_artifactbrowser_CommunityBrowser_div_comunity-browser").length > 0) {
>>>         //do something
>>>     }
>>> });
>>
>>
>
>
>> On Wed, Nov 22, 2017 at 10:30 AM, Tom Avino  wrote:
>>
>>> Terry,
>>> It doesn't matter if I do this in Javascript or xsl.  I am not sure
>>> where to put the javascript code.
>>> I can put in the code I sent, and it works, but I only need it to show
>>> up the first time, and only once.  When I put it in the page-structure.xsl,
>>> it shows up on every page. I guess I am looking for a "Startup" type page
>>> where some initialization and housekeeping is done.
>>>
>>> I am new to xsl, and it gives me a headache.  I am an old school
>>> assembler language, cobol, html, and SQL developer and can't seem to wrap
>>> my head around xsl.
>>>
>>> I appreciate your help
>>>
>>> On Wednesday, November 22, 2017 at 12:15:03 PM UTC-5, Terry Brady wrote:

 If you want to resolve the issue in XSL (vs JavaScript), I recommend
 that you override this template and place your alert in a similar location.

 https://github.com/DSpace/DSpace/blob/dspace-5_x/dspace-xmlui-mirage2/src/main/webapp/xsl/core/page-structure.xsl#L759-L762

 On Wed, Nov 22, 2017 at 8:51 AM, Tom Avino  wrote:

> We are running 

Re: [dspace-tech] Re: IOException in yearly Solr sharding

2018-01-10 Thread Terry Brady
Alan,

There were some bug fixes to the Solr Sharding process in DSpace 5.7.  See
https://wiki.duraspace.org/display/~terrywbrady/Statistics+Import+Export+Issues
for details.

I am running DSpace 5.8 and I was able to shard successfully.
https://wiki.duraspace.org/display/DSDOC5x/SOLR+Statistics+Maintenance#SOLRStatisticsMaintenance-SolrShardingByYear

Terry

On Wed, Jan 10, 2018 at 6:07 AM, Mark H. Wood  wrote:

> Does the server log anything interesting?  It seems to be dropping the
> connection.  I suspect a timeout of some sort, on the server side.
>



-- 
Terry Brady
Applications Programmer Analyst
Georgetown University Library Information Technology
https://github.com/terrywbrady/info
425-298-5498 (Seattle, WA)



[dspace-tech] Re: IOException in yearly Solr sharding

2018-01-10 Thread Mark H. Wood
Does the server log anything interesting?  It seems to be dropping the 
connection.  I suspect a timeout of some sort, on the server side.



Re: [dspace-tech] I can not save the articles (Periodic scanning of the external database)

2018-01-10 Thread gonzalo perl
Hi Andrea

Yes, I was wrong, sorry. I modified a line of bte.xml and I was able to
import the Scopus articles. But CRIS adds them as accepted submissions
and does not put them into the workflow, while PubMed items do go into
the workflow.


Regards
Gonzalo
 


On Tuesday, 9 January 2018, 15:07:31 (UTC), Bollini Andrea wrote:
>
> Hi Gonzalo,
>
> CATALINA_OPTS is only used when you run Tomcat. The script runs outside
> Tomcat, so you should configure JAVA_OPTS to raise the memory.
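>
> For example, on Windows something like this should work (a sketch, assuming
> the dspace.bat launcher picks up JAVA_OPTS the same way the Unix script does;
> adjust the heap size to your machine):
>
> REM set the heap for the CLI before running the feed
> set JAVA_OPTS=-Xmx2048M -Dfile.encoding=UTF-8
> dspace dsrun org.dspace.app.cris.batch.ScopusFeed -p ...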
>
> Andrea
>
> On 09/01/2018 16:03, gonzalo perl wrote:
>
> Hello
>
> I have the ScopusFeed module running, but I get a Java memory error again:
>
>
> C:\dspace\bin>dspace dsrun org.dspace.app.cris.batch.ScopusFeed -p 
> 
> Using DSpace installation in: C:\dspace
> Exception: Java heap space
> java.lang.OutOfMemoryError: Java heap space
> at org.dspace.app.util.XMLUtils.getElementList(XMLUtils.java:38)
> at org.dspace.app.util.XMLUtils.getSingleElement(XMLUtils.java:116)
> at org.dspace.app.util.XMLUtils.getElementValue(XMLUtils.java:90)
> at 
> org.dspace.submit.lookup.ScopusUtils.convertScopusDomToRecord(ScopusUtils.java:118)
> at 
> org.dspace.submit.lookup.ScopusService.search(ScopusService.java:164)
> at 
> org.dspace.submit.lookup.ScopusOnlineDataLoader.search(ScopusOnlineDataLoader.java:152)
> at 
> org.dspace.app.cris.batch.ScopusFeed.convertToImpRecordItem(ScopusFeed.java:316)
> at org.dspace.app.cris.batch.ScopusFeed.main(ScopusFeed.java:212)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
> at java.lang.reflect.Method.invoke(Unknown Source)
> at 
> org.dspace.app.launcher.ScriptLauncher.runOneCommand(ScriptLauncher.java:226)
> at 
> org.dspace.app.launcher.ScriptLauncher.main(ScriptLauncher.java:78)
>
> C:\dspace\bin>
>
>
> My environment variables have the following values:
>
> CATALINA OPTS= -Xmx2048M -Xms1024M -Dfile.encoding=UTF-8 
> -Dorg.apache.el.parser.SKIP_IDENTIFIER_CHECK=true
>
> Any idea about the error?
>
> Cheers
> Gonzalo
>
>
> On Monday, 8 January 2018, 13:24:52 (UTC), gonzalo perl wrote:
>>
>> I added more memory to Java, but it didn't add the publications to the workflow.
>>
>>
>> C:\dspace\bin>dspace dsrun org.dspace.app.cris.batch.ScopusFeed -p 
>> >
>>  
>> Imported 0 record; 0 marked as removed
>>
>> Any idea?
>>
>> Cheers
>> Gonzalo
>>
>>
>> On Monday, 8 January 2018, 11:14:44 (UTC), gonzalo perl wrote:
>>>
>>> Hi Andrea
>>>
>>> I have changed the date and I have added the affiliation in dspace.cfg
>>> (via build.properties) and I haven't changed scopusfeed.cfg.
>>>
>>> (build.properties)
>>> #query.param.default=affilorg("Your institution")
>>> scopus.query.param.default=affilorg(Universidad de Las Palmas de Gran 
>>> Canaria)
>>>
>>> Scopusfeed.cfg
>>>
>>> scopus.query.param.default=${scopus.query.param.default}
>>>
>>> And I get the following error:
>>>
>>> C:\dspace\bin>dspace dsrun org.dspace.app.cris.batch.ScopusFeed -p 
>>> >> Using DSpace installation in: C:\dspace
>>> Exception: Java heap space
>>> java.lang.OutOfMemoryError: Java heap space
>>> at java.util.Arrays.copyOf(Unknown Source)
>>> at java.util.Arrays.copyOf(Unknown Source)
>>> at java.util.ArrayList.grow(Unknown Source)
>>> at java.util.ArrayList.ensureExplicitCapacity(Unknown Source)
>>> at java.util.ArrayList.ensureCapacityInternal(Unknown Source)
>>> at java.util.ArrayList.add(Unknown Source)
>>> at org.dspace.app.util.XMLUtils.getElementList(XMLUtils.java:44)
>>> at 
>>> org.dspace.app.util.XMLUtils.getSingleElement(XMLUtils.java:116)
>>> at org.dspace.app.util.XMLUtils.getElementValue(XMLUtils.java:90)
>>> at 
>>> org.dspace.submit.lookup.ScopusUtils.convertScopusDomToRecord(ScopusUtils.java:152)
>>> at 
>>> org.dspace.submit.lookup.ScopusService.search(ScopusService.java:164)
>>> at 
>>> org.dspace.submit.lookup.ScopusOnlineDataLoader.search(ScopusOnlineDataLoader.java:152)
>>> at 
>>> org.dspace.app.cris.batch.ScopusFeed.convertToImpRecordItem(ScopusFeed.java:316)
>>> at org.dspace.app.cris.batch.ScopusFeed.main(ScopusFeed.java:212)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
>>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown 
>>> Source)
>>> at java.lang.reflect.Method.invoke(Unknown Source)
>>> at 
>>> org.dspace.app.launcher.ScriptLauncher.runOneCommand(ScriptLauncher.java:226)
>>> at 
>>> org.dspace.app.launcher.ScriptLauncher.main(ScriptLauncher.java:78)
>>>
>>> C:\dspace\bin>
>>>
>>>
>>> Cheers
>>> Gonzalo
>>>
>>>
>>> On Sunday, 7 January 2018, 19:15:20 (UTC), Bollini