[jira] [Comment Edited] (SOLR-6875) No data integrity between replicas

2015-01-11 Thread Alexander S. (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14272877#comment-14272877
 ] 

Alexander S. edited comment on SOLR-6875 at 1/11/15 11:33 AM:
--

Now we have 4 shards, each with 2 replicas (8 nodes in total), and the following picture:
{noformat}
Shard 1:
  Replica 1: *14 486 089*
  Replica 2: *14 496 445*

Shard 2
  Replica 1: 14 496 609
  Replica 2: 14 496 609

Shard 3
  Replica 1: 14 492 812
  Replica 2: 14 492 812

Shard 4
  Replica 1: 14 488 755
  Replica 2: 14 488 755
{noformat}

How could this be? We didn't see anything like this before upgrading from 4.8.1 to
4.10.2. We also enabled checkIntegrityAtMerge; could that be the reason?
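As a side note, one way to compare replicas directly (illustrative commands; the
core names here are hypothetical) is to ask each core for its local document count
with distrib=false, so each number comes from that replica's own index rather than
from a distributed query:
{noformat}
curl 'http://solr1:8983/solr/collection1_shard1_replica1/select?q=*:*&rows=0&distrib=false'
curl 'http://solr2:8983/solr/collection1_shard1_replica2/select?q=*:*&rows=0&distrib=false'
{noformat}
If numFound still differs after indexing has stopped and a hard commit has
completed, the replicas really have diverged.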


was (Author: aheaven):
Now we have 4 shards, each with 2 replicas (8 total nodes) and the next picture:
{noformat}
Shard 1:
  Replica 1: 14 486 089
  Replica 2: 14 496 445

Shard 2
  Replica 1: 14 496 609
  Replica 2: 14 496 609

Shard 3
  Replica 1: 14 492 812
  Replica 2: 14 492 812

Shard 4
  Replica 1: 14 488 755
  Replica 2: 14 488 755
{noformat}

How could it be? We didn't see anything like that before upgrade from 4.8.1 to 
4.10.2. Also we enabled checkIntegrityAtMerge, could it be the reason?

 No data integrity between replicas
 --

 Key: SOLR-6875
 URL: https://issues.apache.org/jira/browse/SOLR-6875
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.2
 Environment: One replica is @ Linux solr1.devops.wegohealth.com 
 3.8.0-29-generic #42~precise1-Ubuntu SMP Wed Aug 14 16:19:23 UTC 2013 x86_64 
 x86_64 x86_64 GNU/Linux
 Another replica is @ Linux solr2.devops.wegohealth.com 3.16.0-23-generic 
 #30-Ubuntu SMP Thu Oct 16 13:17:16 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
 Solr is running with the following options:
 * -Xms12G
 * -Xmx16G
 * -XX:+UseConcMarkSweepGC
 * -XX:+UseLargePages
 * -XX:+CMSParallelRemarkEnabled
 * -XX:+ParallelRefProcEnabled
 * -XX:+UseLargePages
 * -XX:+AggressiveOpts
 * -XX:CMSInitiatingOccupancyFraction=75
Reporter: Alexander S.

 Setup: SolrCloud with 2 shards, each with 2 replicas, 4 nodes in total.
 Indexing is stopped; one replica of a shard (Solr1) shows 45 574 039 docs, 
 and another (Solr1.1) 45 574 038 docs.
 Solr1 is the leader; these errors appeared in the logs:
 {code}
 ERROR - 2014-12-20 09:54:38.783; 
 org.apache.solr.update.StreamingSolrServers$1; error
 java.net.SocketException: Connection reset
 at java.net.SocketInputStream.read(SocketInputStream.java:196)
 at java.net.SocketInputStream.read(SocketInputStream.java:122)
 at 
 org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
 at 
 org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
 at 
 org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
 at 
 org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
 at 
 org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
 at 
 org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:260)
 at 
 org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
 at 
 org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
 at 
 org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
 at 
 org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:271)
 at 
 org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:123)
 at 
 org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:682)
 at 
 org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:486)
 at 
 org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
 at 
 org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
 at 
 org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
 at 
 org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
 at 
 org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner.run(ConcurrentUpdateSolrServer.java:233)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)
 WARN  - 2014-12-20 09:54:38.787; 
 org.apache.solr.update.processor.DistributedUpdateProcessor; 

[jira] [Commented] (SOLR-6840) Remove legacy solr.xml mode

2015-01-11 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14272864#comment-14272864
 ] 

Alan Woodward commented on SOLR-6840:
-

I'm only changing the createServers() method at the moment, which I think fits 
with Ram's patch, so feel free to commit that one.

 Remove legacy solr.xml mode
 ---

 Key: SOLR-6840
 URL: https://issues.apache.org/jira/browse/SOLR-6840
 Project: Solr
  Issue Type: Task
Reporter: Steve Rowe
Assignee: Erick Erickson
Priority: Blocker
 Fix For: 5.0

 Attachments: SOLR-6840.patch, SOLR-6840.patch, SOLR-6840.patch


 On the [Solr Cores and solr.xml 
 page|https://cwiki.apache.org/confluence/display/solr/Solr+Cores+and+solr.xml],
  the Solr Reference Guide says:
 {quote}
 Starting in Solr 4.3, Solr will maintain two distinct formats for 
 {{solr.xml}}, the _legacy_ and _discovery_ modes. The former is the format we 
 have become accustomed to in which all of the cores one wishes to define in a 
 Solr instance are defined in {{solr.xml}} in 
 {{<cores><core/>...<core/></cores>}} tags. This format will continue to be 
 supported through the entire 4.x code line.
 As of Solr 5.0 this form of solr.xml will no longer be supported.  Instead 
 Solr will support _core discovery_. [...]
 The new core discovery mode structure for solr.xml will become mandatory as 
 of Solr 5.0, see: Format of solr.xml.
 {quote}
 AFAICT, nothing has been done to remove legacy {{solr.xml}} mode from 5.0 or 
 trunk.
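 For readers following along, a minimal sketch of the two formats under 
 discussion (paths and core names here are illustrative, not taken from this 
 issue):
 {noformat}
 <!-- legacy solr.xml: every core is enumerated explicitly -->
 <solr>
   <cores adminPath="/admin/cores">
     <core name="collection1" instanceDir="collection1"/>
   </cores>
 </solr>

 <!-- discovery mode: solr.xml no longer lists cores; instead each core
      directory under the solr home contains a core.properties file, e.g.
      collection1/core.properties with the single line: -->
 name=collection1
 {noformat}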



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Solr geospatial index?

2015-01-11 Thread Matteo Tarantino
Wow, thank you David!
You are really kind to spend your time writing all this information for
me. It will be very helpful for my thesis work.

Thank you again.
MT



2015-01-11 2:46 GMT+01:00 david.w.smi...@gmail.com:

 Hello Matteo,

 Welcome. You are not bothering/me-us; you are asking in the right place.

 Jack’s right in terms of the field type dictating how it works.

 LatLonType simply stores the latitude and longitude internally as
 separate floating point fields and it does efficient range queries over
 them for bounding-box queries.  Lucene has remarkably fast/efficient range
 queries over numbers based on a Trie/PrefixTree. In fact systems like
 TitanDB leave such queries to Lucene.  For point-radius, it iterates over
 all of them in-memory in a brute-force fashion (not scalable but may be
 fine).

 BBoxField is similar in spirit to LatLonType; each side of an indexed
 rectangle gets its own floating point field internally.

 Note that for both listed above, the underlying storage and range queries
 use built-in numeric fields.

 SpatialRecursivePrefixTreeFieldType (RPT for short) is interesting in that
 it supports indexing essentially any shape by representing the indexed
 shape as multiple grid squares.  Non-point shapes (e.g. a polygon) are
 approximated; if you need accuracy, you should additionally store the
 vector geometry and validate the results in a 2nd pass (see
 SerializedDVStrategy for help with that).  RPT, like Lucene’s numeric
 fields, uses a Trie/PrefixTree but encodes two dimensions, not one.

 The Trie/PrefixTree concept underlies both RPT and numeric fields, which
 are approaches to using Lucene’s terms index to encode prefixes.  So the
 big point here is that Lucene/Solr doesn’t have side indexes using
 fundamentally different technologies for different types of data; no;
 Lucene’s one versatile index looks up terms (for keyword search), numbers,
 AND 2-d spatial.  For keyword search, the term is a word, for numbers, the
 term represents a contiguous range of values (e.g. 100-200), and for 2-d
 spatial, a term is a grid square (a 2-D range).

 I am aware many other DBs put spatial data in R-Trees, and I have no
 interest investing energy in doing that in Lucene.  That isn’t to say I
 think that other DBs shouldn’t be using R-Trees.  I think a system based on
 sorted keys/terms (like Lucene and Cassandra, Accumulo, HBase, and others)
 already have a powerful/versatile index such that it doesn’t warrant
 complexity in adding something different.  And Lucene’s underlying index
 continues to improve.  I am most excited about an “auto-prefixing”
 technique McCandless has been working on that will bring performance up to
 the next level for numeric & spatial data in Lucene’s index.

 If you’d like to learn more about RPT and Lucene/Solr spatial, I suggest
 my “Spatial Deep Dive” presentation at Lucene Revolution in San Diego, May
 2013:  Lucene / Solr 4 Spatial Deep Dive
 https://www.youtube.com/watch?v=L2cUGv0Rebs&list=PLsj1Ri57ZE94ulvk2vI_WoJrDYs3ckmH0&index=31
 Also, my article here illustrates some RPT concepts in terms of indexing:
 http://opensourceconnections.com/blog/2014/04/11/indexing-polygons-in-lucene-with-accuracy/
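
 To make the above concrete, here is a minimal sketch of an RPT field
 definition and a point-radius filter (field names are illustrative; check the
 parameters against the spatial docs for your Solr version):
 {noformat}
 <!-- schema.xml: index points (or shapes) as grid squares via RPT -->
 <fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType"
            geo="true" distErrPct="0.025" maxDistErr="0.000009" units="degrees"/>
 <field name="geo" type="location_rpt" indexed="true" stored="true"/>

 # point-radius query: everything within 10 km of the given point
 http://localhost:8983/solr/select?q=*:*&fq={!geofilt sfield=geo pt=45.15,-93.85 d=10}
 {noformat}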

 ~ David Smiley
 Freelance Apache Lucene/Solr Search Consultant/Developer
 http://www.linkedin.com/in/davidwsmiley

 On Sat, Jan 10, 2015 at 10:26 AM, Matteo Tarantino 
 matteo.tarant...@gmail.com wrote:

 Hi all,
 I hope to not bother you, but I think I'm writing to the only mailing
 list that can help me with my question.

 I am writing my master thesis about Geographical Information Retrieval
 (GIR) and I'm using Solr to create a little geospatial search engine.
 Reading papers about GIR, I noticed that these systems use a separate data
 structure (like an R-tree http://it.wikipedia.org/wiki/R-tree) to save
 geographical coordinates of documents, but I have found nothing about how
 Solr manages coordinates.

 Can someone help me, and most of all, can someone point me to documents
 that describe how and where Solr stores spatial information?

 Thank you in advance
 Matteo





Re: [JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 730 - Still Failing

2015-01-11 Thread Michael McCandless
Thanks Rob.  This test does easily eat file descriptors...

Mike McCandless

http://blog.mikemccandless.com

On Sat, Jan 10, 2015 at 3:31 PM, Robert Muir rcm...@gmail.com wrote:
 I committed a fix.

 On Sat, Jan 10, 2015 at 9:44 AM, Apache Jenkins Server
 jenk...@builds.apache.org wrote:
 Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/730/

 1 tests failed.
 REGRESSION:  
 org.apache.lucene.index.TestDemoParallelLeafReader.testRandomMultipleSchemaGensSameField

 Error Message:
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/build/core/test/J3/temp/lucene.index.TestDemoParallelLeafReader
  
 78910BB799798171-001/tempDir-002/index/_az_TestBloomFilteredLucenePostings_0.tim:
  Too many open files

 Stack Trace:
 java.nio.file.FileSystemException: 
 /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/build/core/test/J3/temp/lucene.index.TestDemoParallelLeafReader
  
 78910BB799798171-001/tempDir-002/index/_az_TestBloomFilteredLucenePostings_0.tim:
  Too many open files
 at 
 org.apache.lucene.mockfile.HandleLimitFS.onOpen(HandleLimitFS.java:49)
 at 
 org.apache.lucene.mockfile.HandleTrackingFS.callOpenHook(HandleTrackingFS.java:84)
 at 
 org.apache.lucene.mockfile.HandleTrackingFS.newOutputStream(HandleTrackingFS.java:157)
 at java.nio.file.Files.newOutputStream(Files.java:172)
 at 
 org.apache.lucene.store.FSDirectory$FSIndexOutput.init(FSDirectory.java:265)
 at 
 org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:214)
 at 
 org.apache.lucene.store.MockDirectoryWrapper.createOutput(MockDirectoryWrapper.java:607)
 at 
 org.apache.lucene.store.TrackingDirectoryWrapper.createOutput(TrackingDirectoryWrapper.java:43)
 at 
 org.apache.lucene.codecs.blocktree.BlockTreeTermsWriter.init(BlockTreeTermsWriter.java:278)
 at 
 org.apache.lucene.codecs.lucene50.Lucene50PostingsFormat.fieldsConsumer(Lucene50PostingsFormat.java:433)
 at 
 org.apache.lucene.codecs.bloom.BloomFilteringPostingsFormat.fieldsConsumer(BloomFilteringPostingsFormat.java:147)
 at 
 org.apache.lucene.codecs.bloom.TestBloomFilteredLucenePostings.fieldsConsumer(TestBloomFilteredLucenePostings.java:66)
 at 
 org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsWriter.write(PerFieldPostingsFormat.java:196)
 at 
 org.apache.lucene.index.FreqProxTermsWriter.flush(FreqProxTermsWriter.java:107)
 at 
 org.apache.lucene.index.DefaultIndexingChain.flush(DefaultIndexingChain.java:112)
 at 
 org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:419)
 at 
 org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:503)
 at 
 org.apache.lucene.index.DocumentsWriter.postUpdate(DocumentsWriter.java:373)
 at 
 org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:466)
 at 
 org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1415)
 at 
 org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1150)
 at 
 org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1135)
 at 
 org.apache.lucene.index.TestDemoParallelLeafReader.testRandomMultipleSchemaGensSameField(TestDemoParallelLeafReader.java:1076)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
 at 
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 

Re: [jira] [Commented] (SOLR-6954) Considering changing SolrClient#shutdown to SolrClient#close.

2015-01-11 Thread Mark Miller
+1 on CC too.
On Sat, Jan 10, 2015 at 3:41 PM Alan Woodward (JIRA) j...@apache.org
wrote:


 [ https://issues.apache.org/jira/browse/SOLR-6954?page=
 com.atlassian.jira.plugin.system.issuetabpanels:comment-
 tabpanel&focusedCommentId=14272687#comment-14272687 ]

 Alan Woodward commented on SOLR-6954:
 -

 +1.

 I'd suggest we just make shutdown() delegate to close() and deprecate it.

 Might be worth doing the same thing for CoreContainer?
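
 The suggested migration can be sketched with a hypothetical stand-in class
 (not Solr's actual code): shutdown() becomes a thin, deprecated wrapper over
 close(), and implementing Closeable lets try-with-resources and leak-detection
 tooling do the rest.

```java
import java.io.Closeable;

// Hypothetical sketch, not SolrClient itself: the old entry point
// delegates to close() so existing callers keep working.
abstract class ClientSketch implements Closeable {
    private boolean closed = false;

    /** @deprecated use {@link #close()} instead. */
    @Deprecated
    public void shutdown() {
        close();  // delegate; callers migrate at their own pace
    }

    @Override
    public void close() {
        closed = true;  // stand-in for releasing connections/threads
    }

    public boolean isClosed() {
        return closed;
    }
}
```

 Callers can then switch to try-with-resources whenever convenient, while
 shutdown() keeps compiling (with a deprecation warning).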

  Considering changing SolrClient#shutdown to SolrClient#close.
  -
 
  Key: SOLR-6954
  URL: https://issues.apache.org/jira/browse/SOLR-6954
  Project: Solr
   Issue Type: Improvement
 Reporter: Mark Miller
  Fix For: 5.0, Trunk
 
 
  SolrClient#shutdown is not as odd as SolrServer#shutdown, but as we want
 users to release these objects, close is more standard and if we implement
 Closeable, tools help point out leaks.



 --
 This message was sent by Atlassian JIRA
 (v6.3.4#6332)

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




[jira] [Commented] (SOLR-6787) API to manage blobs in Solr

2015-01-11 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14272898#comment-14272898
 ] 

Noble Paul commented on SOLR-6787:
--

Mark
How do you know something is fully developed? You build a feature, add a test, 
do manual testing, and if everything is passing you commit it.
This did not fail any old tests, and it is only failing the new tests in Jenkins.
So I'm fixing what I think is the issue and checking in. Even the original 
commit does not have any obvious problem that I know of. Sometimes I'm trying a 
different approach.
There are dozens of tests failing every day; are we going to remove all of 
them?

 API to manage blobs in Solr
 

 Key: SOLR-6787
 URL: https://issues.apache.org/jira/browse/SOLR-6787
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0, Trunk

 Attachments: SOLR-6787.patch, SOLR-6787.patch


 A special collection called .system needs to be created by the user to 
 store/manage blobs. The schema/solrconfig of that collection need to be 
 automatically supplied by the system so that there are no errors.
 APIs need to be created to manage the content of that collection.
 {code}
 # create your .system collection first
 http://localhost:8983/solr/admin/collections?action=CREATE&name=.system&replicationFactor=2
 # The config for this collection is automatically created. numShards for this collection is hardcoded to 1
 # create a new jar or add a new version of a jar
 curl -X POST -H 'Content-Type: application/octet-stream' --data-binary @mycomponent.jar http://localhost:8983/solr/.system/blob/mycomponent
 # GET on the end point would give a list of jars and other details
 curl http://localhost:8983/solr/.system/blob
 # GET on the end point with a jar name would give details of the various versions of the available jars
 curl http://localhost:8983/solr/.system/blob/mycomponent
 # GET on the end point with a jar name and version, with wt=filestream, to get the actual file
 curl http://localhost:8983/solr/.system/blob/mycomponent/1?wt=filestream > mycomponent.1.jar
 # GET on the end point with a jar name and wt=filestream to get the latest version of the file
 curl http://localhost:8983/solr/.system/blob/mycomponent?wt=filestream > mycomponent.jar
 {code}
 Please note that the jars are never deleted. A new version is added to the system every time a new jar is posted for the name. You must use the standard delete commands to delete the old entries.






[jira] [Comment Edited] (SOLR-6787) API to manage blobs in Solr

2015-01-11 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14272898#comment-14272898
 ] 

Noble Paul edited comment on SOLR-6787 at 1/11/15 1:55 PM:
---

Mark,
How do you know something is fully developed? You build a feature, add a test, 
do manual testing, and if everything is passing, commit it.
This did not fail any old test, and it is only failing the new tests in Jenkins. 
So, I'm fixing what I think is the issue and checking in. Even the original 
commit does not have any obvious problem that I know of. Sometimes I'm trying a 
different approach.
There are dozens of tests failing every day; are we going to remove all of 
them? There were replication issues due to which tests were failing, and that 
could have been a reason for this failure too.


was (Author: noble.paul):
Mark
How do you know something is fully developed? You build a feature , add a test 
do manual testing and if everything is passing commit it . 
This did not fail any old test and it is only failing the new tests in Jenkins. 
So, I'm fixing what I think is the issue and checking in. Even the original 
commit does not have any obvious problem that I know of. Sometimes I'm trying a 
different approach . 
There are dozens of tests failing every day, are we going to remove all of 
them? 

 API to manage blobs in  Solr
 

 Key: SOLR-6787
 URL: https://issues.apache.org/jira/browse/SOLR-6787
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0, Trunk

 Attachments: SOLR-6787.patch, SOLR-6787.patch


 A special collection called .system needs to be created by the user to 
 store/manage blobs. The schema/solrconfig of that collection need to be 
 automatically supplied by the system so that there are no errors.
 APIs need to be created to manage the content of that collection.
 {code}
 # create your .system collection first
 http://localhost:8983/solr/admin/collections?action=CREATE&name=.system&replicationFactor=2
 # The config for this collection is automatically created. numShards for this collection is hardcoded to 1
 # create a new jar or add a new version of a jar
 curl -X POST -H 'Content-Type: application/octet-stream' --data-binary @mycomponent.jar http://localhost:8983/solr/.system/blob/mycomponent
 # GET on the end point would give a list of jars and other details
 curl http://localhost:8983/solr/.system/blob
 # GET on the end point with a jar name would give details of the various versions of the available jars
 curl http://localhost:8983/solr/.system/blob/mycomponent
 # GET on the end point with a jar name and version, with wt=filestream, to get the actual file
 curl http://localhost:8983/solr/.system/blob/mycomponent/1?wt=filestream > mycomponent.1.jar
 # GET on the end point with a jar name and wt=filestream to get the latest version of the file
 curl http://localhost:8983/solr/.system/blob/mycomponent?wt=filestream > mycomponent.jar
 {code}
 Please note that the jars are never deleted. A new version is added to the system every time a new jar is posted for the name. You must use the standard delete commands to delete the old entries.






[jira] [Commented] (SOLR-6940) Query UI in admin should support other facet options

2015-01-11 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14272909#comment-14272909
 ] 

Upayavira commented on SOLR-6940:
-

If you're interested, I could do the 'query' tab as a part of SOLR-5507 in 
Angular, and you could extend it there. The Query tab shouldn't be a huge 
amount of work.



 Query UI in admin should support other facet options
 

 Key: SOLR-6940
 URL: https://issues.apache.org/jira/browse/SOLR-6940
 Project: Solr
  Issue Type: Improvement
  Components: web gui
Reporter: Grant Ingersoll

 As of right now in the Admin Query UI, you can only easily provide facet 
 options for field, query and prefix.  It would be nice to have easy to use 
 options for pivots, ranges, etc.






[jira] [Comment Edited] (SOLR-6787) API to manage blobs in Solr

2015-01-11 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14272898#comment-14272898
 ] 

Noble Paul edited comment on SOLR-6787 at 1/11/15 1:41 PM:
---

Mark,
How do you know something is fully developed? You build a feature, add a test, 
do manual testing, and if everything is passing, commit it.
This did not fail any old test, and it is only failing the new tests in Jenkins. 
So, I'm fixing what I think is the issue and checking in. Even the original 
commit does not have any obvious problem that I know of. Sometimes I'm trying a 
different approach.
There are dozens of tests failing every day; are we going to remove all of 
them? 


was (Author: noble.paul):
Mark
How do you know something is fully developed? You build a feature , add a test 
do manual testing and if everything is passing commit it . 
This did not fail any old test and it is only faking the new tests in Jenkins. 
So, I'm fixing what I think is the issue and checking in. Even the original 
commit does not have any obvious problem that I know of. Sometimes I'm trying a 
different approach . 
There are dozens of tests failing every day, are we going to remove all of 
them? 

 API to manage blobs in  Solr
 

 Key: SOLR-6787
 URL: https://issues.apache.org/jira/browse/SOLR-6787
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0, Trunk

 Attachments: SOLR-6787.patch, SOLR-6787.patch


 A special collection called .system needs to be created by the user to 
 store/manage blobs. The schema/solrconfig of that collection need to be 
 automatically supplied by the system so that there are no errors.
 APIs need to be created to manage the content of that collection.
 {code}
 # create your .system collection first
 http://localhost:8983/solr/admin/collections?action=CREATE&name=.system&replicationFactor=2
 # The config for this collection is automatically created. numShards for this collection is hardcoded to 1
 # create a new jar or add a new version of a jar
 curl -X POST -H 'Content-Type: application/octet-stream' --data-binary @mycomponent.jar http://localhost:8983/solr/.system/blob/mycomponent
 # GET on the end point would give a list of jars and other details
 curl http://localhost:8983/solr/.system/blob
 # GET on the end point with a jar name would give details of the various versions of the available jars
 curl http://localhost:8983/solr/.system/blob/mycomponent
 # GET on the end point with a jar name and version, with wt=filestream, to get the actual file
 curl http://localhost:8983/solr/.system/blob/mycomponent/1?wt=filestream > mycomponent.1.jar
 # GET on the end point with a jar name and wt=filestream to get the latest version of the file
 curl http://localhost:8983/solr/.system/blob/mycomponent?wt=filestream > mycomponent.jar
 {code}
 Please note that the jars are never deleted. A new version is added to the system every time a new jar is posted for the name. You must use the standard delete commands to delete the old entries.






[jira] [Issue Comment Deleted] (SOLR-4580) Support for protecting content in ZK

2015-01-11 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-4580:
--
Comment: was deleted

(was: Hey [~shalinmangar], it seems that the assert you added in 
updateClusterState can be tripped in CollectionsAPIDistributedZkTest.

{noformat}
Error from server at http://127.0.0.1:40942: Expected mime type 
application/octet-stream but got text/html. <html> <head> <meta 
http-equiv="Content-Type" content="text/html;charset=ISO-8859-1"/> <title>Error 
500 {trace=java.lang.AssertionError  at 
org.apache.solr.common.cloud.ZkStateReader.updateClusterState(ZkStateReader.java:532)
  at 
org.apache.solr.common.cloud.ZkStateReader.updateClusterState(ZkStateReader.java:255)
  at 
org.apache.solr.common.cloud.ZkStateReader.removeZKWatch(ZkStateReader.java:900)
  at org.apache.solr.cloud.ZkController.unregister(ZkController.java:1218)  at 
org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:590)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:199)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:188)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:144)
  at 
{noformat})

 Support for protecting content in ZK
 

 Key: SOLR-4580
 URL: https://issues.apache.org/jira/browse/SOLR-4580
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Affects Versions: 4.2
Reporter: Per Steffensen
Assignee: Mark Miller
  Labels: security, solr, zookeeper
 Fix For: 5.0, Trunk

 Attachments: SOLR-4580.patch, SOLR-4580.patch, SOLR-4580.patch, 
 SOLR-4580_branch_4x_r1482255.patch


 We want to protect content in zookeeper. 
 In order to run a CloudSolrServer in client-space, you will have to open up 
 access to zookeeper from client-space. 
 If you do not trust persons or systems in client-space, you want to protect 
 zookeeper against evilness from client-space, e.g.:
 * Changing configuration
 * Trying to mess up the system by manipulating clusterstate
 * Adding a delete-collection job to be carried out by the Overseer
 * etc.
 Even if you do not open up zookeeper access to someone outside your secure 
 zone, you might want to protect zookeeper content from being manipulated by, 
 e.g.:
 * Malware that found its way into the secure zone
 * Other systems also using zookeeper
 * etc.






[jira] [Reopened] (SOLR-6880) ZKStateReader makes a call to updateWatchedCollection, which doesn't accept null with a method creating the argument that can return null.

2015-01-11 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reopened SOLR-6880:
---

Hey [~shalinmangar], it seems that the assert you added in updateClusterState 
can be tripped in CollectionsAPIDistributedZkTest.

{noformat}
Error from server at http://127.0.0.1:40942: Expected mime type 
application/octet-stream but got text/html. <html> <head> <meta 
http-equiv="Content-Type" content="text/html;charset=ISO-8859-1"/> <title>Error 
500 {trace=java.lang.AssertionError  at 
org.apache.solr.common.cloud.ZkStateReader.updateClusterState(ZkStateReader.java:532)
  at 
org.apache.solr.common.cloud.ZkStateReader.updateClusterState(ZkStateReader.java:255)
  at 
org.apache.solr.common.cloud.ZkStateReader.removeZKWatch(ZkStateReader.java:900)
  at org.apache.solr.cloud.ZkController.unregister(ZkController.java:1218)  at 
org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:590)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:199)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:188)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:144)
  at 
{noformat}

 ZKStateReader makes a call to updateWatchedCollection, which doesn't accept 
 null with a method creating the argument that can return null.
 --

 Key: SOLR-6880
 URL: https://issues.apache.org/jira/browse/SOLR-6880
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-6880.patch, SOLR-6880.patch


 I've seen the resulting NPE in tests.






[jira] [Resolved] (SOLR-4580) Support for protecting content in ZK

2015-01-11 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-4580.
---
Resolution: Fixed

Whoops, firefox playing tricks on me.

 Support for protecting content in ZK
 

 Key: SOLR-4580
 URL: https://issues.apache.org/jira/browse/SOLR-4580
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Affects Versions: 4.2
Reporter: Per Steffensen
Assignee: Mark Miller
  Labels: security, solr, zookeeper
 Fix For: 5.0, Trunk

 Attachments: SOLR-4580.patch, SOLR-4580.patch, SOLR-4580.patch, 
 SOLR-4580_branch_4x_r1482255.patch


 We want to protect content in zookeeper. 
 In order to run a CloudSolrServer in client-space, you will have to open up 
 access to zookeeper from client-space. 
 If you do not trust persons or systems in client-space, you want to protect 
 zookeeper against evilness from client-space, e.g.:
 * Changing configuration
 * Trying to mess up the system by manipulating clusterstate
 * Adding a delete-collection job to be carried out by the Overseer
 * etc.
 Even if you do not open up zookeeper access to someone outside your secure 
 zone, you might want to protect zookeeper content from being manipulated by, 
 e.g.:
 * Malware that found its way into the secure zone
 * Other systems also using zookeeper
 * etc.






[jira] [Commented] (SOLR-6941) DistributedQueue#containsTaskWithRequestId can fail with NPE.

2015-01-11 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14272931#comment-14272931
 ] 

Mark Miller commented on SOLR-6941:
---

Odd - I also see failures with this leader rebalance test where the data is 
there, but an assert is tripped when our UTF-8 code tries to parse it.

 DistributedQueue#containsTaskWithRequestId can fail with NPE.
 -

 Key: SOLR-6941
 URL: https://issues.apache.org/jira/browse/SOLR-6941
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-6941.patch


 I've seen this happen somewhat recently. It seems the data can be returned 
 as null, and we need to guard against it.
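The shape of the guard is straightforward; the sketch below is illustrative only (the fetch helper and data format are invented, not the actual SOLR-6941 patch): a ZooKeeper read can hand back a null byte[], so the loop must skip such entries instead of dereferencing them.

```java
import java.util.Arrays;
import java.util.List;

// Illustrative null guard in the spirit of DistributedQueue#containsTaskWithRequestId:
// queue-node data may come back null (node deleted or empty), so check before use.
public class QueueGuardSketch {

    // Pretend fetch: the real code would call zookeeper.getData(...), which
    // may return null. Here, paths ending in "empty" simulate that case.
    static byte[] fetchData(String path) {
        return path.endsWith("empty") ? null : "requestid=42".getBytes();
    }

    static boolean containsTaskWithRequestId(List<String> childPaths, String requestId) {
        for (String path : childPaths) {
            byte[] data = fetchData(path);
            if (data == null) {
                continue; // guard: no data for this node, skip instead of NPE
            }
            if (new String(data).contains("requestid=" + requestId)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        List<String> children = Arrays.asList("/queue/task-empty", "/queue/task-1");
        System.out.println(containsTaskWithRequestId(children, "42")); // prints "true"
        System.out.println(containsTaskWithRequestId(children, "99")); // prints "false"
    }
}
```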






[jira] [Assigned] (SOLR-6952) Copying data-driven configsets by default is not helpful

2015-01-11 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter reassigned SOLR-6952:


Assignee: Timothy Potter

 Copying data-driven configsets by default is not helpful
 

 Key: SOLR-6952
 URL: https://issues.apache.org/jira/browse/SOLR-6952
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Affects Versions: 5.0
Reporter: Grant Ingersoll
Assignee: Timothy Potter
 Fix For: 5.0


 When creating collections (I'm using the bin/solr scripts), I don't think we 
 should automatically copy configsets, especially when running in getting 
 started mode or data driven mode.
 I did the following:
 {code}
 bin/solr create_collection -n foo
 bin/post foo some_data.csv
 {code}
 I then created a second collection with the intention of sending in the same 
 data, but this time run through a python script that changed a value from an 
 int to a string (since it was an enumerated type) and was surprised to see 
 that I got:
 {quote}
 Caused by: java.lang.NumberFormatException: For input string: NA
   at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
   at java.lang.Long.parseLong(Long.java:441)
 {quote}
 for my new version of the data that passes in a string instead of an int, as 
 this new collection had only seen strings for that field.
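The failure mode is easy to reproduce outside Solr: once the data-driven schema has guessed a numeric type from the first batch, a later string value dies in exactly this parse. A plain-Java illustration (not Solr code; the method name is invented):

```java
// Illustration of the schema-guessing pitfall: the first CSV batch made the
// field numeric, so a later "NA" value fails in Long.parseLong, just like
// the quoted stack trace.
public class GuessedTypeSketch {
    static long indexValue(String raw) {
        // the guessed field type forces a numeric parse
        return Long.parseLong(raw);
    }

    public static void main(String[] args) {
        System.out.println(indexValue("7"));   // fine: prints 7
        try {
            indexValue("NA");                  // enumerated-type value sent as a string
        } catch (NumberFormatException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```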






[jira] [Commented] (SOLR-6946) create_core should accept the port as an optional param

2015-01-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14272956#comment-14272956
 ] 

ASF subversion and git services commented on SOLR-6946:
---

Commit 1650912 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1650912 ]

SOLR-6946: Document -p port option for create_core and create_collection 
actions in bin/solr

 create_core should accept the port as an optional param
 ---

 Key: SOLR-6946
 URL: https://issues.apache.org/jira/browse/SOLR-6946
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0
Reporter: Anshum Gupta
Assignee: Timothy Potter
Priority: Critical

 While documenting legacy distributed search, for the purpose of an example, I 
 wanted to start 2 instances on the same machine in standalone mode with a 
 core each and the same config set.
 Here's what I did to start the 2 nodes:
 {code}
 bin/solr start -s example/nodes/node1 -p 8983
 bin/solr start -s example/nodes/node2 -p 8984 
 {code}
 So far so good. Now, create_core doesn't accept a port number, and so it 
 pseudo-randomly picks a node to create the core, i.e. I can't create a core 
 using scripts on both nodes smoothly unless we support -p <port number> 
 with that call (and maybe collection too?).
 FYI, I also tried :
 {code}
 bin/solr start -s example/nodes/node1 -p 8983 -e techproducts
 bin/solr start -s example/nodes/node2 -p 8984 -e techproducts
 {code}
 but this failed as -e overrides -s. I don't really remember why we did that, 
 but perhaps we can consider not overriding -s, even when -e is specified i.e. 
 copy whatever is required and use -s.






[jira] [Commented] (SOLR-6951) Add basic testing for test ObjectReleaseTracker.

2015-01-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14272970#comment-14272970
 ] 

ASF subversion and git services commented on SOLR-6951:
---

Commit 1650916 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1650916 ]

SOLR-6951: Add basic testing for test ObjectReleaseTracker.

 Add basic testing for test ObjectReleaseTracker.
 

 Key: SOLR-6951
 URL: https://issues.apache.org/jira/browse/SOLR-6951
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, Trunk

 Attachments: SOLR-6951.patch









[jira] [Commented] (SOLR-6952) Copying data-driven configsets by default is not helpful

2015-01-11 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14272978#comment-14272978
 ] 

Timothy Potter commented on SOLR-6952:
--

How should the user specify they want to reuse a config that already exists in 
ZooKeeper instead of creating a new config in ZK by copying the template? The 
default behavior will copy the template and name the config the same name as 
the collection in ZK. Maybe something like a -sharedConfig option?

{code}
bin/solr create_collection -n foo -sharedConfig data_driven_schema_configs
{code}

This means to use the data_driven_schema_configs as-is in ZooKeeper and not 
copy it to a new config directory. I like making the shared concept explicit 
in the param / help for the command but open to other approaches too.



 Copying data-driven configsets by default is not helpful
 

 Key: SOLR-6952
 URL: https://issues.apache.org/jira/browse/SOLR-6952
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Affects Versions: 5.0
Reporter: Grant Ingersoll
Assignee: Timothy Potter
 Fix For: 5.0


 When creating collections (I'm using the bin/solr scripts), I don't think we 
 should automatically copy configsets, especially when running in getting 
 started mode or data driven mode.
 I did the following:
 {code}
 bin/solr create_collection -n foo
 bin/post foo some_data.csv
 {code}
 I then created a second collection with the intention of sending in the same 
 data, but this time run through a python script that changed a value from an 
 int to a string (since it was an enumerated type) and was surprised to see 
 that I got:
 {quote}
 Caused by: java.lang.NumberFormatException: For input string: NA
   at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
   at java.lang.Long.parseLong(Long.java:441)
 {quote}
 for my new version of the data that passes in a string instead of an int, as 
 this new collection had only seen strings for that field.






[jira] [Reopened] (SOLR-4580) Support for protecting content in ZK

2015-01-11 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reopened SOLR-4580:
---

Hey [~shalinmangar], it seems that the assert you added in updateClusterState 
can be tripped in CollectionsAPIDistributedZkTest.

{noformat}
Error from server at http://127.0.0.1:40942: Expected mime type 
application/octet-stream but got text/html. <html> <head> <meta 
http-equiv="Content-Type" content="text/html;charset=ISO-8859-1"/> <title>Error 
500 {trace=java.lang.AssertionError  at 
org.apache.solr.common.cloud.ZkStateReader.updateClusterState(ZkStateReader.java:532)
  at 
org.apache.solr.common.cloud.ZkStateReader.updateClusterState(ZkStateReader.java:255)
  at 
org.apache.solr.common.cloud.ZkStateReader.removeZKWatch(ZkStateReader.java:900)
  at org.apache.solr.cloud.ZkController.unregister(ZkController.java:1218)  at 
org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:590)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:199)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:188)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:144)
  at 
{noformat}

 Support for protecting content in ZK
 

 Key: SOLR-4580
 URL: https://issues.apache.org/jira/browse/SOLR-4580
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Affects Versions: 4.2
Reporter: Per Steffensen
Assignee: Mark Miller
  Labels: security, solr, zookeeper
 Fix For: 5.0, Trunk

 Attachments: SOLR-4580.patch, SOLR-4580.patch, SOLR-4580.patch, 
 SOLR-4580_branch_4x_r1482255.patch


 We want to protect content in zookeeper. 
 In order to run a CloudSolrServer in client-space, you will have to open up 
 access to zookeeper from client-space. 
 If you do not trust persons or systems in client-space, you want to protect 
 zookeeper against evilness from client-space, e.g.:
 * Changing configuration
 * Trying to mess up the system by manipulating clusterstate
 * Adding a delete-collection job to be carried out by the Overseer
 * etc.
 Even if you do not open up zookeeper access to someone outside your secure 
 zone, you might want to protect zookeeper content from being manipulated by, 
 e.g.:
 * Malware that found its way into the secure zone
 * Other systems also using zookeeper
 * etc.






[jira] [Assigned] (SOLR-6946) create_core should accept the port as an optional param

2015-01-11 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter reassigned SOLR-6946:


Assignee: Timothy Potter

 create_core should accept the port as an optional param
 ---

 Key: SOLR-6946
 URL: https://issues.apache.org/jira/browse/SOLR-6946
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0
Reporter: Anshum Gupta
Assignee: Timothy Potter
Priority: Critical

 While documenting legacy distributed search, for the purpose of an example, I 
 wanted to start 2 instances on the same machine in standalone mode with a 
 core each and the same config set.
 Here's what I did to start the 2 nodes:
 {code}
 bin/solr start -s example/nodes/node1 -p 8983
 bin/solr start -s example/nodes/node2 -p 8984 
 {code}
 So far so good. Now, create_core doesn't accept a port number, and so it 
 pseudo-randomly picks a node to create the core, i.e. I can't create a core 
 using scripts on both nodes smoothly unless we support -p <port number> 
 with that call (and maybe collection too?).
 FYI, I also tried :
 {code}
 bin/solr start -s example/nodes/node1 -p 8983 -e techproducts
 bin/solr start -s example/nodes/node2 -p 8984 -e techproducts
 {code}
 but this failed as -e overrides -s. I don't really remember why we did that, 
 but perhaps we can consider not overriding -s, even when -e is specified i.e. 
 copy whatever is required and use -s.






[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.8.0_25) - Build # 4305 - Still Failing!

2015-01-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4305/
Java: 32bit/jdk1.8.0_25 -server -XX:+UseParallelGC

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestSolrConfigHandler

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 AEB6A8A3B7A17B26-001\tempDir-007\collection1\conf\configoverlay.json: 
java.nio.file.FileSystemException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 AEB6A8A3B7A17B26-001\tempDir-007\collection1\conf\configoverlay.json: The 
process cannot access the file because it is being used by another process. 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 AEB6A8A3B7A17B26-001\tempDir-007\collection1\conf: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 AEB6A8A3B7A17B26-001\tempDir-007\collection1\conf
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 AEB6A8A3B7A17B26-001\tempDir-007\collection1: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 AEB6A8A3B7A17B26-001\tempDir-007\collection1
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 AEB6A8A3B7A17B26-001\tempDir-007: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 AEB6A8A3B7A17B26-001\tempDir-007 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 AEB6A8A3B7A17B26-001\tempDir-007\collection1\conf\configoverlay.json: 
java.nio.file.FileSystemException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 AEB6A8A3B7A17B26-001\tempDir-007\collection1\conf\configoverlay.json: The 
process cannot access the file because it is being used by another process.

   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 AEB6A8A3B7A17B26-001\tempDir-007\collection1\conf: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 AEB6A8A3B7A17B26-001\tempDir-007\collection1\conf
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 AEB6A8A3B7A17B26-001\tempDir-007\collection1: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 AEB6A8A3B7A17B26-001\tempDir-007\collection1
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 AEB6A8A3B7A17B26-001\tempDir-007: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 AEB6A8A3B7A17B26-001\tempDir-007

at __randomizedtesting.SeedInfo.seed([AEB6A8A3B7A17B26]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:294)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:170)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  

[jira] [Commented] (SOLR-6951) Add basic testing for test ObjectReleaseTracker.

2015-01-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14272971#comment-14272971
 ] 

ASF subversion and git services commented on SOLR-6951:
---

Commit 1650917 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1650917 ]

SOLR-6951: Add basic testing for test ObjectReleaseTracker.

 Add basic testing for test ObjectReleaseTracker.
 

 Key: SOLR-6951
 URL: https://issues.apache.org/jira/browse/SOLR-6951
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, Trunk

 Attachments: SOLR-6951.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-6952) Copying data-driven configsets by default is not helpful

2015-01-11 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14272978#comment-14272978
 ] 

Timothy Potter edited comment on SOLR-6952 at 1/11/15 5:23 PM:
---

How should the user specify they want to reuse a config that already exists in 
ZooKeeper instead of creating a new config in ZK by copying the template? The 
default behavior will copy the template and name the config the same name as 
the collection in ZK. Maybe something like a -sharedConfig option?

{code}
bin/solr create_collection -n foo -sharedConfig data_driven_schema_configs
{code}

This means to use the data_driven_schema_configs as-is in ZooKeeper and not 
copy it to a new config directory. I like making the shared concept explicit 
in the param / help for the command, but I'm open to other approaches too.

Alternatively, we can change the interface to create_collection / create_core 
to use a -t parameter (t for template) and then make the -c optional, giving us:

Example 1:
{code}
bin/solr create_collection -n foo -t data_driven_schema_configs
{code}

Result will be to copy the data_driven_schema_configs directory to ZooKeeper as 
/configs/foo

Example 2:
{code}
bin/solr create_collection -n foo -t data_driven_schema_configs -c shared
{code}

Result will be to copy the data_driven_schema_configs directory to ZooKeeper as 
/configs/shared

Of course, if /configs/shared already exists, then it will be used without 
uploading anything new ...
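The "reuse if it exists" behavior could be sketched like this (a local directory stands in for ZooKeeper's /configs znode; the paths and names are illustrative, not the real implementation):

```shell
# Hypothetical sketch of the proposed create_collection logic: upload the
# template only if the named config does not already exist. configs_root
# stands in for ZK's /configs; all names here are made up.
configs_root=$(mktemp -d)
config_name=shared
template=data_driven_schema_configs
mkdir -p "$configs_root/$config_name"   # pretend /configs/shared already exists
if [ -d "$configs_root/$config_name" ]; then
  echo "reusing existing config: $config_name"
else
  echo "uploading $template to /configs/$config_name"
fi
```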


was (Author: thelabdude):
How should the user specify they want to reuse a config that already exists in 
ZooKeeper instead of creating a new config in ZK by copying the template? The 
default behavior will copy the template and name the config the same name as 
the collection in ZK. Maybe something like a -sharedConfig option?

{code}
bin/solr create_collection -n foo -sharedConfig data_driven_schema_configs
{code}

This means to use the data_driven_schema_configs as-is in ZooKeeper and not 
copy it to a new config directory. I like making the shared concept explicit 
in the param / help for the command, but I'm open to other approaches too.



 Copying data-driven configsets by default is not helpful
 

 Key: SOLR-6952
 URL: https://issues.apache.org/jira/browse/SOLR-6952
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Affects Versions: 5.0
Reporter: Grant Ingersoll
Assignee: Timothy Potter
 Fix For: 5.0


 When creating collections (I'm using the bin/solr scripts), I don't think we 
 should automatically copy configsets, especially when running in getting 
 started mode or data driven mode.
 I did the following:
 {code}
 bin/solr create_collection -n foo
 bin/post foo some_data.csv
 {code}
 I then created a second collection with the intention of sending in the same 
 data, but this time run through a python script that changed a value from an 
 int to a string (since it was an enumerated type) and was surprised to see 
 that I got:
 {quote}
 Caused by: java.lang.NumberFormatException: For input string: "NA"
   at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
   at java.lang.Long.parseLong(Long.java:441)
 {quote}
 for my new version of the data that passes in a string instead of an int, as 
 this new collection had only seen strings for that field.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6959) SimplePostTool reports incorrect base url for PDFs

2015-01-11 Thread Alexandre Rafalovitch (JIRA)
Alexandre Rafalovitch created SOLR-6959:
---

 Summary: SimplePostTool reports incorrect base url for PDFs
 Key: SOLR-6959
 URL: https://issues.apache.org/jira/browse/SOLR-6959
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.0
Reporter: Alexandre Rafalovitch
Priority: Minor


{quote}
$ java -Dc=techproducts -Dauto -Dcommit=no -jar post.jar solr-word.pdf
SimplePostTool version 1.5
Posting files to base url http://localhost:8983/solr/techproducts/update..
{quote}

This command will *not* post to */update*, it will post to */update/extract*. 
This should be reported correspondingly.

From the server log:
{quote}
127.0.0.1 -  -  \[11/Jan/2015:17:17:10 +] POST 
/solr/techproducts/update/extract?resource.name=
{quote}

It would make sense for that message to be after the auto-mode determination 
just before the actual POST.

Also, what's with two dots after the url? If it is _etc_, it should probably be 
three dots.
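The auto-mode routing being described can be sketched as follows (a simplification; the real suffix-to-endpoint mapping lives in SimplePostTool.java and covers more types):

```shell
# Simplified sketch of SimplePostTool's auto-mode endpoint choice:
# structured formats go to /update, rich documents such as PDF go to
# /update/extract. The suffix list is abbreviated and illustrative.
choose_endpoint() {
  case "$1" in
    *.xml|*.json|*.csv) echo "/update" ;;
    *)                  echo "/update/extract" ;;
  esac
}
choose_endpoint solr-word.pdf   # prints /update/extract
```

The reported message could then be printed after this determination, using the endpoint actually chosen.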



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_25) - Build # 4408 - Still Failing!

2015-01-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4408/
Java: 64bit/jdk1.8.0_25 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.ReplicationFactorTest.testDistribSearch

Error Message:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: 
http://127.0.0.1:64317/b_hd/i/repfacttest_c8n_1x3_shard1_replica2

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: 
http://127.0.0.1:64317/b_hd/i/repfacttest_c8n_1x3_shard1_replica2
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:581)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:890)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:793)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:736)
at 
org.apache.solr.cloud.ReplicationFactorTest.testRf3(ReplicationFactorTest.java:277)
at 
org.apache.solr.cloud.ReplicationFactorTest.doTest(ReplicationFactorTest.java:123)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.GeneratedMethodAccessor42.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 

[jira] [Updated] (SOLR-6958) per field basis capabilities for SolrQuery

2015-01-11 Thread Leonardo de Lima Oliveira (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonardo de Lima Oliveira updated SOLR-6958:

Affects Version/s: Trunk

 per field basis capabilities for SolrQuery
 

 Key: SOLR-6958
 URL: https://issues.apache.org/jira/browse/SOLR-6958
 Project: Solr
  Issue Type: Improvement
  Components: SolrJ
Affects Versions: 4.10.3, Trunk
Reporter: Leonardo de Lima Oliveira
Priority: Minor
 Attachments: SOLR-6958.patch


 Many parameters defined in FacetParams, HighlightParams, etc. can be 
 specified on a per-field basis as overrides (e.g., facet.my_field1.offset=10, 
 hl.my_field1.snippets=2, etc.). SolrQuery already supports per-field 
 overrides for FacetParams.FACET_PREFIX, but for many other parameters this 
 feature does not exist. This patch standardizes the definition of per-field 
 parameters, removes a hardcoded string referenced by 
 FacetParams.FACET_INTERVAL_SET, and removes per-field parameters when the 
 main parameter of a SearchComponent is disabled (FacetParams.FACET=false, 
 TermsParams.TERMS=false, etc.)
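For illustration, here is how such per-field overrides appear on a request; Solr's per-field parameters take the form f.&lt;field&gt;.&lt;param&gt;, and the field name and values below are made up:

```shell
# Illustrative only: building the per-field override parameters that a
# client like SolrQuery would generate. Field name and values are
# hypothetical.
field=my_field1
params="facet=true&facet.field=$field"
params="$params&f.$field.facet.offset=10"
params="$params&hl=true&hl.fl=$field&f.$field.hl.snippets=2"
echo "$params"
```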



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6958) per field basis capabilities for SolrQuery

2015-01-11 Thread Leonardo de Lima Oliveira (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonardo de Lima Oliveira updated SOLR-6958:

Affects Version/s: 4.10.3

 per field basis capabilities for SolrQuery
 

 Key: SOLR-6958
 URL: https://issues.apache.org/jira/browse/SOLR-6958
 Project: Solr
  Issue Type: Improvement
  Components: SolrJ
Affects Versions: 4.10.3, Trunk
Reporter: Leonardo de Lima Oliveira
Priority: Minor
 Attachments: SOLR-6958.patch


 Many parameters defined in FacetParams, HighlightParams, etc. can be 
 specified on a per-field basis as overrides (e.g., facet.my_field1.offset=10, 
 hl.my_field1.snippets=2, etc.). SolrQuery already supports per-field 
 overrides for FacetParams.FACET_PREFIX, but for many other parameters this 
 feature does not exist. This patch standardizes the definition of per-field 
 parameters, removes a hardcoded string referenced by 
 FacetParams.FACET_INTERVAL_SET, and removes per-field parameters when the 
 main parameter of a SearchComponent is disabled (FacetParams.FACET=false, 
 TermsParams.TERMS=false, etc.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2464 - Still Failing

2015-01-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2464/

4 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.testDistribSearch

Error Message:
org.apache.http.NoHttpResponseException: The target server failed to respond

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.http.NoHttpResponseException: The target server failed to respond
at 
__randomizedtesting.SeedInfo.seed([E3922FC18B890659:6274A1D9FCD5]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:871)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:736)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:480)
at 
org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:201)
at 
org.apache.solr.cloud.HttpPartitionTest.doTest(HttpPartitionTest.java:114)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 

[jira] [Updated] (SOLR-6920) During replication use checksums to verify if files are the same

2015-01-11 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-6920:

Attachment: SOLR-6920.patch

Updated patch. Tests pass.

The difference from the earlier patch is the API usage of SegmentInfos

For reference, here is the link to the question I asked on the Lucene user 
mailing list about the SegmentInfos API usage - 
http://mail-archives.apache.org/mod_mbox/lucene-java-user/201501.mbox/%3CCAEH2wZDm%2BEXEhWEyp9RoQDVffb7jJSG31A3WVGxV_TNCE%3D12zA%40mail.gmail.com%3E

 During replication use checksums to verify if files are the same
 

 Key: SOLR-6920
 URL: https://issues.apache.org/jira/browse/SOLR-6920
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Reporter: Varun Thacker
 Attachments: SOLR-6920.patch, SOLR-6920.patch


 Currently we check if an index file on the master and slave is the same by 
 checking if it's name and file length match. 
 With LUCENE-2446 we now have a checksums for each index file in the segment. 
 We should leverage this to verify if two files are the same.
 Places like SnapPuller.isIndexStale and SnapPuller.downloadIndexFiles should 
 check against the checksum also.
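The idea can be sketched roughly like this (file names are made up; Lucene actually stores a CRC32 in each index file's footer, so recomputing with cksum here is only an approximation of the comparison the patch would do in Java):

```shell
# Rough sketch: decide whether to re-download a file by comparing
# checksums rather than just name + length. Two files with identical
# bytes should compare equal; paths and contents are illustrative.
master=$(mktemp) && printf 'segment bytes' > "$master"
slave=$(mktemp)  && printf 'segment bytes' > "$slave"
master_sum=$(cksum < "$master")
slave_sum=$(cksum < "$slave")
if [ "$master_sum" = "$slave_sum" ]; then
  echo "checksums match, skip download"
else
  echo "checksums differ, fetch from master"
fi
```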



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6923) kill -9 doesn't change the replica state in clusterstate.json

2015-01-11 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14272953#comment-14272953
 ] 

Varun Thacker commented on SOLR-6923:
-

Thanks Tim for pointing it out. I was not aware of this.

I'll rename the issue appropriately with this information and come up with a 
patch for AutoAddReplicas to consult live nodes too.
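The check being proposed can be sketched like this (node names mirror the clusterstate example in this issue, but the logic is illustrative only; the real implementation would consult ZooKeeper's /live_nodes):

```shell
# Sketch: a replica counts as effectively down when its node_name is
# missing from live_nodes, even if clusterstate.json still says "active".
live_nodes="169.254.113.194:8983_solr"     # the node on 8984 was killed
replica_node="169.254.113.194:8984_solr"
recorded_state="active"
case " $live_nodes " in
  *" $replica_node "*) effective_state="$recorded_state" ;;
  *)                   effective_state="down" ;;
esac
echo "effective state: $effective_state"
```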

 kill -9 doesn't change the replica state in clusterstate.json
 -

 Key: SOLR-6923
 URL: https://issues.apache.org/jira/browse/SOLR-6923
 Project: Solr
  Issue Type: Bug
Reporter: Varun Thacker

 - I did the following 
 {code}
 ./solr start -e cloud -noprompt
 kill -9 pid-of-node2 //Not the node which is running ZK
 {code}
 - /live_nodes reflects that the node is gone.
 - This is the only message which gets logged on the node1 server after 
 killing node2
 {code}
 45812 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:9983] WARN  
 org.apache.zookeeper.server.NIOServerCnxn  – caught end of stream exception
 EndOfStreamException: Unable to read additional data from client sessionid 
 0x14ac40f26660001, likely client has closed socket
 at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
 at 
 org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
 at java.lang.Thread.run(Thread.java:745)
 {code}
 - The graph shows the node2 as 'Gone' state
 - clusterstate.json keeps showing the replica as 'active'
 {code}
 {"collection1":{
     "shards":{"shard1":{
         "range":"8000-7fff",
         "state":"active",
         "replicas":{
           "core_node1":{
             "state":"active",
             "core":"collection1",
             "node_name":"169.254.113.194:8983_solr",
             "base_url":"http://169.254.113.194:8983/solr",
             "leader":"true"},
           "core_node2":{
             "state":"active",
             "core":"collection1",
             "node_name":"169.254.113.194:8984_solr",
             "base_url":"http://169.254.113.194:8984/solr"}}}},
     "maxShardsPerNode":"1",
     "router":{"name":"compositeId"},
     "replicationFactor":"1",
     "autoAddReplicas":"false",
     "autoCreated":"true"}}
 {code}
 One immediate problem I can see is that AutoAddReplicas doesn't work since 
 the clusterstate.json never changes. There might be more features which are 
 affected by this.
 On first thought I think we can handle this - The shard leader could listen 
 to changes on /live_nodes and if it has replicas that were on that node, mark 
 it as 'down' in the clusterstate.json?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6923) AutoAddReplicas should consult live nodes also to see if a state has changed

2015-01-11 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-6923:

Summary: AutoAddReplicas should consult live nodes also to see if a state 
has changed  (was: kill -9 doesn't change the replica state in 
clusterstate.json)

 AutoAddReplicas should consult live nodes also to see if a state has changed
 

 Key: SOLR-6923
 URL: https://issues.apache.org/jira/browse/SOLR-6923
 Project: Solr
  Issue Type: Bug
Reporter: Varun Thacker

 - I did the following 
 {code}
 ./solr start -e cloud -noprompt
 kill -9 pid-of-node2 //Not the node which is running ZK
 {code}
 - /live_nodes reflects that the node is gone.
 - This is the only message which gets logged on the node1 server after 
 killing node2
 {code}
 45812 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:9983] WARN  
 org.apache.zookeeper.server.NIOServerCnxn  – caught end of stream exception
 EndOfStreamException: Unable to read additional data from client sessionid 
 0x14ac40f26660001, likely client has closed socket
 at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
 at 
 org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
 at java.lang.Thread.run(Thread.java:745)
 {code}
 - The graph shows the node2 as 'Gone' state
 - clusterstate.json keeps showing the replica as 'active'
 {code}
 {"collection1":{
     "shards":{"shard1":{
         "range":"8000-7fff",
         "state":"active",
         "replicas":{
           "core_node1":{
             "state":"active",
             "core":"collection1",
             "node_name":"169.254.113.194:8983_solr",
             "base_url":"http://169.254.113.194:8983/solr",
             "leader":"true"},
           "core_node2":{
             "state":"active",
             "core":"collection1",
             "node_name":"169.254.113.194:8984_solr",
             "base_url":"http://169.254.113.194:8984/solr"}}}},
     "maxShardsPerNode":"1",
     "router":{"name":"compositeId"},
     "replicationFactor":"1",
     "autoAddReplicas":"false",
     "autoCreated":"true"}}
 {code}
 One immediate problem I can see is that AutoAddReplicas doesn't work since 
 the clusterstate.json never changes. There might be more features which are 
 affected by this.
 On first thought I think we can handle this - The shard leader could listen 
 to changes on /live_nodes and if it has replicas that were on that node, mark 
 it as 'down' in the clusterstate.json?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6946) create_core should accept the port as an optional param

2015-01-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14272960#comment-14272960
 ] 

ASF subversion and git services commented on SOLR-6946:
---

Commit 1650915 from [~thelabdude] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1650915 ]

SOLR-6946: Document -p port option for create_core and create_collection 
actions in bin/solr

 create_core should accept the port as an optional param
 ---

 Key: SOLR-6946
 URL: https://issues.apache.org/jira/browse/SOLR-6946
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0
Reporter: Anshum Gupta
Assignee: Timothy Potter
Priority: Critical

 While documenting legacy distributed search, for the purpose of an example, I 
 wanted to start 2 instances on the same machine in standalone mode with a 
 core each and the same config set.
 Here's what I did to start the 2 nodes:
 {code}
 bin/solr start -s example/nodes/node1 -p 8983
 bin/solr start -s example/nodes/node2 -p 8984 
 {code}
 So far so good. Now, create_core doesn't accept a port number and so it 
 pseudo-randomly picks a node to create the core, i.e., I can't create a core 
 using scripts on both nodes smoothly unless we support a -p &lt;port number&gt; 
 option with that call (and maybe collection too?).
 FYI, I also tried :
 {code}
 bin/solr start -s example/nodes/node1 -p 8983 -e techproducts
 bin/solr start -s example/nodes/node2 -p 8984 -e techproducts
 {code}
 but this failed as -e overrides -s. I don't really remember why we did that, 
 but perhaps we can consider not overriding -s, even when -e is specified i.e. 
 copy whatever is required and use -s.
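The requested behavior could be sketched as building the target URL from an explicit port (the port, core name, and URL shape below are illustrative, not the actual bin/solr implementation):

```shell
# Hypothetical sketch: create_core targeting an explicit -p port instead
# of letting the script pick a running node pseudo-randomly.
port=8984
name=node2core
url="http://localhost:$port/solr/admin/cores?action=CREATE&name=$name"
echo "$url"
```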



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6946) create_core should accept the port as an optional param

2015-01-11 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter resolved SOLR-6946.
--
   Resolution: Fixed
Fix Version/s: Trunk
   5.0

 create_core should accept the port as an optional param
 ---

 Key: SOLR-6946
 URL: https://issues.apache.org/jira/browse/SOLR-6946
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0
Reporter: Anshum Gupta
Assignee: Timothy Potter
Priority: Critical
 Fix For: 5.0, Trunk


 While documenting legacy distributed search, for the purpose of an example, I 
 wanted to start 2 instances on the same machine in standalone mode with a 
 core each and the same config set.
 Here's what I did to start the 2 nodes:
 {code}
 bin/solr start -s example/nodes/node1 -p 8983
 bin/solr start -s example/nodes/node2 -p 8984 
 {code}
 So far so good. Now, create_core doesn't accept a port number and so it 
 pseudo-randomly picks a node to create the core, i.e., I can't create a core 
 using scripts on both nodes smoothly unless we support a -p &lt;port number&gt; 
 option with that call (and maybe collection too?).
 FYI, I also tried :
 {code}
 bin/solr start -s example/nodes/node1 -p 8983 -e techproducts
 bin/solr start -s example/nodes/node2 -p 8984 -e techproducts
 {code}
 but this failed as -e overrides -s. I don't really remember why we did that, 
 but perhaps we can consider not overriding -s, even when -e is specified i.e. 
 copy whatever is required and use -s.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6954) Considering changing SolrClient#shutdown to SolrClient#close.

2015-01-11 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14272962#comment-14272962
 ] 

Uwe Schindler commented on SOLR-6954:
-

+1, especially because of warnings in IDEs! :-)

 Considering changing SolrClient#shutdown to SolrClient#close.
 -

 Key: SOLR-6954
 URL: https://issues.apache.org/jira/browse/SOLR-6954
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
 Fix For: 5.0, Trunk


 SolrClient#shutdown is not as odd as SolrServer#shutdown, but as we want 
 users to release these objects, close is more standard and if we implement 
 Closeable, tools help point out leaks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6607) Managing requesthandlers through API

2015-01-11 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14273017#comment-14273017
 ] 

Alexandre Rafalovitch commented on SOLR-6607:
-

What happened with the *_meta_* implementation for the ground truth?

Now that */update* is in the explicit configuration, I cannot seem to find that 
indicated when I do *curl 
http://localhost:8983/solr/techproducts/config/requestHandler*

I talked about the *ground truth* in this issue on November 28th. I can't 
find what addresses it. Is there a different JIRA that I am missing perhaps? 
Without the ground truth it is very hard to debug things.

 Managing requesthandlers through API
 

 Key: SOLR-6607
 URL: https://issues.apache.org/jira/browse/SOLR-6607
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0, Trunk

 Attachments: SOLR-6607.patch


 The concept of solrconfig editing is split into multiple pieces. This issue 
 is about registering components and uploading binaries through an API.
 This supports multiple operations:
  * commands 'create-requesthandler', 'update-requesthandler', and 
 'delete-requesthandler', which can set the configuration of a component. 
 This configuration will be saved inside configoverlay.json.
 The components have to be available in the classpath of all nodes. 
 Example of registering a component:
 {code}
 curl http://localhost:8983/solr/collection1/config -H 
 'Content-type:application/json' -d '{
   "create-requesthandler": {
     "name": "/mypath",
     "class": "com.mycomponent.ClassName",
     "defaults": {"x": "y", "a": "b"},
     "useParams": "x"
   },
   "update-requesthandler": {
     "name": "/mypath",
     "class": "com.mycomponent.ClassName",
     "useParams": "y",
     "defaults": {"x": "y", "a": "b"}
   },
   "delete-requesthandler": "/mypath"
 }'
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6960) Config reporting handler is missing initParams defaults

2015-01-11 Thread Alexandre Rafalovitch (JIRA)
Alexandre Rafalovitch created SOLR-6960:
---

 Summary: Config reporting handler is missing initParams defaults
 Key: SOLR-6960
 URL: https://issues.apache.org/jira/browse/SOLR-6960
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0
Reporter: Alexandre Rafalovitch
 Fix For: 5.0


*curl http://localhost:8983/solr/techproducts/config/requestHandler* produces 
(fragments):
{quote}
  "/update": {
    "name": "/update",
    "class": "org.apache.solr.handler.UpdateRequestHandler",
    "defaults": {}},
  "/update/json/docs": {
    "name": "/update/json/docs",
    "class": "org.apache.solr.handler.UpdateRequestHandler",
    "defaults": {
      "update.contentType": "application/json",
      "json.command": "false"}},
{quote}

Where are the defaults from these initParams:
{quote}
<initParams path="/update/**,/query,/select,/tvrh,/elevate,/spell,/browse">
  <lst name="defaults">
    <str name="df">text</str>
  </lst>
</initParams>

<initParams path="/update/json/docs">
  <lst name="defaults">
    <str name="srcField">_src_</str>
    <str name="mapUniqueKeyOnly">true</str>
  </lst>
</initParams>
{quote}

Obviously, a test is missing as well to catch this.
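
For comparison, a hypothetical merged report (assuming the initParams defaults were folded into each handler's entry, which is what one would expect the config reporting handler to produce) might look like:

{code}
  "/update/json/docs": {
    "name": "/update/json/docs",
    "class": "org.apache.solr.handler.UpdateRequestHandler",
    "defaults": {
      "update.contentType": "application/json",
      "json.command": "false",
      "srcField": "_src_",
      "mapUniqueKeyOnly": "true"}},
{code}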



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6961) CloudSolrClientTest#stateVersionParamTest Failure.

2015-01-11 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14273034#comment-14273034
 ] 

Mark Miller commented on SOLR-6961:
---

NOTE: reproduce with: ant test  -Dtestcase=CloudSolrClientTest 
-Dtests.method=testDistribSearch -Dtests.seed=4C1D87B132BABE4A 
-Dtests.slow=true -Dtests.locale=ar -Dtests.timezone=America/Campo_Grande 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

 CloudSolrClientTest#stateVersionParamTest Failure.
 --

 Key: SOLR-6961
 URL: https://issues.apache.org/jira/browse/SOLR-6961
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller

 {noformat}
 Error Message
 Error from server at http://127.0.0.1:35638/kdj/d/checkStateVerCol: STATE 
 STALE: checkStateVerCol:23valid : false
 Stacktrace
 org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
 from server at http://127.0.0.1:35638/kdj/d/checkStateVerCol: STATE STALE: 
 checkStateVerCol:23valid : false
   at 
 __randomizedtesting.SeedInfo.seed([4C1D87B132BABE4A:CDFB09A945E5DE76]:0)
   at 
 org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:558)
   at 
 org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:214)
   at 
 org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:210)
   at 
 org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
   at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:302)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClientTest.stateVersionParamTest(CloudSolrClientTest.java:422)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClientTest.doTest(CloudSolrClientTest.java:126)
   at 
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6961) CloudSolrClientTest#stateVersionParamTest Failure.

2015-01-11 Thread Mark Miller (JIRA)
Mark Miller created SOLR-6961:
-

 Summary: CloudSolrClientTest#stateVersionParamTest Failure.
 Key: SOLR-6961
 URL: https://issues.apache.org/jira/browse/SOLR-6961
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller


{noformat}
Error Message

Error from server at http://127.0.0.1:35638/kdj/d/checkStateVerCol: STATE 
STALE: checkStateVerCol:23valid : false

Stacktrace

org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:35638/kdj/d/checkStateVerCol: STATE STALE: 
checkStateVerCol:23valid : false
at 
__randomizedtesting.SeedInfo.seed([4C1D87B132BABE4A:CDFB09A945E5DE76]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:558)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:214)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:210)
at 
org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:302)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.stateVersionParamTest(CloudSolrClientTest.java:422)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.doTest(CloudSolrClientTest.java:126)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6951) Add basic testing for test ObjectReleaseTracker.

2015-01-11 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-6951.
---
Resolution: Fixed

 Add basic testing for test ObjectReleaseTracker.
 

 Key: SOLR-6951
 URL: https://issues.apache.org/jira/browse/SOLR-6951
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, Trunk

 Attachments: SOLR-6951.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6915) SaslZkACLProvider and Kerberos Test Using MiniKdc

2015-01-11 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14273038#comment-14273038
 ] 

Mark Miller commented on SOLR-6915:
---

+1, looks great.

Comments:

Might be worth calling out the hadoop version update in its own issue.

Remember to make sure those new sha files go up with eol-style:native properties 
for precommit.

 SaslZkACLProvider and Kerberos Test Using MiniKdc
 -

 Key: SOLR-6915
 URL: https://issues.apache.org/jira/browse/SOLR-6915
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Attachments: SOLR-6915.patch


 We should provide a ZkACLProvider that requires SASL authentication.  This 
 provider will be useful for administration in a kerberos environment.   In 
 such an environment, the administrator wants solr to authenticate to 
 zookeeper using SASL, since this is the only way to authenticate with 
 zookeeper via kerberos.
 The authorization model in such a setup can vary, e.g. you can imagine a 
 scenario where solr owns (is the only writer of) the non-config znodes, but 
 some set of trusted users are allowed to modify the configs.  It's hard to 
 predict all the possibilities here, but one model that seems generally useful 
 is to have a model where solr itself owns all the znodes and all actions that 
 require changing the znodes are routed to Solr APIs.  That seems simple and 
 reasonable as a first version.
 As for testing, I noticed while working on SOLR-6625 that we don't really 
 have any infrastructure for testing kerberos integration in unit tests.  
 Internally, I've been testing using kerberos-enabled VM clusters, but this 
 isn't great since we won't notice any breakages until someone actually spins 
 up a VM.  So part of this JIRA is to provide some infrastructure for testing 
 kerberos at the unit test level (using Hadoop's MiniKdc, HADOOP-9848).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6324) Set finite default timeouts for select and update

2015-01-11 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-6324.
---
Resolution: Fixed

 Set finite default timeouts for select and update
 -

 Key: SOLR-6324
 URL: https://issues.apache.org/jira/browse/SOLR-6324
 Project: Solr
  Issue Type: Improvement
  Components: search, update
Reporter: Ramkumar Aiyengar
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, Trunk


 Currently {{HttpShardHandlerFactory}} and {{UpdateShardHandler}} default to 
 infinite timeouts for socket connection and read. This can lead to 
 undesirable behaviour, for example, if a machine crashes, then searches in 
 progress will wait forever for a result to come back and end up using threads 
 which will only get terminated at shutdown.
 We should have some finite default, however conservative it might be. These 
 parameters are already configurable, so for expert uses, they can be 
 increased if necessary anyway.
 Will attach a patch to set connection timeout to 60s and read timeout to 
 600s, but I am not too concerned about the actual value as long as there is 
 one.
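
 For reference, these parameters are already settable on the shard handler 
 factory; a sketch with the values proposed here (60s connect, 600s read; 
 exact placement, e.g. solr.xml vs. solrconfig.xml, depends on the setup):
 {code}
 <shardHandlerFactory name="shardHandlerFactory"
                      class="HttpShardHandlerFactory">
   <int name="connTimeout">60000</int>     <!-- 60s connection timeout -->
   <int name="socketTimeout">600000</int>  <!-- 600s read timeout -->
 </shardHandlerFactory>
 {code}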



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6324) Set finite default timeouts for select and update

2015-01-11 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14273040#comment-14273040
 ] 

Mark Miller commented on SOLR-6324:
---

I keep forgetting to use the message tag to close the pull request. I'll try 
and remember that in the future.

 Set finite default timeouts for select and update
 -

 Key: SOLR-6324
 URL: https://issues.apache.org/jira/browse/SOLR-6324
 Project: Solr
  Issue Type: Improvement
  Components: search, update
Reporter: Ramkumar Aiyengar
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, Trunk


 Currently {{HttpShardHandlerFactory}} and {{UpdateShardHandler}} default to 
 infinite timeouts for socket connection and read. This can lead to 
 undesirable behaviour, for example, if a machine crashes, then searches in 
 progress will wait forever for a result to come back and end up using threads 
 which will only get terminated at shutdown.
 We should have some finite default, however conservative it might be. These 
 parameters are already configurable, so for expert uses, they can be 
 increased if necessary anyway.
 Will attach a patch to set connection timeout to 60s and read timeout to 
 600s, but I am not too concerned about the actual value as long as there is 
 one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6959) SimplePostTool reports incorrect base url for PDFs

2015-01-11 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14273002#comment-14273002
 ] 

Erik Hatcher commented on SOLR-6959:


The URL to post files to is determined on a per-file basis; for a directory of 
files, .xml files go to /update and .pdf files go to 
/update/extract.   The logging message does qualify that it is the base URL.

Would you want the URL logged for *every* file?

 SimplePostTool reports incorrect base url for PDFs
 --

 Key: SOLR-6959
 URL: https://issues.apache.org/jira/browse/SOLR-6959
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.0
Reporter: Alexandre Rafalovitch
Assignee: Erik Hatcher
Priority: Minor
  Labels: tools

 {quote}
 $ java -Dc=techproducts -Dauto -Dcommit=no -jar post.jar solr-word.pdf
 SimplePostTool version 1.5
 Posting files to base url http://localhost:8983/solr/techproducts/update..
 {quote}
 This command will *not* post to */update*, it will post to */update/extract*. 
 This should be reported correspondingly.
 From the server log:
 {quote}
 127.0.0.1 -  -  \[11/Jan/2015:17:17:10 +] POST 
 /solr/techproducts/update/extract?resource.name=
 {quote}
 It would make sense for that message to be after the auto-mode determination 
 just before the actual POST.
 Also, what's with two dots after the url? If it is _etc_, it should probably 
 be three dots.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6959) SimplePostTool reports incorrect base url for PDFs

2015-01-11 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14273012#comment-14273012
 ] 

Alexandre Rafalovitch commented on SOLR-6959:
-

This is a very interesting and educational question. The fact that 
*/update* is a *base* is not well explained anywhere. I just ran this test:
{quote}
java -Durl=http://localhost:8983/solr/techproducts/update2 -Dauto -jar post.jar 
*
{quote}

And it did do *POST /solr/techproducts/update2/extract* for the PDF file. Not 
what I expected, somehow.

My main concern is reducing the magic through a better message. If somebody 
posted a file and something unexpected happened, they would troubleshoot it by 
following the _request handler_ and its parameters as one of the steps. But we 
don't tell them here which request handler it is. We give only one piece of 
information here that just happens to also be a valid _request handler_.

They could pick that information up from the log file, I guess, if they had 
access to it and knew what to look for. But it would be easier if the tool were 
clearer about it, since the user otherwise does not know exactly what happened.

What if we add something like this to the message:
{quote}
POSTing file books.csv (text/csv) to \[base]
POSTing file solr-word.pdf (application/pdf) to \[base]/extract
{quote}

Where the word \[base] is just that - the word.

This could also clarify a bit the situation with the fact that XML, CSV, and 
JSON go to the same handler, yet we have - slightly confusingly - request 
handlers for both CSV and JSON in the solrconfig.xml.

The help message for the tool needs to be improved as well. It says 
*solr-update-url* and nothing about base and suffixes.

 SimplePostTool reports incorrect base url for PDFs
 --

 Key: SOLR-6959
 URL: https://issues.apache.org/jira/browse/SOLR-6959
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.0
Reporter: Alexandre Rafalovitch
Assignee: Erik Hatcher
Priority: Minor
  Labels: tools

 {quote}
 $ java -Dc=techproducts -Dauto -Dcommit=no -jar post.jar solr-word.pdf
 SimplePostTool version 1.5
 Posting files to base url http://localhost:8983/solr/techproducts/update..
 {quote}
 This command will *not* post to */update*, it will post to */update/extract*. 
 This should be reported correspondingly.
 From the server log:
 {quote}
 127.0.0.1 -  -  \[11/Jan/2015:17:17:10 +] POST 
 /solr/techproducts/update/extract?resource.name=
 {quote}
 It would make sense for that message to be after the auto-mode determination 
 just before the actual POST.
 Also, what's with two dots after the url? If it is _etc_, it should probably 
 be three dots.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-6959) SimplePostTool reports incorrect base url for PDFs

2015-01-11 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher reassigned SOLR-6959:
--

Assignee: Erik Hatcher

 SimplePostTool reports incorrect base url for PDFs
 --

 Key: SOLR-6959
 URL: https://issues.apache.org/jira/browse/SOLR-6959
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.0
Reporter: Alexandre Rafalovitch
Assignee: Erik Hatcher
Priority: Minor
  Labels: tools

 {quote}
 $ java -Dc=techproducts -Dauto -Dcommit=no -jar post.jar solr-word.pdf
 SimplePostTool version 1.5
 Posting files to base url http://localhost:8983/solr/techproducts/update..
 {quote}
 This command will *not* post to */update*, it will post to */update/extract*. 
 This should be reported correspondingly.
 From the server log:
 {quote}
 127.0.0.1 -  -  \[11/Jan/2015:17:17:10 +] POST 
 /solr/techproducts/update/extract?resource.name=
 {quote}
 It would make sense for that message to be after the auto-mode determination 
 just before the actual POST.
 Also, what's with two dots after the url? If it is _etc_, it should probably 
 be three dots.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6959) SimplePostTool reports incorrect base url for PDFs

2015-01-11 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14272996#comment-14272996
 ] 

Alexandre Rafalovitch commented on SOLR-6959:
-

Also, at least the parameters passed with -Dparams are shown in that log 
message. The PDF code adds some parameters internally (like literal.id). Should 
they be shown as well? They are very long though (full file path).

 SimplePostTool reports incorrect base url for PDFs
 --

 Key: SOLR-6959
 URL: https://issues.apache.org/jira/browse/SOLR-6959
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.0
Reporter: Alexandre Rafalovitch
Assignee: Erik Hatcher
Priority: Minor
  Labels: tools

 {quote}
 $ java -Dc=techproducts -Dauto -Dcommit=no -jar post.jar solr-word.pdf
 SimplePostTool version 1.5
 Posting files to base url http://localhost:8983/solr/techproducts/update..
 {quote}
 This command will *not* post to */update*, it will post to */update/extract*. 
 This should be reported correspondingly.
 From the server log:
 {quote}
 127.0.0.1 -  -  \[11/Jan/2015:17:17:10 +] POST 
 /solr/techproducts/update/extract?resource.name=
 {quote}
 It would make sense for that message to be after the auto-mode determination 
 just before the actual POST.
 Also, what's with two dots after the url? If it is _etc_, it should probably 
 be three dots.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 1920 - Failure!

2015-01-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1920/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestReplicaProperties.testDistribSearch

Error Message:
No live SolrServers available to handle this 
request:[https://127.0.0.1:53632/kefru, https://127.0.0.1:53625/kefru, 
https://127.0.0.1:53628/kefru, https://127.0.0.1:53622/kefru, 
https://127.0.0.1:53618/kefru]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[https://127.0.0.1:53632/kefru, 
https://127.0.0.1:53625/kefru, https://127.0.0.1:53628/kefru, 
https://127.0.0.1:53622/kefru, https://127.0.0.1:53618/kefru]
at 
__randomizedtesting.SeedInfo.seed([DBBF5F5F6F6B69E0:5A59D147183409DC]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:332)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1015)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:793)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:736)
at 
org.apache.solr.cloud.ReplicaPropertiesBase.doPropertyAction(ReplicaPropertiesBase.java:51)
at 
org.apache.solr.cloud.TestReplicaProperties.clusterAssignPropertyTest(TestReplicaProperties.java:196)
at 
org.apache.solr.cloud.TestReplicaProperties.doTest(TestReplicaProperties.java:80)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.GeneratedMethodAccessor40.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Commented] (SOLR-6324) Set finite default timeouts for select and update

2015-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14273043#comment-14273043
 ] 

ASF GitHub Bot commented on SOLR-6324:
--

Github user andyetitmoves closed the pull request at:

https://github.com/apache/lucene-solr/pull/79


 Set finite default timeouts for select and update
 -

 Key: SOLR-6324
 URL: https://issues.apache.org/jira/browse/SOLR-6324
 Project: Solr
  Issue Type: Improvement
  Components: search, update
Reporter: Ramkumar Aiyengar
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, Trunk


 Currently {{HttpShardHandlerFactory}} and {{UpdateShardHandler}} default to 
 infinite timeouts for socket connection and read. This can lead to 
 undesirable behaviour, for example, if a machine crashes, then searches in 
 progress will wait forever for a result to come back and end up using threads 
 which will only get terminated at shutdown.
 We should have some finite default, however conservative it might be. These 
 parameters are already configurable, so for expert uses, they can be 
 increased if necessary anyway.
 Will attach a patch to set connection timeout to 60s and read timeout to 
 600s, but I am not too concerned about the actual value as long as there is 
 one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request: Set default connTimeout to 60s, soTimeou...

2015-01-11 Thread andyetitmoves
Github user andyetitmoves closed the pull request at:

https://github.com/apache/lucene-solr/pull/79


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 731 - Still Failing

2015-01-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/731/

6 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.testDistribSearch

Error Message:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:10101/_v/b/c8n_1x2_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:10101/_v/b/c8n_1x2_shard1_replica1
at 
__randomizedtesting.SeedInfo.seed([809913DDB988E27B:17F9DC5CED78247]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:581)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:890)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:793)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:736)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:480)
at 
org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:201)
at 
org.apache.solr.cloud.HttpPartitionTest.doTest(HttpPartitionTest.java:114)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1318: POMs out of sync

2015-01-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1318/

No tests ran.

Build Log:
[...truncated 39082 lines...]
  [mvn] [INFO] ------------------------------------------------------------------------
  [mvn] [INFO] ------------------------------------------------------------------------
  [mvn] [ERROR] COMPILATION ERROR : 
  [mvn] [INFO] ------------------------------------------------------------------------

[...truncated 696 lines...]
BUILD FAILED
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:542:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:204:
 The following error occurred while executing this line:
: Java returned: 1

Total time: 20 minutes 15 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Updated] (SOLR-6943) HdfsDirectoryFactory should fall back to system props for most of it's config if it is not found in solrconfig.xml.

2015-01-11 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-6943:
--
Attachment: SOLR-6943.patch

Patch adds testing.

 HdfsDirectoryFactory should fall back to system props for most of it's config 
 if it is not found in solrconfig.xml.
 ---

 Key: SOLR-6943
 URL: https://issues.apache.org/jira/browse/SOLR-6943
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 5.0, Trunk

 Attachments: SOLR-6943.patch, SOLR-6943.patch


 The new server and config sets have undone the work I did to make hdfs easy 
 out of the box. Rather than count on config for that, we should just allow 
 most of this config to be specified at the sys property level. This improves 
 the global cache config situation as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6581) Efficient DocValues support and numeric collapse field implementations for Collapse and Expand

2015-01-11 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-6581:
-
Summary: Efficient DocValues support and numeric collapse field 
implementations for Collapse and Expand  (was: Prepare CollapsingQParserPlugin 
and ExpandComponent for 5.0)

 Efficient DocValues support and numeric collapse field implementations for 
 Collapse and Expand
 --

 Key: SOLR-6581
 URL: https://issues.apache.org/jira/browse/SOLR-6581
 Project: Solr
  Issue Type: Bug
Reporter: Joel Bernstein
Assignee: Joel Bernstein
Priority: Minor
 Fix For: 5.0

 Attachments: SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, 
 SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, 
 SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, 
 SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, renames.diff


 *Background*
 The 4x implementation of the CollapsingQParserPlugin and the ExpandComponent 
 are optimized to work with a top level FieldCache. Top level FieldCaches have 
 a very fast docID to top-level ordinal lookup. Fast access to the top-level 
 ordinals allows for very high performance field collapsing on high 
 cardinality fields. 
 LUCENE-5666 unified the DocValues and FieldCache APIs so that the top level 
 FieldCache is no longer in regular use. Instead, all top level caches are 
 accessed through MultiDocValues. 
 There are some major advantages to using MultiDocValues rather than a top 
 level FieldCache. But there is one disadvantage: the lookup from docId to 
 top-level ordinals is slower using MultiDocValues.
 My testing has shown that *after optimizing* the CollapsingQParserPlugin code 
 to use MultiDocValues, the performance drop is around 100%. For some use 
 cases this performance drop is a blocker.
 *What About Faceting?*
 String faceting also relies on the top level ordinals. Is faceting 
 performance affected as well? My testing has shown that faceting performance 
 is affected much less than collapsing. 
 One possible reason for this may be that field collapsing is memory bound and 
 faceting is not, so the additional memory accesses needed for MultiDocValues 
 affect field collapsing much more than faceting.
 *Proposed Solution*
 The proposed solution is to have the default Collapse and Expand algorithm 
 use MultiDocValues, but to provide an option to use a top level FieldCache if 
 the performance of MultiDocValues is a blocker.
 The proposed mechanism for switching to the FieldCache would be a new hint 
 parameter. If the hint parameter is set to FAST_QUERY then the top-level 
 FieldCache would be used for both Collapse and Expand.
 Example syntax:
 {code}
 fq={!collapse field=x hint=FAST_QUERY}
 {code}
  
  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6581) Efficient DocValues support and numeric collapse field implementations for Collapse and Expand

2015-01-11 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-6581:
-
Description: 
*Background*

The 4x implementation of the CollapsingQParserPlugin and the ExpandComponent 
are optimized to work with a top level FieldCache. Top level FieldCaches have a 
very fast docID to top-level ordinal lookup. Fast access to the top-level 
ordinals allows for very high performance field collapsing on high cardinality 
fields. 

LUCENE-5666 unified the DocValues and FieldCache APIs so that the top level 
FieldCache is no longer in regular use. Instead, all top level caches are 
accessed through MultiDocValues. 

There are some major advantages to using MultiDocValues rather than a top 
level FieldCache. But there is one disadvantage: the lookup from docId to 
top-level ordinals is slower using MultiDocValues.

My testing has shown that *after optimizing* the CollapsingQParserPlugin code 
to use MultiDocValues, the performance drop is around 100%. For some use cases 
this performance drop is a blocker.

*What About Faceting?*

String faceting also relies on the top level ordinals. Is faceting performance 
affected as well? My testing has shown that faceting performance is affected 
much less than collapsing. 

One possible reason for this may be that field collapsing is memory bound and 
faceting is not, so the additional memory accesses needed for MultiDocValues 
affect field collapsing much more than faceting.

*Proposed Solution*

The proposed solution is to have the default Collapse and Expand algorithm use 
MultiDocValues, but to provide an option to use a top level FieldCache if the 
performance of MultiDocValues is a blocker.

The proposed mechanism for switching to the FieldCache would be a new hint 
parameter. If the hint parameter is set to FAST_QUERY then the top-level 
FieldCache would be used for both Collapse and Expand.

Example syntax:
{code}
fq={!collapse field=x hint=FAST_QUERY}
{code}
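For illustration only, here is how the proposed hint might be exercised from the command line; the base URL and collection name ("collection1") assume a default local install, and the field "x" follows the example syntax above:

```shell
# Build a collapse request that asks for the top-level FieldCache via the
# proposed FAST_QUERY hint. Collection name and field are illustrative;
# a real request would need the fq value URL-encoded.
BASE="http://localhost:8983/solr/collection1/select"
FQ='{!collapse field=x hint=FAST_QUERY}'
echo "${BASE}?q=*:*&fq=${FQ}"
```

Per the description, adding expand=true to the same request would let the ExpandComponent share the same top-level cache.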

*Numeric Collapse Fields*

This ticket also adds numeric collapse field implementations.

  was:
*Background*

The 4x implementation of the CollapsingQParserPlugin and the ExpandComponent 
are optimized to work with a top level FieldCache. Top level FieldCaches have a 
very fast docID to top-level ordinal lookup. Fast access to the top-level 
ordinals allows for very high performance field collapsing on high cardinality 
fields. 

LUCENE-5666 unified the DocValues and FieldCache APIs so that the top level 
FieldCache is no longer in regular use. Instead, all top level caches are 
accessed through MultiDocValues. 

There are some major advantages to using MultiDocValues rather than a top 
level FieldCache. But there is one disadvantage: the lookup from docId to 
top-level ordinals is slower using MultiDocValues.

My testing has shown that *after optimizing* the CollapsingQParserPlugin code 
to use MultiDocValues, the performance drop is around 100%. For some use cases 
this performance drop is a blocker.

*What About Faceting?*

String faceting also relies on the top level ordinals. Is faceting performance 
affected as well? My testing has shown that faceting performance is affected 
much less than collapsing. 

One possible reason for this may be that field collapsing is memory bound and 
faceting is not, so the additional memory accesses needed for MultiDocValues 
affect field collapsing much more than faceting.

*Proposed Solution*

The proposed solution is to have the default Collapse and Expand algorithm use 
MultiDocValues, but to provide an option to use a top level FieldCache if the 
performance of MultiDocValues is a blocker.

The proposed mechanism for switching to the FieldCache would be a new hint 
parameter. If the hint parameter is set to FAST_QUERY then the top-level 
FieldCache would be used for both Collapse and Expand.

Example syntax:
{code}
fq={!collapse field=x hint=FAST_QUERY}
{code}

 Efficient DocValues support and numeric collapse field implementations for 
 Collapse and Expand
 --

 Key: SOLR-6581
 URL: https://issues.apache.org/jira/browse/SOLR-6581
 Project: Solr
  Issue Type: Bug
Reporter: Joel Bernstein
Assignee: Joel Bernstein
Priority: Minor
 Fix For: 5.0

 Attachments: SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, 
 SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, 
 SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, 
 SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, renames.diff


 *Background*
 The 4x implementation of the CollapsingQParserPlugin and the ExpandComponent 
 are optimized to work with a top level FieldCache. Top level FieldCaches have 
 a very fast 

[jira] [Updated] (SOLR-6581) Efficient DocValues support and numeric collapse field implementations for Collapse and Expand

2015-01-11 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-6581:
-
Description: 
The 4x implementation of the CollapsingQParserPlugin and the ExpandComponent 
are optimized to work with a top level FieldCache. Top level FieldCaches have a 
very fast docID to top-level ordinal lookup. Fast access to the top-level 
ordinals allows for very high performance field collapsing on high cardinality 
fields. 

LUCENE-5666 unified the DocValues and FieldCache APIs so that the top level 
FieldCache is no longer in regular use. Instead, all top level caches are 
accessed through MultiDocValues. 

This ticket does the following:

1) Optimizes Collapse and Expand to use MultiDocValues and makes this the 
default approach when collapsing on String fields

2) Provides an option to use a top level FieldCache if the performance of 
MultiDocValues is a blocker. The mechanism for switching to the FieldCache is a 
new hint parameter. If the hint parameter is set to FAST_QUERY, the 
top-level FieldCache will be used for both Collapse and Expand.

Example syntax:
{code}
fq={!collapse field=x hint=FAST_QUERY}
{code}

3) Adds numeric collapse field implementations.

  was:
*Background*

The 4x implementation of the CollapsingQParserPlugin and the ExpandComponent 
are optimized to work with a top level FieldCache. Top level FieldCaches have a 
very fast docID to top-level ordinal lookup. Fast access to the top-level 
ordinals allows for very high performance field collapsing on high cardinality 
fields. 

LUCENE-5666 unified the DocValues and FieldCache APIs so that the top level 
FieldCache is no longer in regular use. Instead, all top level caches are 
accessed through MultiDocValues. 

There are some major advantages to using MultiDocValues rather than a top 
level FieldCache. But there is one disadvantage: the lookup from docId to 
top-level ordinals is slower using MultiDocValues.

My testing has shown that *after optimizing* the CollapsingQParserPlugin code 
to use MultiDocValues, the performance drop is around 100%. For some use cases 
this performance drop is a blocker.

*What About Faceting?*

String faceting also relies on the top level ordinals. Is faceting performance 
affected as well? My testing has shown that faceting performance is affected 
much less than collapsing. 

One possible reason for this may be that field collapsing is memory bound and 
faceting is not, so the additional memory accesses needed for MultiDocValues 
affect field collapsing much more than faceting.

*Proposed Solution*

The proposed solution is to have the default Collapse and Expand algorithm use 
MultiDocValues, but to provide an option to use a top level FieldCache if the 
performance of MultiDocValues is a blocker.

The proposed mechanism for switching to the FieldCache would be a new hint 
parameter. If the hint parameter is set to FAST_QUERY then the top-level 
FieldCache would be used for both Collapse and Expand.

Example syntax:
{code}
fq={!collapse field=x hint=FAST_QUERY}
{code}

*Numeric Collapse Fields*

This ticket also adds numeric collapse field implementations.

 Efficient DocValues support and numeric collapse field implementations for 
 Collapse and Expand
 --

 Key: SOLR-6581
 URL: https://issues.apache.org/jira/browse/SOLR-6581
 Project: Solr
  Issue Type: Bug
Reporter: Joel Bernstein
Assignee: Joel Bernstein
Priority: Minor
 Fix For: 5.0

 Attachments: SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, 
 SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, 
 SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, 
 SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, renames.diff


 The 4x implementation of the CollapsingQParserPlugin and the ExpandComponent 
 are optimized to work with a top level FieldCache. Top level FieldCaches have 
 a very fast docID to top-level ordinal lookup. Fast access to the top-level 
 ordinals allows for very high performance field collapsing on high 
 cardinality fields. 
 LUCENE-5666 unified the DocValues and FieldCache APIs so that the top level 
 FieldCache is no longer in regular use. Instead, all top level caches are 
 accessed through MultiDocValues. 
 This ticket does the following:
 1) Optimizes Collapse and Expand to use MultiDocValues and makes this the 
 default approach when collapsing on String fields
 2) Provides an option to use a top level FieldCache if the performance of 
 MultiDocValues is a blocker. The mechanism for switching to the FieldCache is 
 a new hint parameter. If the hint parameter is set to FAST_QUERY then the 
 top-level FieldCache would be used for both 

[jira] [Updated] (SOLR-6845) figure out why suggester causes slow startup - even when not used

2015-01-11 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-6845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-6845:

Attachment: SOLR-6845.patch

In this patch, I added a “buildOnStartup” flag for the suggester that defaults 
to false. If it is not set, the suggester will load a dictionary if one exists, 
but won’t create it if it doesn’t.

“buildOnStartup” will also build the suggester on a core reload. Users should 
be aware that in both “SolrCloud mode” and a “master-slave” setup, Solr may 
trigger a core reload internally, and if “buildOnStartup” is set, the reload 
will build the suggester (if it's not being stored). Unlike the current code, a 
core reload won’t trigger a “buildOnCommit” event. 

A side note: even if “useColdSearcher” is set to “false”, on a core reload the 
suggester may be built after the first searcher is registered, because it is 
built using the second searcher created during the reload process.
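As a sketch only, the resulting suggester configuration in solrconfig.xml might look like the fragment below; the component name, dictionary name, lookup implementation, and field are hypothetical, and only the buildOnStartup/buildOnCommit flags come from this discussion:

```shell
# Print a hypothetical solrconfig.xml fragment illustrating the new flag.
# Everything except buildOnStartup/buildOnCommit is illustrative.
CFG=$(cat <<'EOF'
<searchComponent name="suggest" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">mySuggester</str>
    <str name="lookupImpl">FuzzyLookupFactory</str>
    <str name="field">title</str>
    <!-- load an existing dictionary at startup/reload, but do not build it -->
    <str name="buildOnStartup">false</str>
    <str name="buildOnCommit">false</str>
  </lst>
</searchComponent>
EOF
)
printf '%s\n' "$CFG"
```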



 figure out why suggester causes slow startup - even when not used
 -

 Key: SOLR-6845
 URL: https://issues.apache.org/jira/browse/SOLR-6845
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
 Attachments: SOLR-6845.patch, SOLR-6845.patch


 SOLR-6679 was filed to track the investigation into the following problem...
 {panel}
 The stock solrconfig provides a bad experience with a large index... start up 
 Solr and it will spin at 100% CPU for minutes, unresponsive, while it 
 apparently builds a suggester index.
 ...
 This is what I did:
 1) indexed 10M very small docs (only takes a few minutes).
 2) shut down Solr
 3) start up Solr and watch it be unresponsive for over 4 minutes!
 I didn't even use any of the fields specified in the suggester config and I 
 never called the suggest request handler.
 {panel}
 ...but ultimately focused on removing/disabling the suggester from the sample 
 configs.
 Opening this new issue to focus on actually trying to identify the root 
 problem & fix it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.8.0) - Build # 1878 - Failure!

2015-01-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1878/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDistribSearch

Error Message:
Could not successfully add blob after 150 attempts. Expecting 2 items. time 
elapsed 16.276  output  for url is 
{"responseHeader":{"status":0,"QTime":0},"response":{"numFound":1,"start":0,"docs":[{"id":"test/1","md5":"7417c0f7fa8af3953395492339d9aaa7","blobName":"test","version":1,"timestamp":"2015-01-11T21:15:05.458Z","size":5323}]}}

Stack Trace:
java.lang.AssertionError: Could not successfully add blob after 150 attempts. 
Expecting 2 items. time elapsed 16.276  output  for url is {
  "responseHeader":{
    "status":0,
    "QTime":0},
  "response":{
    "numFound":1,
    "start":0,
    "docs":[{
        "id":"test/1",
        "md5":"7417c0f7fa8af3953395492339d9aaa7",
        "blobName":"test",
        "version":1,
        "timestamp":"2015-01-11T21:15:05.458Z",
        "size":5323}]}}
at 
__randomizedtesting.SeedInfo.seed([475BDA1833DCDE7:859333B9F462ADDB]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.handler.TestBlobHandler.postAndCheck(TestBlobHandler.java:150)
at 
org.apache.solr.core.TestDynamicLoading.dynamicLoading(TestDynamicLoading.java:114)
at 
org.apache.solr.core.TestDynamicLoading.doTest(TestDynamicLoading.java:70)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-6959) SimplePostTool reports incorrect base url for PDFs

2015-01-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273114#comment-14273114
 ] 

ASF subversion and git services commented on SOLR-6959:
---

Commit 1651016 from [~ehatcher] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1651016 ]

SOLR-6959: Elaborate on URLs being POSTed to (merged from trunk r1651013)

 SimplePostTool reports incorrect base url for PDFs
 --

 Key: SOLR-6959
 URL: https://issues.apache.org/jira/browse/SOLR-6959
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.0
Reporter: Alexandre Rafalovitch
Assignee: Erik Hatcher
Priority: Minor
  Labels: tools

 {quote}
 $ java -Dc=techproducts -Dauto -Dcommit=no -jar post.jar solr-word.pdf
 SimplePostTool version 1.5
 Posting files to base url http://localhost:8983/solr/techproducts/update..
 {quote}
 This command will *not* post to */update*, it will post to */update/extract*. 
 This should be reported correspondingly.
 From the server log:
 {quote}
 127.0.0.1 -  -  \[11/Jan/2015:17:17:10 +] POST 
 /solr/techproducts/update/extract?resource.name=
 {quote}
 It would make sense for that message to be after the auto-mode determination 
 just before the actual POST.
 Also, what's with two dots after the url? If it is _etc_, it should probably 
 be three dots.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6962) bin/solr stop -a should complain about missing parameter

2015-01-11 Thread Alexandre Rafalovitch (JIRA)
Alexandre Rafalovitch created SOLR-6962:
---

 Summary: bin/solr stop -a should complain about missing parameter
 Key: SOLR-6962
 URL: https://issues.apache.org/jira/browse/SOLR-6962
 Project: Solr
  Issue Type: Improvement
  Components: scripts and tools
Affects Versions: 5.0
Reporter: Alexandre Rafalovitch
Priority: Minor


*bin/solr* has a *-a* option that expects a second parameter. If one is not 
provided, it hangs. It should complain and exit just like the *-e* option does.

The most common time I hit this is when I try to do *bin/solr stop \-all* and 
instead just type *bin/solr stop \-a*, as I am more used to giving full-name 
options with a double-dash prefix (Unix convention, I guess).
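A minimal sketch of the kind of argument check *bin/solr* could add; this is not the actual script, just an illustration of the requested behavior:

```shell
# Hypothetical validation: fail fast when -a is given without a value
# instead of hanging while waiting for input.
check_a_opt() {
  # $1 is the option name, $2 its value
  if [ -z "$2" ]; then
    echo "ERROR: $1 requires an argument" >&2
    return 1
  fi
  return 0
}

check_a_opt -a "-Dsolr.someOpt=1" && echo "value accepted"
check_a_opt -a "" || echo "missing argument reported"
```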



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6580) facet(.query) responses duplicated

2015-01-11 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273142#comment-14273142
 ] 

Alexandre Rafalovitch commented on SOLR-6580:
-

Probably SOLR-6780

 facet(.query) responses duplicated
 --

 Key: SOLR-6580
 URL: https://issues.apache.org/jira/browse/SOLR-6580
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10, 4.10.1
Reporter: Erik Hatcher
 Fix For: 5.0, Trunk


 I uncommented the invariants of the standard request handler commented out in 
 the default example solrconfig.xml, restarted Solr, and made this request 
 {{http://localhost:8983/solr/collection1/select?q=*:*&facet=on&facet.query=foo&rows=0}}
  and got duplicate responses back for the invariant price range facet.query's 
 (but no facet.query response for the query string provided one, as expected):
 {code}
 <lst name="facet_queries">
   <int name="price:[* TO 500]">14</int>
   <int name="price:[500 TO *]">2</int>
   <int name="price:[* TO 500]">14</int>
   <int name="price:[500 TO *]">2</int>
 </lst>
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-6702) Add facet.interval support to /browse GUI

2015-01-11 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher reassigned SOLR-6702:
--

Assignee: Erik Hatcher

 Add facet.interval support to /browse GUI
 -

 Key: SOLR-6702
 URL: https://issues.apache.org/jira/browse/SOLR-6702
 Project: Solr
  Issue Type: Task
  Components: contrib - Velocity
Affects Versions: 4.10.2
Reporter: Jan Høydahl
Assignee: Erik Hatcher
  Labels: velocity
 Fix For: Trunk


 Now that we have the new [Interval 
 faceting|https://cwiki.apache.org/confluence/display/solr/Faceting#Faceting-IntervalFaceting]
  it should show in Velocity /browse GUI
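For reference, a hedged sketch of what an interval-facet request behind /browse might look like; the field name and intervals are hypothetical, and the brackets would need URL-encoding in a real call:

```shell
# Hypothetical interval-faceting request on a numeric "price" field.
BASE="http://localhost:8983/solr/collection1/select"
PARAMS='q=*:*&facet=on&facet.interval=price&facet.interval.set=[0,100)&facet.interval.set=[100,*]'
echo "${BASE}?${PARAMS}"
```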



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6702) Add facet.interval support to /browse GUI

2015-01-11 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-6702:
---
Fix Version/s: (was: 5.0)

 Add facet.interval support to /browse GUI
 -

 Key: SOLR-6702
 URL: https://issues.apache.org/jira/browse/SOLR-6702
 Project: Solr
  Issue Type: Task
  Components: contrib - Velocity
Affects Versions: 4.10.2
Reporter: Jan Høydahl
Assignee: Erik Hatcher
  Labels: velocity
 Fix For: Trunk


 Now that we have the new [Interval 
 faceting|https://cwiki.apache.org/confluence/display/solr/Faceting#Faceting-IntervalFaceting]
  it should show in Velocity /browse GUI



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6959) SimplePostTool reports incorrect base url for PDFs

2015-01-11 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273176#comment-14273176
 ] 

Alexandre Rafalovitch commented on SOLR-6959:
-

This output is in my book's current draft. You bet I don't want to explain why 
two different invocations do different things. Unless they actually do 
different things. :-)

 SimplePostTool reports incorrect base url for PDFs
 --

 Key: SOLR-6959
 URL: https://issues.apache.org/jira/browse/SOLR-6959
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 4.10.3
Reporter: Alexandre Rafalovitch
Assignee: Erik Hatcher
Priority: Minor
  Labels: tools
 Fix For: 5.0, Trunk


 {quote}
 $ java -Dc=techproducts -Dauto -Dcommit=no -jar post.jar solr-word.pdf
 SimplePostTool version 1.5
 Posting files to base url http://localhost:8983/solr/techproducts/update..
 {quote}
 This command will *not* post to */update*, it will post to */update/extract*. 
 This should be reported correspondingly.
 From the server log:
 {quote}
 127.0.0.1 -  -  \[11/Jan/2015:17:17:10 +] POST 
 /solr/techproducts/update/extract?resource.name=
 {quote}
 It would make sense for that message to be after the auto-mode determination 
 just before the actual POST.
 Also, what's with two dots after the url? If it is _etc_, it should probably 
 be three dots.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2466 - Still Failing

2015-01-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2466/

4 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.testDistribSearch

Error Message:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:36751/_c/z/c8n_1x2_shard1_replica2

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:36751/_c/z/c8n_1x2_shard1_replica2
at 
__randomizedtesting.SeedInfo.seed([90099A4475D0604B:11EF145C028F0077]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:581)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:890)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:793)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:736)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:480)
at 
org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:201)
at 
org.apache.solr.cloud.HttpPartitionTest.doTest(HttpPartitionTest.java:114)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
[jira] [Updated] (SOLR-6952) Re-using data-driven configsets by default is not helpful

2015-01-11 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-6952:
---
Summary: Re-using data-driven configsets by default is not helpful  (was: 
Copying data-driven configsets by default is not helpful)

 Re-using data-driven configsets by default is not helpful
 -

 Key: SOLR-6952
 URL: https://issues.apache.org/jira/browse/SOLR-6952
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Affects Versions: 5.0
Reporter: Grant Ingersoll
Assignee: Timothy Potter
 Fix For: 5.0


 When creating collections (I'm using the bin/solr scripts), I don't think we 
 should automatically copy configsets, especially when running in getting 
 started mode or data driven mode.
 I did the following:
 {code}
 bin/solr create_collection -n foo
 bin/post foo some_data.csv
 {code}
 I then created a second collection with the intention of sending in the same 
 data, but this time run through a python script that changed a value from an 
 int to a string (since it was an enumerated type) and was surprised to see 
 that I got:
 {quote}
 Caused by: java.lang.NumberFormatException: For input string: "NA"
   at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
   at java.lang.Long.parseLong(Long.java:441)
 {quote}
 for my new version of the data that passes in a string instead of an int, as 
 this new collection had only seen strings for that field.
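The failure described above comes from data-driven schema guessing: the first collection's data fixed the field type, and the re-used configset then rejected a string. A minimal sketch of that mechanism (illustrative only, not Solr's actual code; the field name "status" and sample values are hypothetical):

```python
def guess_field_type(sample_value):
    """Mimic data-driven schema guessing: the first value seen fixes the type."""
    try:
        int(sample_value)
        return "long"
    except ValueError:
        return "string"

class Field:
    def __init__(self, name, first_value):
        self.name = name
        self.type = guess_field_type(first_value)  # type is now locked in

    def index(self, value):
        if self.type == "long":
            # Raises ValueError for non-numeric input, analogous to the
            # NumberFormatException seen when posting "NA" to a long field.
            return int(value)
        return value

status = Field("status", "42")   # first collection only ever saw ints
status.index("7")                # still fine
try:
    status.index("NA")           # second data set sends an enumerated string
except ValueError as e:
    print("rejected:", e)
```

Because the second collection re-used the first collection's guessed schema rather than starting fresh, it never had the chance to guess "string" for that field.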



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6952) Re-using data-driven configsets by default is not helpful

2015-01-11 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-6952:
---
Description: 
When creating collections (I'm using the bin/solr scripts), I think we should 
automatically copy configsets, especially when running in getting started 
mode or data driven mode.

I did the following:
{code}
bin/solr create_collection -n foo
bin/post foo some_data.csv
{code}

I then created a second collection with the intention of sending in the same 
data, but this time run through a python script that changed a value from an 
int to a string (since it was an enumerated type) and was surprised to see that 
I got:
{quote}
Caused by: java.lang.NumberFormatException: For input string: "NA"
at 
java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Long.parseLong(Long.java:441)
{quote}

for my new version of the data that passes in a string instead of an int, as 
this new collection had only seen strings for that field.

  was:
When creating collections (I'm using the bin/solr scripts), I don't think we 
should automatically copy configsets, especially when running in getting 
started mode or data driven mode.

I did the following:
{code}
bin/solr create_collection -n foo
bin/post foo some_data.csv
{code}

I then created a second collection with the intention of sending in the same 
data, but this time run through a python script that changed a value from an 
int to a string (since it was an enumerated type) and was surprised to see that 
I got:
{quote}
Caused by: java.lang.NumberFormatException: For input string: "NA"
at 
java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Long.parseLong(Long.java:441)
{quote}

for my new version of the data that passes in a string instead of an int, as 
this new collection had only seen strings for that field.


 Re-using data-driven configsets by default is not helpful
 -

 Key: SOLR-6952
 URL: https://issues.apache.org/jira/browse/SOLR-6952
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Affects Versions: 5.0
Reporter: Grant Ingersoll
Assignee: Timothy Potter
 Fix For: 5.0


 When creating collections (I'm using the bin/solr scripts), I think we should 
 automatically copy configsets, especially when running in getting started 
 mode or data driven mode.
 I did the following:
 {code}
 bin/solr create_collection -n foo
 bin/post foo some_data.csv
 {code}
 I then created a second collection with the intention of sending in the same 
 data, but this time run through a python script that changed a value from an 
 int to a string (since it was an enumerated type) and was surprised to see 
 that I got:
 {quote}
 Caused by: java.lang.NumberFormatException: For input string: "NA"
   at 
 java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
   at java.lang.Long.parseLong(Long.java:441)
 {quote}
 for my new version of the data that passes in a string instead of an int, as 
 this new collection had only seen strings for that field.






[jira] [Updated] (SOLR-6963) Upgrade hadoop version to 2.3

2015-01-11 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated SOLR-6963:
-
Attachment: SOLR-6963.patch

 Upgrade hadoop version to 2.3
 -

 Key: SOLR-6963
 URL: https://issues.apache.org/jira/browse/SOLR-6963
 Project: Solr
  Issue Type: Task
  Components: Hadoop Integration
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Attachments: SOLR-6963.patch


 See SOLR-6915; we need at least hadoop version 2.3 to be able to use the 
 MiniKdc.






[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2467 - Still Failing

2015-01-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2467/

5 tests failed.
REGRESSION:  org.apache.solr.util.SimplePostToolTest.testTypeSupported

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([FEF4643E9D010F3E:6A4052DD8A3EAE51]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.util.SimplePostToolTest.testTypeSupported(SimplePostToolTest.java:116)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.cloud.HttpPartitionTest.testDistribSearch

Error Message:
org.apache.http.NoHttpResponseException: The target server failed to respond

Stack Trace:

[jira] [Commented] (SOLR-6957) Improve nightly test run stability.

2015-01-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273087#comment-14273087
 ] 

ASF subversion and git services commented on SOLR-6957:
---

Commit 1650987 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1650987 ]

SOLR-6957: Raise timeout.

 Improve nightly test run stability.
 ---

 Key: SOLR-6957
 URL: https://issues.apache.org/jira/browse/SOLR-6957
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller
 Fix For: 5.0, Trunk









[jira] [Commented] (SOLR-6908) SimplePostTool's help message is incorrect -Durl parameter

2015-01-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273109#comment-14273109
 ] 

ASF subversion and git services commented on SOLR-6908:
---

Commit 1651013 from [~ehatcher] in branch 'dev/trunk'
[ https://svn.apache.org/r1651013 ]

SOLR-6908: Remove last post.sh references, and fix a post.jar usage example

 SimplePostTool's help message is incorrect -Durl parameter
 --

 Key: SOLR-6908
 URL: https://issues.apache.org/jira/browse/SOLR-6908
 Project: Solr
  Issue Type: Bug
  Components: documentation
Affects Versions: 5.0
Reporter: Alexandre Rafalovitch
Assignee: Erik Hatcher
Priority: Minor
 Fix For: 5.0, Trunk


 {quote}
 java -jar post.jar -h
 ...
 java -Durl=http://localhost:8983/solr/update/extract -Dparams=literal.id=a 
 -Dtype=application/pdf -jar post.jar a.pdf
 ...
 {quote}
 The example is the only one for -Durl and is not correct as it is missing the 
 collection name. Also, even though this is an example, *a.pdf* does not 
 exist, but we do have *solr-word.pdf* now.
 So, this should probably say:
 {quote}
 java -Durl=http://localhost:8983/solr/techproducts/update/extract 
 -Dparams=literal.id=pdf1 -Dtype=application/pdf -jar post.jar solr-word.pdf
 {quote}
 Also, it is worth mentioning (if true) that specifying *-Durl* overrides 
 *-Dc*.






[jira] [Resolved] (SOLR-6580) facet(.query) responses duplicated

2015-01-11 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher resolved SOLR-6580.

Resolution: Not a Problem

This was fixed (not sure which JIRA that was under, but I believe it was 
[~hossman_luc...@fucit.org] who took care of it).

 facet(.query) responses duplicated
 --

 Key: SOLR-6580
 URL: https://issues.apache.org/jira/browse/SOLR-6580
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10, 4.10.1
Reporter: Erik Hatcher
 Fix For: 5.0, Trunk


 I uncommented the invariants of the standard request handler commented out in 
 the default example solrconfig.xml, restarted Solr, and made this request 
 {{http://localhost:8983/solr/collection1/select?q=*:*&facet=on&facet.query=foo&rows=0}}
  and got duplicate responses back for the invariant price range facet.query's 
 (but no facet.query response for the query string provided one, as expected):
 {code}
 <lst name="facet_queries">
   <int name="price:[* TO 500]">14</int>
   <int name="price:[500 TO *]">2</int>
   <int name="price:[* TO 500]">14</int>
   <int name="price:[500 TO *]">2</int>
 </lst>
 {code}






[jira] [Commented] (SOLR-6581) Efficient DocValues support and numeric collapse field implementations for Collapse and Expand

2015-01-11 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273182#comment-14273182
 ] 

Joel Bernstein commented on SOLR-6581:
--

top_fc sounds good to me. I'll make the change.

 Efficient DocValues support and numeric collapse field implementations for 
 Collapse and Expand
 --

 Key: SOLR-6581
 URL: https://issues.apache.org/jira/browse/SOLR-6581
 Project: Solr
  Issue Type: Bug
Reporter: Joel Bernstein
Assignee: Joel Bernstein
Priority: Minor
 Fix For: 5.0

 Attachments: SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, 
 SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, 
 SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, 
 SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, 
 renames.diff


 The 4x implementation of the CollapsingQParserPlugin and the ExpandComponent 
 are optimized to work with a top level FieldCache. Top level FieldCaches have 
 a very fast docID to top-level ordinal lookup. Fast access to the top-level 
 ordinals allows for very high performance field collapsing on high 
 cardinality fields. 
 LUCENE-5666 unified the DocValues and FieldCache api's so that the top level 
 FieldCache is no longer in regular use. Instead all top level caches are 
 accessed through MultiDocValues. 
 This ticket does the following:
 1) Optimizes Collapse and Expand to use MultiDocValues and makes this the 
 default approach when collapsing on String fields
 2) Provides an option to use a top level FieldCache if the performance of 
 MultiDocValues is a blocker. The mechanism for switching to the FieldCache is 
 a new hint parameter. If the hint parameter is set to FAST_QUERY then the 
 top-level FieldCache would be used for both Collapse and Expand.
 Example syntax:
 {code}
 fq={!collapse field=x hint=FAST_QUERY}
 {code}
 3)  Adds numeric collapse field implementations.
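The performance argument above rests on a fast docID-to-ordinal lookup: collapsing only needs, for each document, its group ordinal and a running "best so far" per group. A hedged sketch of that core loop (illustrative only, not Lucene's implementation; the docID, ordinal, and score values are made up):

```python
def collapse(docs, ord_of, score_of):
    """Keep one winning document per group.

    docs:     iterable of docIDs
    ord_of:   mapping docID -> top-level group ordinal (the fast lookup
              the top-level FieldCache provides)
    score_of: mapping docID -> relevance score
    """
    best = {}
    for doc in docs:
        g = ord_of[doc]
        # Replace the group's current winner if this doc scores higher.
        if g not in best or score_of[doc] > score_of[best[g]]:
            best[g] = doc
    return sorted(best.values())

# Five docs in three groups; the highest-scoring doc of each group survives.
ords = {0: 0, 1: 1, 2: 0, 3: 1, 4: 2}
scores = {0: 1.0, 1: 0.5, 2: 2.0, 3: 0.7, 4: 0.1}
print(collapse(range(5), ords, scores))
```

With per-segment DocValues the `ord_of` step requires an extra segment-ordinal to global-ordinal translation per hit, which is the overhead the `hint` parameter lets users opt out of.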
  






[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b44) - Build # 11577 - Failure!

2015-01-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11577/
Java: 64bit/jdk1.9.0-ea-b44 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.util.SimplePostToolTest.testTypeSupported

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([1E71DDA497742142:8AC5EB47804B802D]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.util.SimplePostToolTest.testTypeSupported(SimplePostToolTest.java:116)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 9322 lines...]
   [junit4] Suite: org.apache.solr.util.SimplePostToolTest
   [junit4]   2 Creating dataDir: 

[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2465 - Still Failing

2015-01-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2465/

4 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.testDistribSearch

Error Message:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:64223/jf_wm/z/c8n_1x2_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:64223/jf_wm/z/c8n_1x2_shard1_replica1
at 
__randomizedtesting.SeedInfo.seed([2115592817C900A6:A0F3D7306096609A]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:581)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:890)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:793)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:736)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:480)
at 
org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:201)
at 
org.apache.solr.cloud.HttpPartitionTest.doTest(HttpPartitionTest.java:114)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Resolved] (SOLR-6434) Solr startup script improvements

2015-01-11 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher resolved SOLR-6434.

Resolution: Fixed
  Assignee: Erik Hatcher

Typo fixed. Let's call this issue resolved: the log situation was resolved already, 
and the duplication issue doesn't really deserve a change just yet, at second glance.

 Solr startup script improvements
 

 Key: SOLR-6434
 URL: https://issues.apache.org/jira/browse/SOLR-6434
 Project: Solr
  Issue Type: Improvement
Affects Versions: 4.10
Reporter: Erik Hatcher
Assignee: Erik Hatcher
Priority: Critical
 Fix For: 5.0, Trunk


 The startup scripts are new and evolving.  This issue is to capture a handful 
 of minor improvements.






[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.7.0_72) - Build # 4306 - Still Failing!

2015-01-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4306/
Java: 32bit/jdk1.7.0_72 -client -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.handler.TestSolrConfigHandlerCloud.testDistribSearch

Error Message:
Could not get expected value  A val for path [params, a] full output null

Stack Trace:
java.lang.AssertionError: Could not get expected value  A val for path [params, 
a] full output null
at 
__randomizedtesting.SeedInfo.seed([13831AF3563D6611:926594EB2162062D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:259)
at 
org.apache.solr.handler.TestSolrConfigHandlerCloud.testReqParams(TestSolrConfigHandlerCloud.java:137)
at 
org.apache.solr.handler.TestSolrConfigHandlerCloud.doTest(TestSolrConfigHandlerCloud.java:70)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
[jira] [Resolved] (SOLR-6959) SimplePostTool reports incorrect base url for PDFs

2015-01-11 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher resolved SOLR-6959.

Resolution: Fixed

I improved the output as [~arafalov] proposed.

 SimplePostTool reports incorrect base url for PDFs
 --

 Key: SOLR-6959
 URL: https://issues.apache.org/jira/browse/SOLR-6959
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.0
Reporter: Alexandre Rafalovitch
Assignee: Erik Hatcher
Priority: Minor
  Labels: tools

 {quote}
 $ java -Dc=techproducts -Dauto -Dcommit=no -jar post.jar solr-word.pdf
 SimplePostTool version 1.5
 Posting files to base url http://localhost:8983/solr/techproducts/update..
 {quote}
 This command will *not* post to */update*, it will post to */update/extract*. 
 This should be reported correspondingly.
 From the server log:
 {quote}
 127.0.0.1 -  -  \[11/Jan/2015:17:17:10 +] POST 
 /solr/techproducts/update/extract?resource.name=
 {quote}
 It would make sense for that message to be after the auto-mode determination 
 just before the actual POST.
 Also, what's with two dots after the url? If it is _etc_, it should probably 
 be three dots.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6434) Solr startup script improvements

2015-01-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14273126#comment-14273126
 ] 

ASF subversion and git services commented on SOLR-6434:
---

Commit 1651019 from [~ehatcher] in branch 'dev/trunk'
[ https://svn.apache.org/r1651019 ]

SOLR-6434: Fix typo

 Solr startup script improvements
 

 Key: SOLR-6434
 URL: https://issues.apache.org/jira/browse/SOLR-6434
 Project: Solr
  Issue Type: Improvement
Affects Versions: 4.10
Reporter: Erik Hatcher
Priority: Critical
 Fix For: 5.0, Trunk


 The startup scripts are new and evolving.  This issue is to capture a handful 
 of minor improvements.






[jira] [Commented] (SOLR-6434) Solr startup script improvements

2015-01-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14273127#comment-14273127
 ] 

ASF subversion and git services commented on SOLR-6434:
---

Commit 1651020 from [~ehatcher] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1651020 ]

SOLR-6434: Fix typo (merged from trunk r1651019)

 Solr startup script improvements
 

 Key: SOLR-6434
 URL: https://issues.apache.org/jira/browse/SOLR-6434
 Project: Solr
  Issue Type: Improvement
Affects Versions: 4.10
Reporter: Erik Hatcher
Priority: Critical
 Fix For: 5.0, Trunk


 The startup scripts are new and evolving.  This issue is to capture a handful 
 of minor improvements.






[jira] [Commented] (SOLR-6959) SimplePostTool reports incorrect base url for PDFs

2015-01-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14273171#comment-14273171
 ] 

ASF subversion and git services commented on SOLR-6959:
---

Commit 1651028 from [~ehatcher] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1651028 ]

SOLR-6959: standardize XML content-type (merged from trunk r1651027)

 SimplePostTool reports incorrect base url for PDFs
 --

 Key: SOLR-6959
 URL: https://issues.apache.org/jira/browse/SOLR-6959
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 4.10.3
Reporter: Alexandre Rafalovitch
Assignee: Erik Hatcher
Priority: Minor
  Labels: tools
 Fix For: 5.0, Trunk


 {quote}
 $ java -Dc=techproducts -Dauto -Dcommit=no -jar post.jar solr-word.pdf
 SimplePostTool version 1.5
 Posting files to base url http://localhost:8983/solr/techproducts/update..
 {quote}
 This command will *not* post to */update*, it will post to */update/extract*. 
 This should be reported correspondingly.
 From the server log:
 {quote}
 127.0.0.1 -  -  \[11/Jan/2015:17:17:10 +] POST 
 /solr/techproducts/update/extract?resource.name=
 {quote}
 It would make sense for that message to be after the auto-mode determination 
 just before the actual POST.
 Also, what's with two dots after the url? If it is _etc_, it should probably 
 be three dots.






[jira] [Commented] (SOLR-6959) SimplePostTool reports incorrect base url for PDFs

2015-01-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14273170#comment-14273170
 ] 

ASF subversion and git services commented on SOLR-6959:
---

Commit 1651027 from [~ehatcher] in branch 'dev/trunk'
[ https://svn.apache.org/r1651027 ]

SOLR-6959: standardize XML content-type

 SimplePostTool reports incorrect base url for PDFs
 --

 Key: SOLR-6959
 URL: https://issues.apache.org/jira/browse/SOLR-6959
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 4.10.3
Reporter: Alexandre Rafalovitch
Assignee: Erik Hatcher
Priority: Minor
  Labels: tools
 Fix For: 5.0, Trunk


 {quote}
 $ java -Dc=techproducts -Dauto -Dcommit=no -jar post.jar solr-word.pdf
 SimplePostTool version 1.5
 Posting files to base url http://localhost:8983/solr/techproducts/update..
 {quote}
 This command will *not* post to */update*, it will post to */update/extract*. 
 This should be reported correspondingly.
 From the server log:
 {quote}
 127.0.0.1 -  -  \[11/Jan/2015:17:17:10 +] POST 
 /solr/techproducts/update/extract?resource.name=
 {quote}
 It would make sense for that message to be after the auto-mode determination 
 just before the actual POST.
 Also, what's with two dots after the url? If it is _etc_, it should probably 
 be three dots.






[jira] [Commented] (SOLR-6959) SimplePostTool reports incorrect base url for PDFs

2015-01-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14273113#comment-14273113
 ] 

ASF subversion and git services commented on SOLR-6959:
---

Commit 1651015 from [~ehatcher] in branch 'dev/trunk'
[ https://svn.apache.org/r1651015 ]

SOLR-6959: Elaborate on URLs being POSTed to

 SimplePostTool reports incorrect base url for PDFs
 --

 Key: SOLR-6959
 URL: https://issues.apache.org/jira/browse/SOLR-6959
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.0
Reporter: Alexandre Rafalovitch
Assignee: Erik Hatcher
Priority: Minor
  Labels: tools

 {quote}
 $ java -Dc=techproducts -Dauto -Dcommit=no -jar post.jar solr-word.pdf
 SimplePostTool version 1.5
 Posting files to base url http://localhost:8983/solr/techproducts/update..
 {quote}
 This command will *not* post to */update*, it will post to */update/extract*. 
 This should be reported correspondingly.
 From the server log:
 {quote}
 127.0.0.1 -  -  \[11/Jan/2015:17:17:10 +] POST 
 /solr/techproducts/update/extract?resource.name=
 {quote}
 It would make sense for that message to be after the auto-mode determination 
 just before the actual POST.
 Also, what's with two dots after the url? If it is _etc_, it should probably 
 be three dots.






[jira] [Updated] (SOLR-6581) Efficient DocValues support and numeric collapse field implementations for Collapse and Expand

2015-01-11 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-6581:
-
Attachment: SOLR-6581.patch

Unit tests are passing, manual testing looks good, and pre-commit passes.

 Efficient DocValues support and numeric collapse field implementations for 
 Collapse and Expand
 --

 Key: SOLR-6581
 URL: https://issues.apache.org/jira/browse/SOLR-6581
 Project: Solr
  Issue Type: Bug
Reporter: Joel Bernstein
Assignee: Joel Bernstein
Priority: Minor
 Fix For: 5.0

 Attachments: SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, 
 SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, 
 SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, 
 SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, 
 renames.diff


 The 4x implementation of the CollapsingQParserPlugin and the ExpandComponent 
 are optimized to work with a top level FieldCache. Top level FieldCaches have 
 a very fast docID to top-level ordinal lookup. Fast access to the top-level 
 ordinals allows for very high performance field collapsing on high 
 cardinality fields. 
 LUCENE-5666 unified the DocValues and FieldCache api's so that the top level 
 FieldCache is no longer in regular use. Instead all top level caches are 
 accessed through MultiDocValues. 
 This ticket does the following:
 1) Optimizes Collapse and Expand to use MultiDocValues and makes this the 
 default approach when collapsing on String fields
 2) Provides an option to use a top level FieldCache if the performance of 
 MultiDocValues is a blocker. The mechanism for switching to the FieldCache is 
 a new hint parameter. If the hint parameter is set to FAST_QUERY then the 
 top-level FieldCache would be used for both Collapse and Expand.
 Example syntax:
 {code}
 fq={!collapse field=x hint=FAST_QUERY}
 {code}
 3)  Adds numeric collapse field implementations.
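 The hint mechanism described above can be sketched as a request query string. This is a minimal illustration only: the collection name {{techproducts}} and field name {{group_s}} are hypothetical, and {{hint=FAST_QUERY}} is the spelling used in this patch (a rename is discussed later in the thread).
 {code}
from urllib.parse import urlencode

# Minimal sketch of a collapse+expand request using the hint parameter
# from this patch. "techproducts" and "group_s" are illustrative names.
params = {
    "q": "*:*",
    "fq": "{!collapse field=group_s hint=FAST_QUERY}",  # top-level FieldCache path
    "expand": "true",  # ExpandComponent expands the collapsed groups
}
print("/solr/techproducts/select?" + urlencode(params))
 {code}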
  






[jira] [Commented] (SOLR-6957) Improve nightly test run stability.

2015-01-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14273084#comment-14273084
 ] 

ASF subversion and git services commented on SOLR-6957:
---

Commit 1650984 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1650984 ]

SOLR-6957: Raise timeout.

 Improve nightly test run stability.
 ---

 Key: SOLR-6957
 URL: https://issues.apache.org/jira/browse/SOLR-6957
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller
 Fix For: 5.0, Trunk









[jira] [Commented] (SOLR-6959) SimplePostTool reports incorrect base url for PDFs

2015-01-11 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14273119#comment-14273119
 ] 

Erik Hatcher commented on SOLR-6959:


bq. This could also clarify a bit the situation with the fact that XML, CSV, 
and JSON go to the same handler, yet we have - slightly confusingly - request 
handlers for both CSV and JSON in the solrconfig.xml

Well, if someone is using post.jar, chances are they aren't aware of the additional handlers you mention, so I don't think there would be any confusion. Those handlers are just there for backwards compatibility (or for aesthetics, if one likes to post to, say, /update/csv). I don't think we need to do anything different here.

 SimplePostTool reports incorrect base url for PDFs
 --

 Key: SOLR-6959
 URL: https://issues.apache.org/jira/browse/SOLR-6959
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.0
Reporter: Alexandre Rafalovitch
Assignee: Erik Hatcher
Priority: Minor
  Labels: tools

 {quote}
 $ java -Dc=techproducts -Dauto -Dcommit=no -jar post.jar solr-word.pdf
 SimplePostTool version 1.5
 Posting files to base url http://localhost:8983/solr/techproducts/update..
 {quote}
 This command will *not* post to */update*, it will post to */update/extract*. 
 This should be reported correspondingly.
 From the server log:
 {quote}
 127.0.0.1 -  -  \[11/Jan/2015:17:17:10 +] POST 
 /solr/techproducts/update/extract?resource.name=
 {quote}
 It would make sense for that message to be after the auto-mode determination 
 just before the actual POST.
 Also, what's with two dots after the url? If it is _etc_, it should probably 
 be three dots.






[jira] [Commented] (SOLR-6959) SimplePostTool reports incorrect base url for PDFs

2015-01-11 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14273133#comment-14273133
 ] 

Alexandre Rafalovitch commented on SOLR-6959:
-

Looks good. Except this now uncovers a little wrinkle:
{quote}
$ java -Dc=techproducts -jar post.jar hd.xml
SimplePostTool version 1.5
Posting files to \[base] url http://localhost:8983/solr/techproducts/update 
using content-type application/xml...
POSTing file hd.xml to \[base]
{quote}

vs.

{quote}
$ java -Dc=techproducts -Dauto -jar post.jar hd.xml
SimplePostTool version 1.5
Posting files to \[base] url http://localhost:8983/solr/techproducts/update...
Entering auto mode. File endings considered are 
xml,json,csv,pdf,doc,docx,ppt,pptx,xls,xlsx,odt,odp,ods,ott,otp,ots,rtf,htm,html,txt,log
POSTing file hd.xml (text/xml) to \[base]
{quote}

Is there a reason we are using different content types for the same XML file 
with and without *-Dauto*?


 SimplePostTool reports incorrect base url for PDFs
 --

 Key: SOLR-6959
 URL: https://issues.apache.org/jira/browse/SOLR-6959
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 4.10.3
Reporter: Alexandre Rafalovitch
Assignee: Erik Hatcher
Priority: Minor
  Labels: tools
 Fix For: 5.0, Trunk


 {quote}
 $ java -Dc=techproducts -Dauto -Dcommit=no -jar post.jar solr-word.pdf
 SimplePostTool version 1.5
 Posting files to base url http://localhost:8983/solr/techproducts/update..
 {quote}
 This command will *not* post to */update*, it will post to */update/extract*. 
 This should be reported correspondingly.
 From the server log:
 {quote}
 127.0.0.1 -  -  \[11/Jan/2015:17:17:10 +] POST 
 /solr/techproducts/update/extract?resource.name=
 {quote}
 It would make sense for that message to be after the auto-mode determination 
 just before the actual POST.
 Also, what's with two dots after the url? If it is _etc_, it should probably 
 be three dots.






[jira] [Commented] (SOLR-6052) Too many documents Exception

2015-01-11 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14273160#comment-14273160
 ] 

Erik Hatcher commented on SOLR-6052:


How is this a Solr bug?  Looks like the script creates too many documents, 
no? 

If this is an issue that needs to be addressed, please re-open with more 
elaboration on what is being attempted and what is expected here.
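For context on the exception in the quoted trace: a Lucene composite IndexReader hard-caps the total document count at Java's Integer.MAX_VALUE, which is the 2147483647 in the message. Once an index crosses that, the reader cannot be opened, and splitting the data across shards is the usual remedy. A quick sanity check of the figure:

{code}
# The limit in the stack trace is Java's Integer.MAX_VALUE: a composite
# IndexReader cannot address more than 2^31 - 1 documents.
MAX_COMPOSITE_DOCS = 2**31 - 1
print(MAX_COMPOSITE_DOCS)  # 2147483647
{code}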

 Too many documents Exception
 

 Key: SOLR-6052
 URL: https://issues.apache.org/jira/browse/SOLR-6052
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.3, 4.4, 4.5, 4.6, 4.7
Reporter: yamazaki

 ERROR org.apache.solr.core.CoreContainer  – Unable to create core: collection1
 org.apache.solr.common.SolrException: Error opening new searcher
   at org.apache.solr.core.SolrCore.init(SolrCore.java:821)
   at org.apache.solr.core.SolrCore.init(SolrCore.java:618)
   at 
 org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:949)
   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:984)
   at org.apache.solr.core.CoreContainer$2.call(CoreContainer.java:597)
   at org.apache.solr.core.CoreContainer$2.call(CoreContainer.java:592)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: org.apache.solr.common.SolrException: Error opening new searcher
   at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1438)
   at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1550)
   at org.apache.solr.core.SolrCore.init(SolrCore.java:796)
   ... 13 more
 Caused by: org.apache.solr.common.SolrException: Error opening Reader
   at 
 org.apache.solr.search.SolrIndexSearcher.getReader(SolrIndexSearcher.java:172)
   at 
 org.apache.solr.search.SolrIndexSearcher.init(SolrIndexSearcher.java:183)
   at 
 org.apache.solr.search.SolrIndexSearcher.init(SolrIndexSearcher.java:179)
   at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1414)
   ... 15 more
 Caused by: java.lang.IllegalArgumentException: Too many documents, composite 
 IndexReaders cannot exceed 2147483647
   at 
 org.apache.lucene.index.BaseCompositeReader.init(BaseCompositeReader.java:77)
   at 
 org.apache.lucene.index.DirectoryReader.init(DirectoryReader.java:368)
   at 
 org.apache.lucene.index.StandardDirectoryReader.init(StandardDirectoryReader.java:42)
   at 
 org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:71)
   at 
 org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:783)
   at 
 org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:52)
   at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:88)
   at 
 org.apache.solr.core.StandardIndexReaderFactory.newReader(StandardIndexReaderFactory.java:34)
   at 
 org.apache.solr.search.SolrIndexSearcher.getReader(SolrIndexSearcher.java:169)
   ... 18 more
 ERROR org.apache.solr.core.CoreContainer  – 
 null:org.apache.solr.common.SolrException: Unable to create core: collection1
   at 
 org.apache.solr.core.CoreContainer.recordAndThrow(CoreContainer.java:1450)
   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:993)
   at org.apache.solr.core.CoreContainer$2.call(CoreContainer.java:597)
   at org.apache.solr.core.CoreContainer$2.call(CoreContainer.java:592)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: org.apache.solr.common.SolrException: Error opening new searcher
   at org.apache.solr.core.SolrCore.init(SolrCore.java:821)
   at org.apache.solr.core.SolrCore.init(SolrCore.java:618)
   at 
 

[jira] [Commented] (SOLR-6581) Efficient DocValues support and numeric collapse field implementations for Collapse and Expand

2015-01-11 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14273162#comment-14273162
 ] 

Yonik Seeley commented on SOLR-6581:


FAST_QUERY doesn't give much of an idea of what's going on under the covers, 
and a more descriptive name would probably be better, since more 
methods/optimizations may be added in the future. Maybe something like 
top_fc? Probably best to stick to lower case too... 

 Efficient DocValues support and numeric collapse field implementations for 
 Collapse and Expand
 --

 Key: SOLR-6581
 URL: https://issues.apache.org/jira/browse/SOLR-6581
 Project: Solr
  Issue Type: Bug
Reporter: Joel Bernstein
Assignee: Joel Bernstein
Priority: Minor
 Fix For: 5.0

 Attachments: SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, 
 SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, 
 SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, 
 SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, 
 renames.diff


 The 4x implementation of the CollapsingQParserPlugin and the ExpandComponent 
 are optimized to work with a top level FieldCache. Top level FieldCaches have 
 a very fast docID to top-level ordinal lookup. Fast access to the top-level 
 ordinals allows for very high performance field collapsing on high 
 cardinality fields. 
 LUCENE-5666 unified the DocValues and FieldCache api's so that the top level 
 FieldCache is no longer in regular use. Instead all top level caches are 
 accessed through MultiDocValues. 
 This ticket does the following:
 1) Optimizes Collapse and Expand to use MultiDocValues and makes this the 
 default approach when collapsing on String fields
 2) Provides an option to use a top level FieldCache if the performance of 
 MultiDocValues is a blocker. The mechanism for switching to the FieldCache is 
 a new hint parameter. If the hint parameter is set to FAST_QUERY then the 
 top-level FieldCache would be used for both Collapse and Expand.
 Example syntax:
 {code}
 fq={!collapse field=x hint=FAST_QUERY}
 {code}
 3)  Adds numeric collapse field implementations.
  






[jira] [Resolved] (SOLR-6052) Too many documents Exception

2015-01-11 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher resolved SOLR-6052.

Resolution: Not a Problem

 Too many documents Exception
 

 Key: SOLR-6052
 URL: https://issues.apache.org/jira/browse/SOLR-6052
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.3, 4.4, 4.5, 4.6, 4.7
Reporter: yamazaki

 ERROR org.apache.solr.core.CoreContainer  – Unable to create core: collection1
 org.apache.solr.common.SolrException: Error opening new searcher
   at org.apache.solr.core.SolrCore.init(SolrCore.java:821)
   at org.apache.solr.core.SolrCore.init(SolrCore.java:618)
   at 
 org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:949)
   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:984)
   at org.apache.solr.core.CoreContainer$2.call(CoreContainer.java:597)
   at org.apache.solr.core.CoreContainer$2.call(CoreContainer.java:592)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: org.apache.solr.common.SolrException: Error opening new searcher
   at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1438)
   at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1550)
   at org.apache.solr.core.SolrCore.init(SolrCore.java:796)
   ... 13 more
 Caused by: org.apache.solr.common.SolrException: Error opening Reader
   at 
 org.apache.solr.search.SolrIndexSearcher.getReader(SolrIndexSearcher.java:172)
   at 
 org.apache.solr.search.SolrIndexSearcher.init(SolrIndexSearcher.java:183)
   at 
 org.apache.solr.search.SolrIndexSearcher.init(SolrIndexSearcher.java:179)
   at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1414)
   ... 15 more
 Caused by: java.lang.IllegalArgumentException: Too many documents, composite 
 IndexReaders cannot exceed 2147483647
   at 
 org.apache.lucene.index.BaseCompositeReader.init(BaseCompositeReader.java:77)
   at 
 org.apache.lucene.index.DirectoryReader.init(DirectoryReader.java:368)
   at 
 org.apache.lucene.index.StandardDirectoryReader.init(StandardDirectoryReader.java:42)
   at 
 org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:71)
   at 
 org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:783)
   at 
 org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:52)
   at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:88)
   at 
 org.apache.solr.core.StandardIndexReaderFactory.newReader(StandardIndexReaderFactory.java:34)
   at 
 org.apache.solr.search.SolrIndexSearcher.getReader(SolrIndexSearcher.java:169)
   ... 18 more
 ERROR org.apache.solr.core.CoreContainer  – 
 null:org.apache.solr.common.SolrException: Unable to create core: collection1
   at 
 org.apache.solr.core.CoreContainer.recordAndThrow(CoreContainer.java:1450)
   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:993)
   at org.apache.solr.core.CoreContainer$2.call(CoreContainer.java:597)
   at org.apache.solr.core.CoreContainer$2.call(CoreContainer.java:592)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: org.apache.solr.common.SolrException: Error opening new searcher
   at org.apache.solr.core.SolrCore.init(SolrCore.java:821)
   at org.apache.solr.core.SolrCore.init(SolrCore.java:618)
   at 
 org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:949)
   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:984)
   ... 10 more
 Caused by: org.apache.solr.common.SolrException: Error opening new searcher
   at 

[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_25) - Build # 4409 - Still Failing!

2015-01-11 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4409/
Java: 64bit/jdk1.8.0_25 -XX:-UseCompressedOops -XX:+UseParallelGC

5 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicZkTest

Error Message:
SolrCore.getOpenCount()==2

Stack Trace:
java.lang.RuntimeException: SolrCore.getOpenCount()==2
at __randomizedtesting.SeedInfo.seed([6B621B280B470AA1]:0)
at org.apache.solr.util.TestHarness.close(TestHarness.java:332)
at org.apache.solr.SolrTestCaseJ4.deleteCore(SolrTestCaseJ4.java:622)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:186)
at sun.reflect.GeneratedMethodAccessor31.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:790)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicZkTest

Error Message:
Clean up static fields (in @AfterClass?), your test seems to hang on to approximately 13,151,280 bytes (threshold is 10,485,760). Field reference sizes (counted individually):
  - 14,836,544 bytes, protected static org.apache.solr.core.SolrConfig org.apache.solr.SolrTestCaseJ4.solrConfig
  - 14,291,056 bytes, protected static org.apache.solr.util.TestHarness$LocalRequestFactory org.apache.solr.SolrTestCaseJ4.lrf
  - 14,290,656 bytes, protected static org.apache.solr.util.TestHarness org.apache.solr.SolrTestCaseJ4.h
  - 448 bytes, private static java.util.regex.Pattern org.apache.solr.SolrTestCaseJ4.nonEscapedSingleQuotePattern
  - 312 bytes, private static java.util.regex.Pattern org.apache.solr.SolrTestCaseJ4.escapedSingleQuotePattern
  - 296 bytes, public static org.junit.rules.TestRule org.apache.solr.SolrTestCaseJ4.solrClassRules
  - 280 bytes, public static java.io.File org.apache.solr.cloud.AbstractZkTestCase.SOLRHOME
  - 232 bytes, protected static java.lang.String org.apache.solr.SolrTestCaseJ4.testSolrHome
  - 144 bytes, private static java.lang.String org.apache.solr.SolrTestCaseJ4.factoryProp
  - 88 bytes, protected static java.lang.String org.apache.solr.SolrTestCaseJ4.configString
  - 80 bytes, private static java.lang.String org.apache.solr.SolrTestCaseJ4.coreName
  - 80 bytes, protected static java.lang.String org.apache.solr.SolrTestCaseJ4.schemaString

Stack Trace:
junit.framework.AssertionFailedError: Clean up static fields (in @AfterClass?), your test seems to hang on to approximately 13,151,280 bytes (threshold is 10,485,760). Field reference sizes (counted individually):
  - 14,836,544 bytes, protected static org.apache.solr.core.SolrConfig org.apache.solr.SolrTestCaseJ4.solrConfig
  - 14,291,056 bytes, protected static org.apache.solr.util.TestHarness$LocalRequestFactory org.apache.solr.SolrTestCaseJ4.lrf
  - 14,290,656 bytes, protected static org.apache.solr.util.TestHarness 

[jira] [Commented] (SOLR-6908) SimplePostTool's help message is incorrect -Durl parameter

2015-01-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273110#comment-14273110
 ] 

ASF subversion and git services commented on SOLR-6908:
---

Commit 1651014 from [~ehatcher] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1651014 ]

SOLR-6908: Remove last post.sh references, and fix a post.jar usage example 
(merged from trunk r1651013)

 SimplePostTool's help message is incorrect -Durl parameter
 --

 Key: SOLR-6908
 URL: https://issues.apache.org/jira/browse/SOLR-6908
 Project: Solr
  Issue Type: Bug
  Components: documentation
Affects Versions: 5.0
Reporter: Alexandre Rafalovitch
Assignee: Erik Hatcher
Priority: Minor
 Fix For: 5.0, Trunk


 {quote}
 java -jar post.jar -h
 ...
 java -Durl=http://localhost:8983/solr/update/extract -Dparams=literal.id=a 
 -Dtype=application/pdf -jar post.jar a.pdf
 ...
 {quote}
 The example is the only one for -Durl and is not correct as it is missing the 
 collection name. Also, even though this is an example, *a.pdf* does not 
 exist, but we do have *solr-word.pdf* now.
 So, this should probably say:
 {quote}
 java -Durl=http://localhost:8983/solr/techproducts/update/extract 
 -Dparams=literal.id=pdf1 -Dtype=application/pdf -jar post.jar solr-word.pdf
 {quote}
 Also, it is worth mentioning (if true) that specifying *-Durl* overrides 
 *-Dc*.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6959) SimplePostTool reports incorrect base url for PDFs

2015-01-11 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273121#comment-14273121
 ] 

Alexandre Rafalovitch commented on SOLR-6959:
-

Actually, these days, these two handlers are commented out in the source code 
and are instead hard-coded as implicit handlers, causing confusion of their 
own (SOLR-6938). FWIW.

 SimplePostTool reports incorrect base url for PDFs
 --

 Key: SOLR-6959
 URL: https://issues.apache.org/jira/browse/SOLR-6959
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 4.10.3
Reporter: Alexandre Rafalovitch
Assignee: Erik Hatcher
Priority: Minor
  Labels: tools
 Fix For: 5.0, Trunk


 {quote}
 $ java -Dc=techproducts -Dauto -Dcommit=no -jar post.jar solr-word.pdf
 SimplePostTool version 1.5
 Posting files to base url http://localhost:8983/solr/techproducts/update..
 {quote}
 This command will *not* post to */update*, it will post to */update/extract*. 
 This should be reported correspondingly.
 From the server log:
 {quote}
 127.0.0.1 -  -  \[11/Jan/2015:17:17:10 +] POST 
 /solr/techproducts/update/extract?resource.name=
 {quote}
 It would make sense for that message to be after the auto-mode determination 
 just before the actual POST.
 Also, what's with two dots after the url? If it is _etc_, it should probably 
 be three dots.
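The auto-mode routing at issue can be sketched like this; the table and helper below are hypothetical illustrations of the behavior the report describes (plain files go to /update, rich documents to /update/extract), not SimplePostTool's actual code:

```java
import java.util.Map;

// Hypothetical sketch of SimplePostTool's auto-mode routing: structured
// formats post to /update, while rich documents such as PDF post to
// /update/extract -- which is why a "base url .../update" message printed
// before this decision is misleading for PDFs.
public class AutoModeRoutingSketch {
    static final Map<String, String> ENDPOINT = Map.of(
            "xml", "/update",
            "json", "/update",
            "csv", "/update",
            "pdf", "/update/extract"); // rich docs go through extraction

    static String endpointFor(String filename) {
        String ext = filename.substring(filename.lastIndexOf('.') + 1);
        // assume unknown extensions are rich documents in this sketch
        return ENDPOINT.getOrDefault(ext, "/update/extract");
    }

    public static void main(String[] args) {
        System.out.println(endpointFor("solr-word.pdf")); // /update/extract
        System.out.println(endpointFor("books.csv"));     // /update
    }
}
```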






[jira] [Updated] (SOLR-6959) SimplePostTool reports incorrect base url for PDFs

2015-01-11 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-6959:
---
Fix Version/s: Trunk
   5.0







[jira] [Updated] (SOLR-6959) SimplePostTool reports incorrect base url for PDFs

2015-01-11 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-6959:
---
Affects Version/s: (was: 5.0)
   4.10.3







[jira] [Created] (SOLR-6963) Upgrade hadoop version to 2.3

2015-01-11 Thread Gregory Chanan (JIRA)
Gregory Chanan created SOLR-6963:


 Summary: Upgrade hadoop version to 2.3
 Key: SOLR-6963
 URL: https://issues.apache.org/jira/browse/SOLR-6963
 Project: Solr
  Issue Type: Task
  Components: Hadoop Integration
Reporter: Gregory Chanan
Assignee: Gregory Chanan


See SOLR-6915; we need at least hadoop version 2.3 to be able to use the 
MiniKdc.






[jira] [Commented] (SOLR-6959) SimplePostTool reports incorrect base url for PDFs

2015-01-11 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273174#comment-14273174
 ] 

Erik Hatcher commented on SOLR-6959:


bq. Except this now uncovers a little wrinkle...

ok, ok!  :)  dang you're thorough, and thanks for that seriously.  aligned to 
application/xml.  no (good) reason they were different.







[jira] [Created] (LUCENE-6176) Modify FSIndexOutput in FSDirectory to open output stream for Write and Read

2015-01-11 Thread Wojtek Kozaczynski (JIRA)
Wojtek Kozaczynski created LUCENE-6176:
--

 Summary: Modify FSIndexOutput in FSDirectory to open output stream 
for Write and Read
 Key: LUCENE-6176
 URL: https://issues.apache.org/jira/browse/LUCENE-6176
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/store
Affects Versions: 4.10.2
 Environment: Windows
Reporter: Wojtek Kozaczynski
 Fix For: 4.10.2


The FSIndexOutput, in FSDirectory, opens the output file stream for 
Write/Append (W/A), but not Read. This is an issue when Windows writes to remote 
files. For local storage files the Windows cache manager is part of the kernel 
and can read from the file even if it is opened for W/A only (and it needs to 
read the current content of the page). When accessing remote files, like SMB 
shares, the cache manager is restricted to the access mode requested from the 
remote system. In this case, since it is W/A, every write, even of a single byte, is 
a roundtrip to the remote storage server.


Opening the output file stream for Write and Read, which does not impact other 
functionality, allows Windows to cache the individual Lucene writes regardless 
of their size.
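In plain java.io terms, the proposed change amounts to opening the file with read+write access instead of write-only. A minimal sketch under that assumption (this is an illustration, not the actual FSDirectory patch): RandomAccessFile's "rw" mode requests both access rights, which is what would let the Windows cache manager buffer small writes to SMB shares locally.

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

// Hypothetical sketch: write index output through a file handle opened
// for both read and write ("rw"), so a remote-file cache manager is
// granted read access and can coalesce small writes instead of making
// a roundtrip per write.
public class ReadWriteOutputSketch {
    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("lucene-out", ".tmp");
        f.deleteOnExit();
        // "rw" = read and write access requested from the (possibly
        // remote) file system, unlike a write/append-only stream
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
            raf.write(new byte[] {1, 2, 3, 4});
        }
        System.out.println(f.length()); // prints "4"
    }
}
```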






[jira] [Assigned] (SOLR-6870) remove/fix currently broken solr/site/html/tutorial.html ?

2015-01-11 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher reassigned SOLR-6870:
--

Assignee: Erik Hatcher

 remove/fix currently broken solr/site/html/tutorial.html ?
 --

 Key: SOLR-6870
 URL: https://issues.apache.org/jira/browse/SOLR-6870
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Erik Hatcher
Priority: Blocker
 Fix For: 5.0


 {{solr/site/html/tutorial.html}} still exists in the source repository, is 
 still being included and linked in the published javadocs, and is being 
 mentioned in various README.txt files -- even though SOLR-6058 obsoleted this 
 file with https://lucene.apache.org/solr/quickstart.html
 We either need to clean this file up, or update it to reflect reality (ie: 
 {{bin/solr -e foo}}) ... if we do remove it, we need to audit the various 
 README.txt files and ensure they refer people to the correct place to find 
 the tutorial





