[jira] [Commented] (SOLR-4509) Disable HttpClient stale check for performance.

2015-01-30 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14298759#comment-14298759
 ] 

Mark Miller commented on SOLR-4509:
---

You are probably seeing SOLR-6944.

 Disable HttpClient stale check for performance.
 ---

 Key: SOLR-4509
 URL: https://issues.apache.org/jira/browse/SOLR-4509
 Project: Solr
  Issue Type: Improvement
  Components: search
 Environment: 5 node SmartOS cluster (all nodes living in same global 
 zone - i.e. same physical machine)
Reporter: Ryan Zezeski
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: IsStaleTime.java, SOLR-4509-4_4_0.patch, 
 SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, 
 SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, 
 baremetal-stale-nostale-med-latency.dat, 
 baremetal-stale-nostale-med-latency.svg, 
 baremetal-stale-nostale-throughput.dat, baremetal-stale-nostale-throughput.svg


 By disabling the Apache HTTP Client stale check I've witnessed a 2-4x 
 increase in throughput and a latency reduction of over 100ms.  This patch was made in 
 the context of a project I'm leading, called Yokozuna, which relies on 
 distributed search.
 Here's the patch on Yokozuna: https://github.com/rzezeski/yokozuna/pull/26
 Here's a write-up I did on my findings: 
 http://www.zinascii.com/2013/solr-distributed-search-and-the-stale-check.html
 I'm happy to answer any questions or make changes to the patch to make it 
 acceptable.
 ReviewBoard: https://reviews.apache.org/r/28393/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7036) Faster method for group.facet

2015-01-30 Thread Adrien Brault (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14298863#comment-14298863
 ] 

Adrien Brault commented on SOLR-7036:
-

[~jimtronic] I did not. After trying again with group.facet.method=fc, the 
performance is similar to SOLR-4763.

 Faster method for group.facet
 -

 Key: SOLR-7036
 URL: https://issues.apache.org/jira/browse/SOLR-7036
 Project: Solr
  Issue Type: Improvement
  Components: faceting
Affects Versions: 4.10.3
Reporter: Jim Musil
 Fix For: 4.10.4

 Attachments: SOLR-7036.patch


 This is a patch that speeds up the performance of requests made with 
 group.facet=true. The original code that collects and counts unique facet 
 values for each group does not use the same improved field cache methods that 
 have been added for normal faceting in recent versions.
 Specifically, this approach leverages the UninvertedField class which 
 provides a much faster way to look up docs that contain a term. I've also 
 added a simple grouping map so that when a term is found for a doc, it can 
 quickly look up the group to which it belongs.
 Group faceting was very slow for our data set and when the number of docs or 
 terms was high, the latency spiked to multiple second requests. This solution 
 provides better overall performance -- from an average of 54ms to 32ms. It 
 also dropped our slowest performing queries way down -- from 6012ms to 991ms.
 I also added a few tests.






[jira] [Updated] (LUCENE-6210) Unit tests failures in TestLucene40DocValuesFormat/TestDocValuesFormat

2015-01-30 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated LUCENE-6210:
-
Fix Version/s: 4.10.4

 Unit tests failures in TestLucene40DocValuesFormat/TestDocValuesFormat
 --

 Key: LUCENE-6210
 URL: https://issues.apache.org/jira/browse/LUCENE-6210
 Project: Lucene - Core
  Issue Type: Test
Affects Versions: 4.10.3
Reporter: Hrishikesh Gadre
 Fix For: 4.10.4


 The following unit tests are consistently failing in my dev environment:
 ant test  -Dtestcase=TestDocValuesFormat -Dtests.method=testMergeStability 
 -Dtests.seed=677104CE0E32AC16 -Dtests.slow=true -Dtests.locale=sl_SI 
 -Dtests.timezone=Africa/Conakry -Dtests.asserts=true 
 -Dtests.file.encoding=ISO-8859-1
 ant test  -Dtestcase=TestLucene40DocValuesFormat 
 -Dtests.method=testMergeStability -Dtests.seed=677104CE0E32AC16 
 -Dtests.slow=true -Dtests.locale=es_SV -Dtests.timezone=Atlantic/Cape_Verde 
 -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1






[jira] [Commented] (LUCENE-6212) Remove IndexWriter's per-document analyzer add/updateDocument APIs

2015-01-30 Thread Ryan Ernst (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14298893#comment-14298893
 ] 

Ryan Ernst commented on LUCENE-6212:


+1

 Remove IndexWriter's per-document analyzer add/updateDocument APIs
 --

 Key: LUCENE-6212
 URL: https://issues.apache.org/jira/browse/LUCENE-6212
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 5.0, Trunk

 Attachments: LUCENE-6212.patch


 IndexWriter already takes an analyzer up-front (via
 IndexWriterConfig), but it also allows you to specify a different one
 for each add/updateDocument.
 I think this is quite dangerous/trappy since it means you can easily
 index tokens for that document that don't match at search-time based
 on the search-time analyzer.
 I think we should remove this trap in 5.0.






[jira] [Commented] (SOLR-7036) Faster method for group.facet

2015-01-30 Thread Adrien Brault (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14298737#comment-14298737
 ] 

Adrien Brault commented on SOLR-7036:
-

Just tried with our dataset and main query; this is 2x slower than SOLR-4763.

 Faster method for group.facet
 -

 Key: SOLR-7036
 URL: https://issues.apache.org/jira/browse/SOLR-7036
 Project: Solr
  Issue Type: Improvement
  Components: faceting
Affects Versions: 4.10.3
Reporter: Jim Musil
 Fix For: 4.10.4

 Attachments: SOLR-7036.patch


 This is a patch that speeds up the performance of requests made with 
 group.facet=true. The original code that collects and counts unique facet 
 values for each group does not use the same improved field cache methods that 
 have been added for normal faceting in recent versions.
 Specifically, this approach leverages the UninvertedField class which 
 provides a much faster way to look up docs that contain a term. I've also 
 added a simple grouping map so that when a term is found for a doc, it can 
 quickly look up the group to which it belongs.
 Group faceting was very slow for our data set and when the number of docs or 
 terms was high, the latency spiked to multiple second requests. This solution 
 provides better overall performance -- from an average of 54ms to 32ms. It 
 also dropped our slowest performing queries way down -- from 6012ms to 991ms.
 I also added a few tests.






[jira] [Updated] (SOLR-4509) Disable HttpClient stale check for performance.

2015-01-30 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-4509:
--
Attachment: SOLR-4509.patch

Here is a first pass at shrinking this patch to trunk now that all the issues I 
spun off are committed.

It is not complete anymore because of the change to Jetty 9.

 Disable HttpClient stale check for performance.
 ---

 Key: SOLR-4509
 URL: https://issues.apache.org/jira/browse/SOLR-4509
 Project: Solr
  Issue Type: Improvement
  Components: search
 Environment: 5 node SmartOS cluster (all nodes living in same global 
 zone - i.e. same physical machine)
Reporter: Ryan Zezeski
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: IsStaleTime.java, SOLR-4509-4_4_0.patch, 
 SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, 
 SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, 
 SOLR-4509.patch, baremetal-stale-nostale-med-latency.dat, 
 baremetal-stale-nostale-med-latency.svg, 
 baremetal-stale-nostale-throughput.dat, baremetal-stale-nostale-throughput.svg


 By disabling the Apache HTTP Client stale check I've witnessed a 2-4x 
 increase in throughput and a latency reduction of over 100ms.  This patch was made in 
 the context of a project I'm leading, called Yokozuna, which relies on 
 distributed search.
 Here's the patch on Yokozuna: https://github.com/rzezeski/yokozuna/pull/26
 Here's a write-up I did on my findings: 
 http://www.zinascii.com/2013/solr-distributed-search-and-the-stale-check.html
 I'm happy to answer any questions or make changes to the patch to make it 
 acceptable.
 ReviewBoard: https://reviews.apache.org/r/28393/






[jira] [Commented] (SOLR-6944) ReplicationFactorTest and HttpPartitionTest both fail with org.apache.http.NoHttpResponseException: The target server failed to respond

2015-01-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14298865#comment-14298865
 ] 

ASF subversion and git services commented on SOLR-6944:
---

Commit 1656056 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1656056 ]

SOLR-6944: ReplicationFactorTest and HttpPartitionTest both fail with 
org.apache.http.NoHttpResponseException: The target server failed to respond

 ReplicationFactorTest and HttpPartitionTest both fail with 
 org.apache.http.NoHttpResponseException: The target server failed to respond
 ---

 Key: SOLR-6944
 URL: https://issues.apache.org/jira/browse/SOLR-6944
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller
Assignee: Mark Miller
 Attachments: SOLR-6944.patch


 Our only recourse is to do a client side retry on such errors. I have some 
 retry code for this from SOLR-4509 that I will pull over here.
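
A client-side retry of the kind described can be sketched at the shell level; the {{retry}} helper below is purely illustrative and is not the code in the attached patch.

```shell
# Illustrative retry helper (not the SOLR-6944 patch itself): run a
# command up to a maximum number of attempts, pausing briefly between
# failures so a server that dropped a connection can recover.
retry() {
  max=$1; shift
  attempt=1
  until "$@"; do
    if [ "$attempt" -ge "$max" ]; then
      return 1              # attempts exhausted; surface the failure
    fi
    attempt=$((attempt + 1))
    sleep 1                 # crude fixed backoff between attempts
  done
}

# Hypothetical usage against a Solr node:
#   retry 3 curl -sf "http://127.0.0.1:8983/solr/collection1/select?q=*:*"
retry 3 true && echo "succeeded within 3 attempts"
```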






[jira] [Commented] (SOLR-6944) ReplicationFactorTest and HttpPartitionTest both fail with org.apache.http.NoHttpResponseException: The target server failed to respond

2015-01-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14298879#comment-14298879
 ] 

ASF subversion and git services commented on SOLR-6944:
---

Commit 1656059 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1656059 ]

SOLR-6944: ReplicationFactorTest and HttpPartitionTest both fail with 
org.apache.http.NoHttpResponseException: The target server failed to respond

 ReplicationFactorTest and HttpPartitionTest both fail with 
 org.apache.http.NoHttpResponseException: The target server failed to respond
 ---

 Key: SOLR-6944
 URL: https://issues.apache.org/jira/browse/SOLR-6944
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller
Assignee: Mark Miller
 Attachments: SOLR-6944.patch


 Our only recourse is to do a client side retry on such errors. I have some 
 retry code for this from SOLR-4509 that I will pull over here.






[jira] [Commented] (SOLR-7036) Faster method for group.facet

2015-01-30 Thread Jim Musil (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14298778#comment-14298778
 ] 

Jim Musil commented on SOLR-7036:
-

Did you set group.facet.method=fc?

Jim Musil



 Faster method for group.facet
 -

 Key: SOLR-7036
 URL: https://issues.apache.org/jira/browse/SOLR-7036
 Project: Solr
  Issue Type: Improvement
  Components: faceting
Affects Versions: 4.10.3
Reporter: Jim Musil
 Fix For: 4.10.4

 Attachments: SOLR-7036.patch


 This is a patch that speeds up the performance of requests made with 
 group.facet=true. The original code that collects and counts unique facet 
 values for each group does not use the same improved field cache methods that 
 have been added for normal faceting in recent versions.
 Specifically, this approach leverages the UninvertedField class which 
 provides a much faster way to look up docs that contain a term. I've also 
 added a simple grouping map so that when a term is found for a doc, it can 
 quickly look up the group to which it belongs.
 Group faceting was very slow for our data set and when the number of docs or 
 terms was high, the latency spiked to multiple second requests. This solution 
 provides better overall performance -- from an average of 54ms to 32ms. It 
 also dropped our slowest performing queries way down -- from 6012ms to 991ms.
 I also added a few tests.






[jira] [Updated] (SOLR-6944) ReplicationFactorTest and HttpPartitionTest both fail with org.apache.http.NoHttpResponseException: The target server failed to respond

2015-01-30 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-6944:
--
Attachment: SOLR-6944.patch

I've extracted the changes from SOLR-4509 and attached.

 ReplicationFactorTest and HttpPartitionTest both fail with 
 org.apache.http.NoHttpResponseException: The target server failed to respond
 ---

 Key: SOLR-6944
 URL: https://issues.apache.org/jira/browse/SOLR-6944
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller
Assignee: Mark Miller
 Attachments: SOLR-6944.patch


 Our only recourse is to do a client side retry on such errors. I have some 
 retry code for this from SOLR-4509 that I will pull over here.






[jira] [Commented] (SOLR-6944) ReplicationFactorTest and HttpPartitionTest both fail with org.apache.http.NoHttpResponseException: The target server failed to respond

2015-01-30 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14298827#comment-14298827
 ] 

Mark Miller commented on SOLR-6944:
---

I'll commit this and see how Jenkins responds.

 ReplicationFactorTest and HttpPartitionTest both fail with 
 org.apache.http.NoHttpResponseException: The target server failed to respond
 ---

 Key: SOLR-6944
 URL: https://issues.apache.org/jira/browse/SOLR-6944
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller
Assignee: Mark Miller
 Attachments: SOLR-6944.patch


 Our only recourse is to do a client side retry on such errors. I have some 
 retry code for this from SOLR-4509 that I will pull over here.






[jira] [Updated] (SOLR-7067) bin/solr won't run under bash 4.3 (OS X 10.10.2)

2015-01-30 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-7067:
-
Attachment: SOLR-7067.patch

Updated patch.

[~thelabdude] told me offline that with the original patch, {{bin/solr 
healthcheck -c whatever}} fails when the war has not yet been unpacked.  I 
reproduced:

{noformat}
$ bin/solr healthcheck -c testing
bin/solr: line 386: 
/Library/Java/JavaVirtualMachines/jdk1.8.0_20.jdk/Contents/Home/bin/jar xf: No 
such file or directory
Error: Could not find or load main class org.apache.solr.util.SolrCLI
{noformat}
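
The {{jar xf: No such file or directory}} error suggests the script expanded a variable holding a command plus its argument as a single word. That failure mode can be reproduced in isolation (the {{cmd}} value here is a stand-in, not the actual {{bin/solr}} variable):

```shell
# When a variable holds "command arg" and is expanded inside double
# quotes, the shell looks for an executable literally named "echo hello".
cmd="echo hello"
"$cmd" 2>/dev/null || echo "quoted as one word: no such command"

# Unquoted expansion word-splits into the command and its argument.
$cmd
```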

This patch fixes that problem, and also switches from {{command -v}} to 
{{hash}} for checking executables, which the StackOverflow answer linked 
earlier says is the best approach from bash scripts (which {{bin/solr}} is).

This patch also renames {{$unzipCommand}} to {{$UNPACK_WAR_CMD}}, and adds 
{{-q}} to the {{unzip}} command, so that it's quiet (like {{jar xf}} is).
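
For reference, the two existence checks discussed here look roughly like this in a shell script (the function names are illustrative):

```shell
# Two ways to test for an executable without relying on `which`,
# whose output format varies across platforms: `command -v` is the
# POSIX-specified form, and `hash` is what this patch adopts.
have_posix() { command -v "$1" >/dev/null 2>&1; }
have_hash()  { hash "$1" 2>/dev/null; }

have_posix sh && echo "sh found via command -v"
have_hash sh  && echo "sh found via hash"
have_posix definitely_not_a_real_tool || echo "missing tool detected"
```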

 bin/solr won't run under bash 4.3 (OS X 10.10.2)
 

 Key: SOLR-7067
 URL: https://issues.apache.org/jira/browse/SOLR-7067
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0, Trunk, 5.1
Reporter: Steve Rowe
Assignee: Steve Rowe
Priority: Blocker
 Fix For: 5.0, Trunk, 5.1

 Attachments: SOLR-7067.patch, SOLR-7067.patch


 I upgraded to OS X Yosemite 10.10.2 today, and the bash version went from 
 {{3.2.53(1)-release (x86_64-apple-darwin14)}} on 10.10.1 to 
 {{4.3.30(1)-release (x86_64-apple-darwin14.0.0)}}.
 When I try to run {{bin/solr}}, I get:
 {noformat}
 bin/solr: line 55: [: is: binary operator expected
 bin/solr: line 58: [: is: binary operator expected
 This script requires extracting a WAR file with either the jar or unzip 
 utility, please install these utilities or contact your administrator for 
 assistance.
 {noformat}
 the relevant section of the script is:
 {code}
 52: hasJar=$(which jar 2>/dev/null)
 53: hasUnzip=$(which unzip 2>/dev/null)
 54: 
 55: if [ ${hasJar} ]; then
 56:   unzipCommand="$hasJar xf"
 57: else
 58:   if [ ${hasUnzip} ]; then
 59:     unzipCommand="$hasUnzip"
 60:   else
 61:     echo -e "This script requires extracting a WAR file with either the jar or unzip utility, please install these utilities or contact your administrator for assistance."
 62:     exit 1
 63:   fi
 64: fi
 {code}






[jira] [Commented] (SOLR-7065) Let a replica become the leader regardless of it's last published state if all replicas participate in the election process.

2015-01-30 Thread Ramkumar Aiyengar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299266#comment-14299266
 ] 

Ramkumar Aiyengar commented on SOLR-7065:
-

bq. not very good for dealing with priorities

In particular, the follow-the-leader approach makes it very hard to do things 
like assigning arbitrary priorities to nodes (the current jump-the-queue logic 
works because there are only two priorities; extending it to more would become 
untenable very quickly). You could, on the other hand, come up with a solution 
for that with optimistic locking.

 Let a replica become the leader regardless of it's last published state if 
 all replicas participate in the election process.
 

 Key: SOLR-7065
 URL: https://issues.apache.org/jira/browse/SOLR-7065
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller
 Attachments: SOLR-7065.patch, SOLR-7065.patch









[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.8.0_31) - Build # 4346 - Still Failing!

2015-01-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4346/
Java: 32bit/jdk1.8.0_31 -client -XX:+UseParallelGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.DistribCursorPagingTest

Error Message:
Some resources were not closed, shutdown, or released.

Stack Trace:
java.lang.AssertionError: Some resources were not closed, shutdown, or released.
at __randomizedtesting.SeedInfo.seed([60C5A4690D835AC7]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:213)
at sun.reflect.GeneratedMethodAccessor31.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:790)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.DistribCursorPagingTest

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.DistribCursorPagingTest
 60C5A4690D835AC7-001\tempDir-001\jetty4\tlog\tlog.004: 
java.nio.file.FileSystemException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.DistribCursorPagingTest
 60C5A4690D835AC7-001\tempDir-001\jetty4\tlog\tlog.004: The 
process cannot access the file because it is being used by another process. 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.DistribCursorPagingTest
 60C5A4690D835AC7-001\tempDir-001\jetty4\tlog: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.DistribCursorPagingTest
 60C5A4690D835AC7-001\tempDir-001\jetty4\tlog
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.DistribCursorPagingTest
 60C5A4690D835AC7-001\tempDir-001\jetty4: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.DistribCursorPagingTest
 60C5A4690D835AC7-001\tempDir-001\jetty4
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.DistribCursorPagingTest
 60C5A4690D835AC7-001\tempDir-001: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.DistribCursorPagingTest
 60C5A4690D835AC7-001\tempDir-001
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.DistribCursorPagingTest
 60C5A4690D835AC7-001: java.nio.file.DirectoryNotEmptyException: 

[jira] [Created] (LUCENE-6213) Add test for IndexFormatTooOldException if a commit has a 3.x segment

2015-01-30 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-6213:
---

 Summary: Add test for IndexFormatTooOldException if a commit has a 
3.x segment
 Key: LUCENE-6213
 URL: https://issues.apache.org/jira/browse/LUCENE-6213
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir


We should add a 4.x index (4.x commit) with some 3.x segment(s) to our 
backwards tests.

I don't think we throw IndexFormatTooOldException correctly in this case; I 
think instead the user will get a confusing SPI error about a missing 
Lucene3x codec.







[jira] [Created] (SOLR-7067) bin/solr won't run under bash 4.3 (OS X 10.10.2)

2015-01-30 Thread Steve Rowe (JIRA)
Steve Rowe created SOLR-7067:


 Summary: bin/solr won't run under bash 4.3 (OS X 10.10.2)
 Key: SOLR-7067
 URL: https://issues.apache.org/jira/browse/SOLR-7067
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0, Trunk, 5.1
Reporter: Steve Rowe
Assignee: Steve Rowe
Priority: Blocker
 Fix For: 5.0, Trunk, 5.1


I upgraded to OS X Yosemite 10.10.2 today, and the bash version went from 
{{3.2.53(1)-release (x86_64-apple-darwin14)}} on 10.10.1 to {{4.3.30(1)-release 
(x86_64-apple-darwin14.0.0)}}.

When I try to run {{bin/solr}}, I get:

{noformat}
bin/solr: line 55: [: is: binary operator expected
bin/solr: line 58: [: is: binary operator expected
This script requires extracting a WAR file with either the jar or unzip 
utility, please install these utilities or contact your administrator for 
assistance.
{noformat}

the relevant section of the script is:

{code}
52: hasJar=$(which jar 2>/dev/null)
53: hasUnzip=$(which unzip 2>/dev/null)
54: 
55: if [ ${hasJar} ]; then
56:   unzipCommand="$hasJar xf"
57: else
58:   if [ ${hasUnzip} ]; then
59:     unzipCommand="$hasUnzip"
60:   else
61:     echo -e "This script requires extracting a WAR file with either the jar or unzip utility, please install these utilities or contact your administrator for assistance."
62:     exit 1
63:   fi
64: fi
{code}
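
The test errors on lines 55 and 58 are easy to reproduce in isolation: when an unquoted variable word-splits into several operands, {{[}} misparses them as an expression. A minimal sketch (the multi-word value is hypothetical, standing in for chatty {{which}} output):

```shell
# Reproduce the "binary operator expected" class of failure: an
# unquoted variable that expands to several words confuses `[`.
hasJarDemo="jar is /usr/bin/jar"   # hypothetical multi-word value

if [ "${hasJarDemo}" ]; then       # quoted: one non-empty operand
  echo "quoted test passes"
fi

if ! [ ${hasJarDemo} ] 2>/dev/null; then   # unquoted: three operands
  echo "unquoted test errors out"
fi
```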






[jira] [Updated] (SOLR-6919) Log REST info before executing

2015-01-30 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated SOLR-6919:

Attachment: SOLR-6919.patch

An example of the logging:

{noformat}
2997 T13 C0 oasc.SolrCore.execute DEBUG [collection1] webapp=null path=null 
params={q=*:*&wt=xml}
3037 T13 C0 oasc.SolrCore.execute [collection1] webapp=null path=null 
params={q=*:*&wt=xml} hits=0 status=0 QTime=41
{noformat}

I pulled this out of {{tests-report.txt}}, so the format might not be exactly 
the same as on a production system, but the content is mostly there. The first 
line is the one I added, which is logged before the query executes. The second 
line already exists and is logged after processing completes. The two lines 
are very similar.

One advantage of logging this locally in Solr is that it helps correlate 
events when troubleshooting. If several requests come in around the same time, 
it may not be clear which one caused issues if the only records are elsewhere 
(i.e. in container logs).

I've added a simple test to ensure that the logging occurs, but it might be a 
good idea to test with a more complicated query set to see if you get better 
results.

 Log REST info before executing
 --

 Key: SOLR-6919
 URL: https://issues.apache.org/jira/browse/SOLR-6919
 Project: Solr
  Issue Type: Improvement
  Components: search
Reporter: Mike Drob
Assignee: Gregory Chanan
Priority: Minor
 Attachments: SOLR-6919.patch, SOLR-6919.patch


 We should log request info (path, parameters, etc...) before attempting to 
 execute a query. This is helpful in cases where we get a bad query that 
 causes OOM or something else catastrophic, and are doing post-event triage.






[jira] [Updated] (SOLR-7067) bin/solr won't run under bash 4.3 (OS X 10.10.2)

2015-01-30 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-7067:
-
Attachment: SOLR-7067.patch

Patch addressing the issue.

I also took the opportunity to switch away from usage of {{which}} to discover 
whether executables exist ({{which}} is apparently not very portable), and 
instead used POSIX-compliant {{command -v}}.  See the first answer to this 
StackOverflow post: 
[http://stackoverflow.com/questions/592620/check-if-a-program-exists-from-a-bash-script].

Can somebody do a sanity check on OS X 10.10.1 and other Unix-ish platforms?  
I'll check on Debian (not sure what version I have ATM) in a minute.

I want to get this into 5.0.

 bin/solr won't run under bash 4.3 (OS X 10.10.2)
 

 Key: SOLR-7067
 URL: https://issues.apache.org/jira/browse/SOLR-7067
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0, Trunk, 5.1
Reporter: Steve Rowe
Assignee: Steve Rowe
Priority: Blocker
 Fix For: 5.0, Trunk, 5.1

 Attachments: SOLR-7067.patch


 I upgraded to OS X Yosemite 10.10.2 today, and the bash version went from 
 {{3.2.53(1)-release (x86_64-apple-darwin14)}} on 10.10.1 to 
 {{4.3.30(1)-release (x86_64-apple-darwin14.0.0)}}.
 When I try to run {{bin/solr}}, I get:
 {noformat}
 bin/solr: line 55: [: is: binary operator expected
 bin/solr: line 58: [: is: binary operator expected
 This script requires extracting a WAR file with either the jar or unzip 
 utility, please install these utilities or contact your administrator for 
 assistance.
 {noformat}
 the relevant section of the script is:
 {code}
 52: hasJar=$(which jar 2>/dev/null)
 53: hasUnzip=$(which unzip 2>/dev/null)
 54: 
 55: if [ ${hasJar} ]; then
 56:   unzipCommand="$hasJar xf"
 57: else
 58:   if [ ${hasUnzip} ]; then
 59:     unzipCommand="$hasUnzip"
 60:   else
 61:     echo -e "This script requires extracting a WAR file with either the jar or unzip utility, please install these utilities or contact your administrator for assistance."
 62:     exit 1
 63:   fi
 64: fi
 {code}






[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_31) - Build # 11716 - Failure!

2015-01-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11716/
Java: 32bit/jdk1.8.0_31 -client -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CloudExitableDirectoryReaderTest.test

Error Message:
No live SolrServers available to handle this 
request:[http://127.0.0.1:56677/collection1]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://127.0.0.1:56677/collection1]
at 
__randomizedtesting.SeedInfo.seed([4A7CB53287CA2DD5:C2288AE82936402D]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:349)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1009)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:787)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:730)
at 
org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:309)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.queryServer(AbstractFullDistribZkTestBase.java:1384)
at 
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.assertPartialResults(CloudExitableDirectoryReaderTest.java:102)
at 
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.doTimeoutTests(CloudExitableDirectoryReaderTest.java:74)
at 
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.test(CloudExitableDirectoryReaderTest.java:53)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:940)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:915)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
  

[jira] [Created] (SOLR-7066) autoAddReplicas feature has bug when selecting replacement nodes.

2015-01-30 Thread Mark Miller (JIRA)
Mark Miller created SOLR-7066:
-

 Summary: autoAddReplicas feature has bug when selecting 
replacement nodes.
 Key: SOLR-7066
 URL: https://issues.apache.org/jira/browse/SOLR-7066
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: Trunk, 5.1


I was comparing to some of the work I have for this downstream, and I missed 
pulling in some improvements - additional testing, debug logging, and a bug fix 
around selecting a replacement node.






[jira] [Comment Edited] (SOLR-7065) Let a replica become the leader regardless of it's last published state if all replicas participate in the election process.

2015-01-30 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14299026#comment-14299026
 ] 

Erick Erickson edited comment on SOLR-7065 at 1/30/15 6:52 PM:
---

Yeah, I started to take a whack at it at one point, basically taking control of 
the ordering of the election queue, but abandoned it due to time constraints. 
One problem is that we're bastardizing the whole ephemeral election process in 
ZK and resorting to tie-breaker code that does things like "find the next guy 
and jump down two, unless you're within the first two of the head, in which 
case do nothing". And the sorting is sensitive to the session ID, to boot. 

The TestRebalanceLeaders code exercises the shard leader election, we can see 
if we can extend it. I'm not sure how robust it is when nodes are flaky.

You mentioned at one point that you wondered whether the whole "watch the guy 
in front" approach with ZK's ephemeral-sequential nodes was the right way to 
go. The hack I started still used that mechanism, just took better control of 
how nodes were inserted into the leader election queue, so I don't think that 
approach really addresses why this has spun out of control.

I really wonder if we should change the mechanism. It seems to me that the 
fundamental fragility (apart from how hard the code is to understand) is that 
if the sequence of who watches which ephemeral node somehow gets out of whack, 
there is no mechanism for letting the _other_ nodes in the queue know that 
there's a problem that needs to be sorted out which can result in no leaders I 
assume. Certainly happened often enough to me.

I wonder if tying leader election into ZK state changes rather than watching 
the ephemeral election node-in-front is a better way?

This has _not_ been thought out, but what about something like:

Solr gets a notification of a state change from ZK and drops into the "should 
I be leader" code, which gets significantly less complex.
  -1 If I'm not active, ??? Probably just return assuming the next state 
change will re-trigger this code.
  0 If I'm not in the election queue, put myself at the tail. (handles 
mysterious out-of-whack situations)
  1 If there is a leader and it's active, return. (if it's in the middle of 
going down, we should get another state change when it's down, right?)
  2a If some other node is both active and the preferred leader return (again 
depending on a state change message if that node goes down to get back to this 
code)
  2b If I'm the preferred leader, take over leadership.
  3 If any other node in the leader election queue in front of me is active, 
return (state change gets us back here if those nodes are going down).
  4 take over leadership.
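The decision steps above can be sketched as a simple procedure (all names here are hypothetical illustrations, not actual Solr code; the boolean inputs stand in for lookups against the ZK cluster state):

```java
// Hedged sketch of the proposed state-change decision procedure, invoked on
// each ZK state-change notification a replica receives.
public class LeaderDecision {

    /** Returns the action a replica should take after a ZK state change. */
    public static String decide(boolean meActive, boolean inQueue,
                                boolean leaderActive, boolean otherPreferredActive,
                                boolean iAmPreferred, boolean activeNodeAhead) {
        if (!meActive) return "return";                 // -1: wait for the next state change
        if (!inQueue) return "join-tail";               //  0: re-insert at the tail of the queue
        if (leaderActive) return "return";              //  1: a healthy leader already exists
        if (otherPreferredActive) return "return";      // 2a: defer to the active preferred leader
        if (iAmPreferred) return "take-leadership";     // 2b: I am the preferred leader
        if (activeNodeAhead) return "return";           //  3: someone ahead of me is active
        return "take-leadership";                       //  4: nobody better placed; take over
    }

    public static void main(String[] args) {
        // No active leader, no preferred leader, nobody active ahead of us:
        System.out.println(decide(true, true, false, false, false, false)); // prints "take-leadership"
    }
}
```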

Since this operates off of state changes to ZK, it seems like it gives us the 
chance to recover from weird situations. I don't _think_ it increases traffic, 
don't all ZK state changes have to go to all nodes anyway?

I'm not sure in this case whether we even need a leader election queue at all. 
Is the clusterstate any less robust than the election queue? Even if it would 
be just as good, I'm not sure how you'd express "the node in front". Actually, 
a simple counter property in the state for each replica might do it: you'd 
set it to one more than any other node in the collection when a node changed 
its state to active. I'll freely admit, though, you've seen a lot more in the 
weeds here than I have, so I'll defer to your experience.

Anyway, let's kick the tires of what's to be done, maybe we can tag-team this. 
I consider the above just a jumping-off point to tame this beast. Be glad to 
chat if you or anyone else wants to kick it around...

One thing I'm not real clear on is how up-to-date the ZK cluster state is. 
Since changing the state is done through the Overseer, how to insure that the 
state is current when making decisions?



[jira] [Commented] (SOLR-7065) Let a replica become the leader regardless of it's last published state if all replicas participate in the election process.

2015-01-30 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14299052#comment-14299052
 ] 

Mark Miller commented on SOLR-7065:
---

bq. You mentioned at one point that you wondered whether the whole "watch the 
guy in front" approach with ZK's ephemeral-sequential nodes was the right way 
to go.

Right. This entire approach is an elegant ZooKeeper recipe that is actually 
quite difficult to program perfectly. Its point is to prevent a thundering-herd 
problem when you have tons of nodes involved in the election - with simpler 
approaches, if a leader goes down, everyone in the election can end up checking 
the same nodes to see what has changed, and that can cause problems. Except 
that you never have more than a handful of replicas. Even 20 replicas is kind 
of crazy, and it's still not even close to a herd.

This elegant solution is hard to nail, hard to test properly, and, as can be 
seen, not very good for dealing with priorities and altering the election line.

A very simple solution that involves the overseer or optimistic locking / 
writing would be much, much simpler for reordering the election.
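The optimistic-write idea can be sketched with an in-memory compare-and-set standing in for a conditional (versioned) ZooKeeper znode write. Everything here is a hypothetical illustration, not actual Solr or ZooKeeper client code:

```java
// Hedged sketch: whichever replica's compare-and-set succeeds becomes leader,
// mirroring a create/setData call against ZK with an expected version.
import java.util.concurrent.atomic.AtomicReference;

public class OptimisticLeaderClaim {
    // Stand-in for the leader znode; null means "no leader recorded".
    static final AtomicReference<String> leader = new AtomicReference<>(null);

    /** A replica claims leadership only if no leader is currently recorded. */
    static boolean tryClaim(String replica) {
        return leader.compareAndSet(null, replica);
    }

    public static void main(String[] args) {
        System.out.println(tryClaim("replica1")); // prints "true": first claim wins
        System.out.println(tryClaim("replica2")); // prints "false": a leader is already set
    }
}
```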

 Let a replica become the leader regardless of it's last published state if 
 all replicas participate in the election process.
 

 Key: SOLR-7065
 URL: https://issues.apache.org/jira/browse/SOLR-7065
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller
 Attachments: SOLR-7065.patch, SOLR-7065.patch









[jira] [Updated] (SOLR-7066) autoAddReplicas feature has bug when selecting replacement nodes.

2015-01-30 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-7066:
--
Attachment: SOLR-7066.patch

First patch attached.

 autoAddReplicas feature has bug when selecting replacement nodes.
 -

 Key: SOLR-7066
 URL: https://issues.apache.org/jira/browse/SOLR-7066
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: Trunk, 5.1

 Attachments: SOLR-7066.patch


 I was comparing to some of the work I have for this downstream, and I missed 
 pulling in some improvements - additional testing, debug logging, and a bug 
 fix around selecting a replacement node.






[jira] [Commented] (SOLR-6880) ZKStateReader makes a call to updateWatchedCollection, which doesn't accept null with a method creating the argument that can return null.

2015-01-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14299040#comment-14299040
 ] 

ASF subversion and git services commented on SOLR-6880:
---

Commit 1656090 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1656090 ]

SOLR-6880: Remove this assert that can fail tests.

 ZKStateReader makes a call to updateWatchedCollection, which doesn't accept 
 null with a method creating the argument that can return null.
 --

 Key: SOLR-6880
 URL: https://issues.apache.org/jira/browse/SOLR-6880
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-6880.patch, SOLR-6880.patch


 I've seen the resulting NPE in tests.






[jira] [Commented] (SOLR-6880) ZKStateReader makes a call to updateWatchedCollection, which doesn't accept null with a method creating the argument that can return null.

2015-01-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14299043#comment-14299043
 ] 

ASF subversion and git services commented on SOLR-6880:
---

Commit 1656091 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1656091 ]

SOLR-6880: Remove this assert that can fail tests.

 ZKStateReader makes a call to updateWatchedCollection, which doesn't accept 
 null with a method creating the argument that can return null.
 --

 Key: SOLR-6880
 URL: https://issues.apache.org/jira/browse/SOLR-6880
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-6880.patch, SOLR-6880.patch


 I've seen the resulting NPE in tests.






[jira] [Commented] (SOLR-4446) exception swallowed, NPE created upon trouble getting JNDI connection

2015-01-30 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14299053#comment-14299053
 ] 

Erick Erickson commented on SOLR-4446:
--

Anyone is free to submit a patch; simply create a login and attach it to this 
JIRA.

 exception swallowed, NPE created upon trouble getting JNDI connection
 -

 Key: SOLR-4446
 URL: https://issues.apache.org/jira/browse/SOLR-4446
 Project: Solr
  Issue Type: Bug
  Components: contrib - DataImportHandler
Affects Versions: 4.1
Reporter: Ken Geis

 I am trying to use a JNDI connection, but an error occurs getting the 
 connection. The error is swallowed and obscured by a NullPointerException. 
 See comments inline below.
 {code:title=JdbcDataSource.java}
 protected Callable<Connection> createConnectionFactory(final Context context,
     final Properties initProps) {
   ...
   final String jndiName = initProps.getProperty(JNDI_NAME);
   final String url = initProps.getProperty(URL); /* is null */
   final String driver = initProps.getProperty(DRIVER); /* is null */
   ...
   return factory = new Callable<Connection>() {
     @Override
     public Connection call() throws Exception {
       ...
       try {
         if (url != null) {
           c = DriverManager.getConnection(url, initProps);
         } else if (jndiName != null) {
           ...
           /* error occurs */
           ...
         }
       } catch (SQLException e) {
         /* exception handler assumes that try block followed the url != null
            path; in the JNDI case, driver is null, and
            DocBuilder.loadClass(..) throws a NPE */
         Driver d = (Driver) DocBuilder.loadClass(driver,
             context.getSolrCore()).newInstance();
         c = d.connect(url, initProps);
       }
       ...
     }
   };
 }
 {code}
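One possible fix direction for the masking described above, sketched as a small self-contained example (all names here are illustrative, not the actual JdbcDataSource code and not a Solr patch): when no url/driver pair was configured, rethrow the original exception instead of dereferencing the null driver name.

```java
// Hedged sketch: only fall back to loading a driver class when a url/driver
// pair exists; in the JNDI case, rethrow so the root cause is preserved.
import java.sql.SQLException;

public class JndiFallback {
    static String connect(String url, String driver) throws SQLException {
        try {
            // Simulate the JNDI lookup failing, as in the report above.
            throw new SQLException("JNDI lookup failed");
        } catch (SQLException e) {
            if (url != null && driver != null) {
                // Legitimate fallback: retry through the configured driver.
                return "retried via driver " + driver;
            }
            throw e; // JNDI case: preserve the original failure for the caller
        }
    }

    public static void main(String[] args) {
        try {
            connect(null, null);
        } catch (SQLException e) {
            System.out.println(e.getMessage()); // prints "JNDI lookup failed"
        }
    }
}
```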






[jira] [Commented] (SOLR-7065) Let a replica become the leader regardless of it's last published state if all replicas participate in the election process.

2015-01-30 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14299026#comment-14299026
 ] 

Erick Erickson commented on SOLR-7065:
--

Yeah, I started to take a whack at it at one point, basically taking control of 
the ordering of the election queue, but abandoned it due to time constraints. 
One problem is that we're bastardizing the whole ephemeral election process in 
ZK and resorting to tie-breaker code that does things like "find the next guy 
and jump down two, unless you're within the first two of the head, in which 
case do nothing". And the sorting is sensitive to the session ID, to boot. 

The TestRebalanceLeaders code exercises the shard leader election, we can see 
if we can extend it. I'm not sure how robust it is when nodes are flaky.

You mentioned at one point that you wondered whether the whole "watch the guy 
in front" approach with ZK's ephemeral-sequential nodes was the right way to 
go. The hack I started still used that mechanism, just took better control of 
how nodes were inserted into the leader election queue, so I don't think that 
approach really addresses why this has spun out of control.

I really wonder if we should change the mechanism. It seems to me that the 
fundamental fragility (apart from how hard the code is to understand) is that 
if the sequence of who watches which ephemeral node somehow gets out of whack, 
there is no mechanism for letting the _other_ nodes in the queue know that 
there's a problem that needs to be sorted out which can result in no leaders I 
assume. Certainly happened often enough to me.

I wonder if tying leader election into ZK state changes rather than watching 
the ephemeral election node-in-front is a better way?

This has _not_ been thought out, but what about something like:

Solr gets a notification of a state change from ZK and drops into the "should 
I be leader" code, which gets significantly less complex.
  -1 If I'm not active, ??? Probably just return assuming the next state 
change will re-trigger this code.
  0 If I'm not in the election queue, put myself at the tail. (handles 
mysterious out-of-whack situations)
  1 If there is a leader and it's active, return. (if it's in the middle of 
going down, we should get another state change when it's down, right?)
  2a If some other node is both active and the preferred leader return (again 
depending on a state change message if that node goes down to get back to this 
code)
  2b If I'm the preferred leader, take over leadership.
  3 If any other node in the leader election queue in front of me is active, 
return (state change gets us back here if those nodes are going down).
  4 take over leadership.

Since this operates off of state changes to ZK, it seems like it gives us the 
chance to recover from weird situations. I don't _think_ it increases traffic, 
don't all ZK state changes have to go to all nodes anyway?

I'm not sure in this case whether we even need a leader election queue at all. 
Is the clusterstate any less robust than the election queue? Even if it would 
be just as good, I'm not sure how you'd express "the node in front". Actually, 
a simple counter property in the state for each replica might do it: you'd 
set it to one more than any other node in the collection when a node changed 
its state to active. I'll freely admit, though, you've seen a lot more in the 
weeds here than I have, so I'll defer to your experience.

Anyway, let's kick the tires of what's to be done, maybe we can tag-team this. 
I consider the above just a jumping-off point to tame this beast. Be glad to 
chat if you or anyone else wants to kick it around...

 Let a replica become the leader regardless of it's last published state if 
 all replicas participate in the election process.
 

 Key: SOLR-7065
 URL: https://issues.apache.org/jira/browse/SOLR-7065
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller
 Attachments: SOLR-7065.patch, SOLR-7065.patch









[jira] [Commented] (LUCENE-6213) Add test for IndexFormatTooOldException if a commit has a 3.x segment

2015-01-30 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14299365#comment-14299365
 ] 

Robert Muir commented on LUCENE-6213:
-

I don't want to cause delays for 5.0, but this might be one to fix, as it will 
confuse users. First we need the test to know if it's really a bug, but I think 
it is.

 Add test for IndexFormatTooOldException if a commit has a 3.x segment
 -

 Key: LUCENE-6213
 URL: https://issues.apache.org/jira/browse/LUCENE-6213
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir

 We should add a 4.x index (4.x commit) with some 3.x segment(s) to our 
 backwards tests.
 I don't think we throw IndexFormatTooOldException correctly in this case. I 
 think instead the user will get a confusing SPI error about a missing codec 
 Lucene3x.






Re: Why would test timeout not kick in...?

2015-01-30 Thread Michael McCandless
Thanks Uwe & Dawid, next time I'll try to SIGHUP it.

I never knew about this -DSelfDestructTimer=30!

Mike McCandless

http://blog.mikemccandless.com


On Mon, Jan 26, 2015 at 3:21 PM, Dawid Weiss
dawid.we...@cs.put.poznan.pl wrote:
 If you encounter a situation like this, please stack dump the hung
 JVM, ok? (send a SIGHUP signal to it). If you can't provoke the JVM to
 dump its stack then it's very likely a JVM error. Otherwise I'll have
 something to debug.

 An alternative solution to force-kill a forked JVM (Oracle only) is to
 pass the magic switch to it:

 -DSelfDestructTimer=30

 The number is in minutes; from JDK sources:

  product(intx, SelfDestructTimer, 0,                                   \
    "Will cause VM to terminate after a given time (in minutes)         \
    (0 means off)")

 Dawid

 On Mon, Jan 26, 2015 at 8:42 PM, Uwe Schindler u...@thetaphi.de wrote:
 Hi,

 This happens in most cases under OOM situations. In that case the test 
 runner loses control and is unable to shut down. In this case it could be 
 something different, because you still see a test method in the hearbeat. 
 On OOM situations in most cases you see just the test case name and no 
 method. We have that quite often with Solr tests on Policeman Jenkins, too. 
 If you want to be sure that a build is aborted, you can also set a maximum 
 timeout for the whole build in Jenkins. Jenkins will then kill -9 the 
 whole process structure it launched. Please note that Jenkins measures the 
 whole build time, so give enough buffer.

 Uwe

 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de


 -Original Message-
 From: Michael McCandless [mailto:luc...@mikemccandless.com]
 Sent: Monday, January 26, 2015 7:10 PM
 To: Lucene/Solr dev
 Subject: Why would test timeout not kick in...?

 This test (TestCompressingTermVectorsFormat.testClone) just kept
 HEARTBEAT-ing for 2 days:

 http://build-eu-
 00.elasticsearch.org/job/lucene_linux_java8_64_test_only/26953/console

 The test class / super classes are not annotated with longer timeouts ...

 Shouldn't it have timed out at 7200 seconds?

 Mike McCandless

 http://blog.mikemccandless.com




[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_31) - Build # 4450 - Still Failing!

2015-01-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4450/
Java: 32bit/jdk1.8.0_31 -server -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.cloud.CloudExitableDirectoryReaderTest.test

Error Message:
No live SolrServers available to handle this 
request:[http://127.0.0.1:61271/collection1]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://127.0.0.1:61271/collection1]
at 
__randomizedtesting.SeedInfo.seed([4E3CE1D03C43E1AD:C668DE0A92BF8C55]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:349)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1009)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:787)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:730)
at 
org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:309)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.queryServer(AbstractFullDistribZkTestBase.java:1384)
at 
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.assertPartialResults(CloudExitableDirectoryReaderTest.java:102)
at 
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.doTimeoutTests(CloudExitableDirectoryReaderTest.java:86)
at 
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.test(CloudExitableDirectoryReaderTest.java:53)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:940)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:915)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
   

[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_31) - Build # 11714 - Failure!

2015-01-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11714/
Java: 32bit/jdk1.8.0_31 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
some core start times did not change on reload

Stack Trace:
java.lang.AssertionError: some core start times did not change on reload
at 
__randomizedtesting.SeedInfo.seed([70342C3DD4B34743:F86013E77A4F2ABB]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:741)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:191)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:940)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:915)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 

[jira] [Updated] (LUCENE-6213) Add test for IndexFormatTooOldException if a commit has a 3.x segment

2015-01-30 Thread Ryan Ernst (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Ernst updated LUCENE-6213:
---
Attachment: unsupported.4x-with-3x-segments.zip
LUCENE-6213.patch

Here is a quick and dirty patch (against branch_5x) that adds a bwc index test 
as you suggested, and a quick fix for the bug.  I like the idea of a dummy 
codec, but didn't have time to try it.

 Add test for IndexFormatTooOldException if a commit has a 3.x segment
 -

 Key: LUCENE-6213
 URL: https://issues.apache.org/jira/browse/LUCENE-6213
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-6213.patch, unsupported.4x-with-3x-segments.zip


 We should add a 4.x index (4.x commit) with some 3.x segment(s) to our 
 backwards tests.
 I don't think we throw IndexFormatTooOldException correctly in this case. I 
 think instead the user will get a confusing SPI error about a missing codec 
 Lucene3x.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6196) Include geo3d package, along with Lucene integration to make it useful

2015-01-30 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299491#comment-14299491
 ] 

Karl Wright commented on LUCENE-6196:
-

Problems have been fixed, and tests are reasonably comprehensive.


 Include geo3d package, along with Lucene integration to make it useful
 --

 Key: LUCENE-6196
 URL: https://issues.apache.org/jira/browse/LUCENE-6196
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/spatial
Reporter: Karl Wright
Assignee: David Smiley
 Attachments: ShapeImpl.java, geo3d-tests.zip, geo3d.zip


 I would like to explore contributing a geo3d package to Lucene.  This can be 
 used in conjunction with Lucene search, both for generating geohashes (via 
 spatial4j) for complex geographic shapes, as well as limiting results 
 resulting from those queries to those results within the exact shape in 
 highly performant ways.
 The package uses 3d planar geometry to do its magic, which basically limits 
 computation necessary to determine membership (once a shape has been 
 initialized, of course) to only multiplications and additions, which makes it 
 feasible to construct a performant BoostSource-based filter for geographic 
 shapes.  The math is somewhat more involved when generating geohashes, but is 
 still more than fast enough to do a good job.
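
 To make the "only multiplications and additions" claim concrete, here is a
 hypothetical Java sketch of the underlying idea (this is not the attached
 geo3d code; `PlaneMembershipSketch`, `Vec`, and `eval` are illustrative names):
 a plane n·p + D = 0 cut through the unit sphere classifies a point by the
 sign of a dot product, which costs three multiplies and three adds.

 ```java
 // Sketch of plane-sidedness membership, assuming points on a unit sphere.
 public class PlaneMembershipSketch {
     static final class Vec {
         final double x, y, z;
         Vec(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
     }

     // Signed evaluation of the plane (normal nx,ny,nz; offset d) at point p:
     // positive means p lies on the normal's side, negative the opposite side.
     static double eval(double nx, double ny, double nz, double d, Vec p) {
         return nx * p.x + ny * p.y + nz * p.z + d;
     }

     public static void main(String[] args) {
         // Equatorial plane z = 0: normal (0,0,1), offset 0.
         Vec north = new Vec(0, 0, 1);
         Vec south = new Vec(0, 0, -1);
         System.out.println(eval(0, 0, 1, 0, north) >= 0); // northern hemisphere
         System.out.println(eval(0, 0, 1, 0, south) >= 0); // southern hemisphere
     }
 }
 ```

 A shape bounded by several such planes needs only one sign test per plane once
 the planes are precomputed, which is why membership checks stay cheap.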



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6196) Include geo3d package, along with Lucene integration to make it useful

2015-01-30 Thread Karl Wright (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Wright updated LUCENE-6196:

Attachment: (was: geo3d-tests.zip)

 Include geo3d package, along with Lucene integration to make it useful
 --

 Key: LUCENE-6196
 URL: https://issues.apache.org/jira/browse/LUCENE-6196
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/spatial
Reporter: Karl Wright
Assignee: David Smiley
 Attachments: ShapeImpl.java, geo3d-tests.zip, geo3d.zip


 I would like to explore contributing a geo3d package to Lucene.  This can be 
 used in conjunction with Lucene search, both for generating geohashes (via 
 spatial4j) for complex geographic shapes, as well as limiting results 
 resulting from those queries to those results within the exact shape in 
 highly performant ways.
 The package uses 3d planar geometry to do its magic, which basically limits 
 computation necessary to determine membership (once a shape has been 
 initialized, of course) to only multiplications and additions, which makes it 
 feasible to construct a performant BoostSource-based filter for geographic 
 shapes.  The math is somewhat more involved when generating geohashes, but is 
 still more than fast enough to do a good job.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_31) - Build # 11717 - Still Failing!

2015-01-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11717/
Java: 32bit/jdk1.8.0_31 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.DeleteReplicaTest.deleteLiveReplicaTest

Error Message:
Should have had a good message here

Stack Trace:
java.lang.AssertionError: Should have had a good message here
at 
__randomizedtesting.SeedInfo.seed([F470183C1908E104:5910AC3704374971]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.DeleteReplicaTest.deleteLiveReplicaTest(DeleteReplicaTest.java:125)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:940)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:915)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Updated] (LUCENE-6196) Include geo3d package, along with Lucene integration to make it useful

2015-01-30 Thread Karl Wright (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Wright updated LUCENE-6196:

Attachment: (was: ShapeImpl.java)

 Include geo3d package, along with Lucene integration to make it useful
 --

 Key: LUCENE-6196
 URL: https://issues.apache.org/jira/browse/LUCENE-6196
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/spatial
Reporter: Karl Wright
Assignee: David Smiley
 Attachments: ShapeImpl.java, geo3d-tests.zip, geo3d.zip


 I would like to explore contributing a geo3d package to Lucene.  This can be 
 used in conjunction with Lucene search, both for generating geohashes (via 
 spatial4j) for complex geographic shapes, as well as limiting results 
 resulting from those queries to those results within the exact shape in 
 highly performant ways.
 The package uses 3d planar geometry to do its magic, which basically limits 
 computation necessary to determine membership (once a shape has been 
 initialized, of course) to only multiplications and additions, which makes it 
 feasible to construct a performant BoostSource-based filter for geographic 
 shapes.  The math is somewhat more involved when generating geohashes, but is 
 still more than fast enough to do a good job.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6196) Include geo3d package, along with Lucene integration to make it useful

2015-01-30 Thread Karl Wright (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Wright updated LUCENE-6196:

Attachment: ShapeImpl.java
geo3d-tests.zip
geo3d.zip

 Include geo3d package, along with Lucene integration to make it useful
 --

 Key: LUCENE-6196
 URL: https://issues.apache.org/jira/browse/LUCENE-6196
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/spatial
Reporter: Karl Wright
Assignee: David Smiley
 Attachments: ShapeImpl.java, ShapeImpl.java, geo3d-tests.zip, 
 geo3d-tests.zip, geo3d.zip


 I would like to explore contributing a geo3d package to Lucene.  This can be 
 used in conjunction with Lucene search, both for generating geohashes (via 
 spatial4j) for complex geographic shapes, as well as limiting results 
 resulting from those queries to those results within the exact shape in 
 highly performant ways.
 The package uses 3d planar geometry to do its magic, which basically limits 
 computation necessary to determine membership (once a shape has been 
 initialized, of course) to only multiplications and additions, which makes it 
 feasible to construct a performant BoostSource-based filter for geographic 
 shapes.  The math is somewhat more involved when generating geohashes, but is 
 still more than fast enough to do a good job.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6196) Include geo3d package, along with Lucene integration to make it useful

2015-01-30 Thread Karl Wright (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Wright updated LUCENE-6196:

Attachment: (was: geo3d.zip)

 Include geo3d package, along with Lucene integration to make it useful
 --

 Key: LUCENE-6196
 URL: https://issues.apache.org/jira/browse/LUCENE-6196
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/spatial
Reporter: Karl Wright
Assignee: David Smiley
 Attachments: ShapeImpl.java, ShapeImpl.java, geo3d-tests.zip, 
 geo3d-tests.zip, geo3d.zip


 I would like to explore contributing a geo3d package to Lucene.  This can be 
 used in conjunction with Lucene search, both for generating geohashes (via 
 spatial4j) for complex geographic shapes, as well as limiting results 
 resulting from those queries to those results within the exact shape in 
 highly performant ways.
 The package uses 3d planar geometry to do its magic, which basically limits 
 computation necessary to determine membership (once a shape has been 
 initialized, of course) to only multiplications and additions, which makes it 
 feasible to construct a performant BoostSource-based filter for geographic 
 shapes.  The math is somewhat more involved when generating geohashes, but is 
 still more than fast enough to do a good job.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7067) bin/solr won't run under bash 4.2+

2015-01-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299552#comment-14299552
 ] 

ASF subversion and git services commented on SOLR-7067:
---

Commit 1656137 from [~steve_rowe] in branch 'dev/branches/lucene_solr_5_0'
[ https://svn.apache.org/r1656137 ]

SOLR-7067: bin/solr won't run under bash 4.2+ (merged trunk r1656133)

 bin/solr won't run under bash 4.2+
 --

 Key: SOLR-7067
 URL: https://issues.apache.org/jira/browse/SOLR-7067
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0, Trunk, 5.1
Reporter: Steve Rowe
Assignee: Steve Rowe
Priority: Blocker
 Fix For: 5.0, Trunk, 5.1

 Attachments: SOLR-7067.patch, SOLR-7067.patch


 I upgraded to OS X Yosemite 10.10.2 today, and the bash version went from 
 {{3.2.53(1)-release (x86_64-apple-darwin14)}} on 10.10.1 to 
 {{4.3.30(1)-release (x86_64-apple-darwin14.0.0)}}.
 When I try to run {{bin/solr}}, I get:
 {noformat}
 bin/solr: line 55: [: is: binary operator expected
 bin/solr: line 58: [: is: binary operator expected
 This script requires extracting a WAR file with either the jar or unzip 
 utility, please install these utilities or contact your administrator for 
 assistance.
 {noformat}
 the relevant section of the script is:
 {code}
 52: hasJar=$(which jar 2>/dev/null)
 53: hasUnzip=$(which unzip 2>/dev/null)
 54: 
 55: if [ ${hasJar} ]; then
 56:   unzipCommand="$hasJar xf"
 57: else
 58:   if [ ${hasUnzip} ]; then
 59:     unzipCommand="$hasUnzip"
 60:   else
 61:     echo -e "This script requires extracting a WAR file with either the 
 jar or unzip utility, please install these utilities or contact your 
 administrator for assistance."
 62:     exit 1
 63:   fi
 64: fi
 {code}
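
 The "binary operator expected" failures are the classic unquoted-expansion
 bug: if `which` prints a multi-word answer (e.g. a "jar is ..." style
 message), the unquoted `[ ${hasJar} ]` word-splits into several arguments and
 `[` cannot parse them. A minimal sketch of the safe pattern, using a
 simulated multi-word value rather than a real `which` probe:

 ```shell
 # Simulated multi-word output of the kind that broke the unquoted test;
 # hasJar here is a stand-in, not the real probe from bin/solr.
 hasJar="jar is /usr/bin/jar"

 # Quoting the expansion and testing with -n keeps it a single argument,
 # so the check works whether the value is empty, one word, or many.
 if [ -n "${hasJar}" ]; then
   unzipCommand="jar xf"
 else
   unzipCommand="unzip"
 fi
 echo "${unzipCommand}"
 ```

 With the unquoted form, the same value would reproduce the reported
 {{[: is: binary operator expected}} error.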



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-7067) bin/solr won't run under bash 4.2+

2015-01-30 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved SOLR-7067.
--
Resolution: Fixed

Committed to trunk, branch_5x and lucene_solr_5_0.

 bin/solr won't run under bash 4.2+
 --

 Key: SOLR-7067
 URL: https://issues.apache.org/jira/browse/SOLR-7067
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0, Trunk, 5.1
Reporter: Steve Rowe
Assignee: Steve Rowe
Priority: Blocker
 Fix For: 5.0, Trunk, 5.1

 Attachments: SOLR-7067.patch, SOLR-7067.patch


 I upgraded to OS X Yosemite 10.10.2 today, and the bash version went from 
 {{3.2.53(1)-release (x86_64-apple-darwin14)}} on 10.10.1 to 
 {{4.3.30(1)-release (x86_64-apple-darwin14.0.0)}}.
 When I try to run {{bin/solr}}, I get:
 {noformat}
 bin/solr: line 55: [: is: binary operator expected
 bin/solr: line 58: [: is: binary operator expected
 This script requires extracting a WAR file with either the jar or unzip 
 utility, please install these utilities or contact your administrator for 
 assistance.
 {noformat}
 the relevant section of the script is:
 {code}
 52: hasJar=$(which jar 2>/dev/null)
 53: hasUnzip=$(which unzip 2>/dev/null)
 54: 
 55: if [ ${hasJar} ]; then
 56:   unzipCommand="$hasJar xf"
 57: else
 58:   if [ ${hasUnzip} ]; then
 59:     unzipCommand="$hasUnzip"
 60:   else
 61:     echo -e "This script requires extracting a WAR file with either the 
 jar or unzip utility, please install these utilities or contact your 
 administrator for assistance."
 62:     exit 1
 63:   fi
 64: fi
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7067) bin/solr won't run under bash 4.2+

2015-01-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299551#comment-14299551
 ] 

ASF subversion and git services commented on SOLR-7067:
---

Commit 1656136 from [~steve_rowe] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1656136 ]

SOLR-7067: bin/solr won't run under bash 4.2+ (merged trunk r1656133)

 bin/solr won't run under bash 4.2+
 --

 Key: SOLR-7067
 URL: https://issues.apache.org/jira/browse/SOLR-7067
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0, Trunk, 5.1
Reporter: Steve Rowe
Assignee: Steve Rowe
Priority: Blocker
 Fix For: 5.0, Trunk, 5.1

 Attachments: SOLR-7067.patch, SOLR-7067.patch


 I upgraded to OS X Yosemite 10.10.2 today, and the bash version went from 
 {{3.2.53(1)-release (x86_64-apple-darwin14)}} on 10.10.1 to 
 {{4.3.30(1)-release (x86_64-apple-darwin14.0.0)}}.
 When I try to run {{bin/solr}}, I get:
 {noformat}
 bin/solr: line 55: [: is: binary operator expected
 bin/solr: line 58: [: is: binary operator expected
 This script requires extracting a WAR file with either the jar or unzip 
 utility, please install these utilities or contact your administrator for 
 assistance.
 {noformat}
 the relevant section of the script is:
 {code}
 52: hasJar=$(which jar 2>/dev/null)
 53: hasUnzip=$(which unzip 2>/dev/null)
 54: 
 55: if [ ${hasJar} ]; then
 56:   unzipCommand="$hasJar xf"
 57: else
 58:   if [ ${hasUnzip} ]; then
 59:     unzipCommand="$hasUnzip"
 60:   else
 61:     echo -e "This script requires extracting a WAR file with either the 
 jar or unzip utility, please install these utilities or contact your 
 administrator for assistance."
 62:     exit 1
 63:   fi
 64: fi
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7067) bin/solr won't run under bash 4.2+

2015-01-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299535#comment-14299535
 ] 

ASF subversion and git services commented on SOLR-7067:
---

Commit 1656133 from [~steve_rowe] in branch 'dev/trunk'
[ https://svn.apache.org/r1656133 ]

SOLR-7067: bin/solr won't run under bash 4.2+

 bin/solr won't run under bash 4.2+
 --

 Key: SOLR-7067
 URL: https://issues.apache.org/jira/browse/SOLR-7067
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0, Trunk, 5.1
Reporter: Steve Rowe
Assignee: Steve Rowe
Priority: Blocker
 Fix For: 5.0, Trunk, 5.1

 Attachments: SOLR-7067.patch, SOLR-7067.patch


 I upgraded to OS X Yosemite 10.10.2 today, and the bash version went from 
 {{3.2.53(1)-release (x86_64-apple-darwin14)}} on 10.10.1 to 
 {{4.3.30(1)-release (x86_64-apple-darwin14.0.0)}}.
 When I try to run {{bin/solr}}, I get:
 {noformat}
 bin/solr: line 55: [: is: binary operator expected
 bin/solr: line 58: [: is: binary operator expected
 This script requires extracting a WAR file with either the jar or unzip 
 utility, please install these utilities or contact your administrator for 
 assistance.
 {noformat}
 the relevant section of the script is:
 {code}
 52: hasJar=$(which jar 2/dev/null)
 53: hasUnzip=$(which unzip 2/dev/null)
 54: 
 55: if [ ${hasJar} ]; then
 56:   unzipCommand=$hasJar xf
 57: else
 58:   if [ ${hasUnzip} ]; then
 59: unzipCommand=$hasUnzip
 60:   else
 61: echo -e This script requires extracting a WAR file with either the 
 jar or unzip utility, please install these utilities or contact your 
 administrator for assistance.
 62: exit 1
 63:   fi
 64: fi
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 1961 - Failure!

2015-01-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1961/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestRebalanceLeaders.test

Error Message:
No live SolrServers available to handle this 
request:[http://127.0.0.1:55046/a_gj, http://127.0.0.1:55055/a_gj, 
http://127.0.0.1:55052/a_gj, http://127.0.0.1:55049/a_gj, 
http://127.0.0.1:55041/a_gj]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://127.0.0.1:55046/a_gj, 
http://127.0.0.1:55055/a_gj, http://127.0.0.1:55052/a_gj, 
http://127.0.0.1:55049/a_gj, http://127.0.0.1:55041/a_gj]
at 
__randomizedtesting.SeedInfo.seed([8CE1A5B33F51879A:4B59A6991ADEA62]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:349)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1009)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:787)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:730)
at 
org.apache.solr.cloud.TestRebalanceLeaders.issueCommands(TestRebalanceLeaders.java:280)
at 
org.apache.solr.cloud.TestRebalanceLeaders.rebalanceLeaderTest(TestRebalanceLeaders.java:107)
at 
org.apache.solr.cloud.TestRebalanceLeaders.test(TestRebalanceLeaders.java:73)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:940)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:915)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 

[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.7.0_80-ea-b05) - Build # 11553 - Failure!

2015-01-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11553/
Java: 32bit/jdk1.7.0_80-ea-b05 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
some core start times did not change on reload

Stack Trace:
java.lang.AssertionError: some core start times did not change on reload
at 
__randomizedtesting.SeedInfo.seed([7B6EDD4DA5C6FCE7:F33AE2970B3A911F]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:741)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:191)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:940)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:915)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 

[jira] [Commented] (SOLR-7067) bin/solr won't run under bash 4.3 (OS X 10.10.2)

2015-01-30 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299465#comment-14299465
 ] 

Steve Rowe commented on SOLR-7067:
--

Without the patch I see the same failure on Debian 7.8 - bash version 
{{4.2.37(1)-release (x86_64-pc-linux-gnu)}}.

The updated patch allows {{bin/solr}} to run for me on Debian under bash 4.2.

Committing shortly.

 bin/solr won't run under bash 4.3 (OS X 10.10.2)
 

 Key: SOLR-7067
 URL: https://issues.apache.org/jira/browse/SOLR-7067
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0, Trunk, 5.1
Reporter: Steve Rowe
Assignee: Steve Rowe
Priority: Blocker
 Fix For: 5.0, Trunk, 5.1

 Attachments: SOLR-7067.patch, SOLR-7067.patch


 I upgraded to OS X Yosemite 10.10.2 today, and the bash version went from 
 {{3.2.53(1)-release (x86_64-apple-darwin14)}} on 10.10.1 to 
 {{4.3.30(1)-release (x86_64-apple-darwin14.0.0)}}.
 When I try to run {{bin/solr}}, I get:
 {noformat}
 bin/solr: line 55: [: is: binary operator expected
 bin/solr: line 58: [: is: binary operator expected
 This script requires extracting a WAR file with either the jar or unzip 
 utility, please install these utilities or contact your administrator for 
 assistance.
 {noformat}
 the relevant section of the script is:
 {code}
 52: hasJar=$(which jar 2>/dev/null)
 53: hasUnzip=$(which unzip 2>/dev/null)
 54: 
 55: if [ ${hasJar} ]; then
 56:   unzipCommand="$hasJar xf"
 57: else
 58:   if [ ${hasUnzip} ]; then
 59:     unzipCommand="$hasUnzip"
 60:   else
 61:     echo -e "This script requires extracting a WAR file with either the 
 jar or unzip utility, please install these utilities or contact your 
 administrator for assistance."
 62:     exit 1
 63:   fi
 64: fi
 {code}
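The failure can be reproduced in miniature. The sketch below is not the committed patch; it assumes a hypothetical multi-word value such as `jar is /usr/bin/jar` lands in the variable, which makes the unquoted `[ ${hasJar} ]` word-split into `[ jar is /usr/bin/jar ]` and trip the "binary operator expected" error, while a quoted `-n` test stays robust:

```shell
#!/bin/bash
# Sketch (not the actual SOLR-7067 patch): why the unquoted test breaks
# when the probed value contains several words, and a quoting fix.
# "jar is /usr/bin/jar" is a hypothetical multi-word `which`-style output.

check_unquoted() {
  local hasJar=$1
  if [ ${hasJar} ] 2>/dev/null; then   # word-splits: `[ jar is /usr/bin/jar ]`
    echo "ok"
  else
    echo "broken"                      # `[: is: binary operator expected`
  fi
}

check_quoted() {
  local hasJar=$1
  if [ -n "${hasJar}" ]; then          # quoted, explicit non-empty test
    echo "ok"
  else
    echo "empty"
  fi
}

check_unquoted "jar is /usr/bin/jar"   # prints: broken
check_quoted   "jar is /usr/bin/jar"   # prints: ok
```

The same quoting discipline (`[ -n "${var}" ]` instead of `[ ${var} ]`) fixes both the line 55 and line 58 tests.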



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7067) bin/solr won't run under bash 4.2+

2015-01-30 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-7067:
-
Summary: bin/solr won't run under bash 4.2+  (was: bin/solr won't run under 
bash 4.3 (OS X 10.10.2))

 bin/solr won't run under bash 4.2+
 --

 Key: SOLR-7067
 URL: https://issues.apache.org/jira/browse/SOLR-7067
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0, Trunk, 5.1
Reporter: Steve Rowe
Assignee: Steve Rowe
Priority: Blocker
 Fix For: 5.0, Trunk, 5.1

 Attachments: SOLR-7067.patch, SOLR-7067.patch


 I upgraded to OS X Yosemite 10.10.2 today, and the bash version went from 
 {{3.2.53(1)-release (x86_64-apple-darwin14)}} on 10.10.1 to 
 {{4.3.30(1)-release (x86_64-apple-darwin14.0.0)}}.
 When I try to run {{bin/solr}}, I get:
 {noformat}
 bin/solr: line 55: [: is: binary operator expected
 bin/solr: line 58: [: is: binary operator expected
 This script requires extracting a WAR file with either the jar or unzip 
 utility, please install these utilities or contact your administrator for 
 assistance.
 {noformat}
 the relevant section of the script is:
 {code}
 52: hasJar=$(which jar 2>/dev/null)
 53: hasUnzip=$(which unzip 2>/dev/null)
 54: 
 55: if [ ${hasJar} ]; then
 56:   unzipCommand="$hasJar xf"
 57: else
 58:   if [ ${hasUnzip} ]; then
 59:     unzipCommand="$hasUnzip"
 60:   else
 61:     echo -e "This script requires extracting a WAR file with either the 
 jar or unzip utility, please install these utilities or contact your 
 administrator for assistance."
 62:     exit 1
 63:   fi
 64: fi
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.0-Linux (32bit/jdk1.9.0-ea-b47) - Build # 75 - Failure!

2015-01-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.0-Linux/75/
Java: 32bit/jdk1.9.0-ea-b47 -client -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.handler.TestBlobHandler.testDistribSearch

Error Message:
{responseHeader={status=0, QTime=0}, response={numFound=0, start=0, docs=[]}}

Stack Trace:
java.lang.AssertionError: {responseHeader={status=0, QTime=0}, 
response={numFound=0, start=0, docs=[]}}
at 
__randomizedtesting.SeedInfo.seed([70D05ED4FAD29D2E:F136D0CC8D8DFD12]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.handler.TestBlobHandler.doBlobHandlerTest(TestBlobHandler.java:96)
at 
org.apache.solr.handler.TestBlobHandler.doTest(TestBlobHandler.java:200)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:878)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_31) - Build # 4451 - Still Failing!

2015-01-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4451/
Java: 64bit/jdk1.8.0_31 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ReplicationFactorTest.test

Error Message:
org.apache.http.NoHttpResponseException: The target server failed to respond

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.http.NoHttpResponseException: The target server failed to respond
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:865)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:730)
at 
org.apache.solr.cloud.ReplicationFactorTest.testRf3(ReplicationFactorTest.java:284)
at 
org.apache.solr.cloud.ReplicationFactorTest.test(ReplicationFactorTest.java:112)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:940)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:915)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 

[JENKINS] Lucene-Solr-5.x-Windows (64bit/jdk1.8.0_31) - Build # 4347 - Still Failing!

2015-01-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4347/
Java: 64bit/jdk1.8.0_31 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.ReplicationFactorTest.test

Error Message:
org.apache.http.NoHttpResponseException: The target server failed to respond

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.http.NoHttpResponseException: The target server failed to respond
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:865)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:730)
at 
org.apache.solr.cloud.ReplicationFactorTest.testRf3(ReplicationFactorTest.java:284)
at 
org.apache.solr.cloud.ReplicationFactorTest.test(ReplicationFactorTest.java:112)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:940)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:915)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
  

[jira] [Commented] (SOLR-7059) Using paramset with multi-valued keys leads to a 500

2015-01-30 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299713#comment-14299713
 ] 

Noble Paul commented on SOLR-7059:
--

I guess we should just make it a {{Map<String,Object>}}.

 Using paramset with multi-valued keys leads to a 500
 

 Key: SOLR-7059
 URL: https://issues.apache.org/jira/browse/SOLR-7059
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.0
Reporter: Anshum Gupta
Assignee: Noble Paul
 Fix For: 5.0, Trunk, 5.1


 Here's my use case:
 I wanted to use param-sets to have {{facet.field=field1&facet.field=field2}}
 For the same, here is what I updated:
 {code}
 curl http://localhost:8983/solr/bike/config/params -H 
 'Content-type:application/json' -d 
 '{
   "set" : { 
     "facets" : {
       "facet.field":["start_station_name","end_station_name"]
     }
   }
 }'
 {code}
 When I tried to use the same, I got a 500.
 After looking at the code, it seems like RequestParams uses MapSolrParams, 
 which banks on a Map<String,String> map.
 This would need to change to support multi-values.
 I also tried sending:
 {code}
 solr-5.0.0-SNAPSHOT  curl http://localhost:8983/solr/bike/config/params -H 
 'Content-type:application/json' -d '{"update" : { "facets" : 
 {"facet.field":"start_station_name","facet.field":"end_station_name"}}}'
 {code}
 This overwrote the value of facet.field with the last seen/parsed value, i.e. 
 there was only one value in the end. This is expected, as that's noggit's 
 behavior: it doesn't complain about duplicate keys and just overwrites the 
 previous value for the same key.
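The type limitation can be illustrated in plain Java (a schematic sketch, not Solr's actual RequestParams/MapSolrParams classes): a {{Map<String,String>}} keeps only one value per key, while a {{Map<String,Object>}} can store a {{List<String>}} for multi-valued params.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Schematic illustration (plain Java, not Solr's classes) of the problem:
// Map<String,String> collapses multi-valued keys; Map<String,Object>
// can hold either a String or a List<String> per key.
public class ParamSetSketch {
    static Map<String, String> buildSingleValued() {
        Map<String, String> params = new HashMap<>();
        params.put("facet.field", "start_station_name");
        params.put("facet.field", "end_station_name"); // overwrites the first
        return params;
    }

    static Map<String, Object> buildMultiValued() {
        Map<String, Object> params = new HashMap<>();
        List<String> fields = new ArrayList<>();
        fields.add("start_station_name");
        fields.add("end_station_name");
        params.put("facet.field", fields); // both values preserved
        return params;
    }

    public static void main(String[] args) {
        System.out.println(buildSingleValued()); // one value left
        System.out.println(buildMultiValued());  // list with both values
    }
}
```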



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.0-Linux (64bit/ibm-j9-jdk7) - Build # 72 - Failure!

2015-01-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.0-Linux/72/
Java: 64bit/ibm-j9-jdk7 
-Xjit:exclude={org/apache/lucene/util/fst/FST.pack(IIF)Lorg/apache/lucene/util/fst/FST;}

1 tests failed.
FAILED:  org.apache.solr.cloud.SaslZkACLProviderTest.testSaslZkACLProvider

Error Message:
Could not get the port for ZooKeeper server

Stack Trace:
java.lang.RuntimeException: Could not get the port for ZooKeeper server
at org.apache.solr.cloud.ZkTestServer.run(ZkTestServer.java:482)
at 
org.apache.solr.cloud.SaslZkACLProviderTest$SaslZkTestServer.run(SaslZkACLProviderTest.java:206)
at 
org.apache.solr.cloud.SaslZkACLProviderTest.setUp(SaslZkACLProviderTest.java:74)
at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:619)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:861)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:853)




Build Log:
[...truncated 10324 lines...]
   [junit4] Suite: org.apache.solr.cloud.SaslZkACLProviderTest
   [junit4]   2 Creating dataDir: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-5.0-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.SaslZkACLProviderTest
 

[jira] [Commented] (LUCENE-6213) Add test for IndexFormatTooOldException if a commit has a 3.x segment

2015-01-30 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299376#comment-14299376
 ] 

Robert Muir commented on LUCENE-6213:
-

One way to fix this would be to have a TooOldCodec that throws 
IndexFormatTooOldException from every method. We could register it in SPI under 
the names of codecs we no longer support. 

So in trunk, it would be registered for all the 4.x codecs for example.

When SegmentInfos asks the codec for the segmentInfoWriter() when decoding the 
commit, the user will get the correct exception. Alternatively we could just 
have a hardcoded list/map and conditional logic in SegmentInfos for this.
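The registry idea can be sketched in plain Java (a schematic sketch only, not Lucene's real Codec/SPI machinery; the class and method names below are hypothetical): retired codec names map to a handler whose every use throws a "too old" exception, so a lookup during commit decoding fails with the right error instead of a confusing missing-SPI message.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Schematic sketch of the proposal (not Lucene's actual API): register
// the names of dropped codecs against a handler that always throws, so
// decoding an old commit surfaces a clear "format too old" exception.
public class TooOldCodecSketch {
    static class IndexFormatTooOldError extends RuntimeException {
        IndexFormatTooOldError(String name) {
            super("Format '" + name + "' is too old and no longer supported");
        }
    }

    static final Map<String, Supplier<Object>> CODECS = new HashMap<>();
    static {
        CODECS.put("Lucene3x", () -> { throw new IndexFormatTooOldError("Lucene3x"); });
        // on trunk, the same registration would cover each retired 4.x codec name
    }

    static Object lookup(String name) {
        Supplier<Object> codec = CODECS.get(name);
        if (codec == null) throw new IllegalArgumentException("unknown codec: " + name);
        return codec.get(); // throws IndexFormatTooOldError for retired names
    }
}
```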

 Add test for IndexFormatTooOldException if a commit has a 3.x segment
 -

 Key: LUCENE-6213
 URL: https://issues.apache.org/jira/browse/LUCENE-6213
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir

 We should add a 4.x index (4.x commit) with some 3.x segment(s) to our 
 backwards tests.
 I don't think we throw IndexFormatTooOldException correctly in this case. I 
 think instead the user will get a confusing SPI error about a missing codec 
 Lucene3x.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6212) Remove IndexWriter's per-document analyzer add/updateDocument APIs

2015-01-30 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-6212:
---
Attachment: LUCENE-6212.patch

Patch, I think it's ready.

 Remove IndexWriter's per-document analyzer add/updateDocument APIs
 --

 Key: LUCENE-6212
 URL: https://issues.apache.org/jira/browse/LUCENE-6212
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 5.0, Trunk

 Attachments: LUCENE-6212.patch


 IndexWriter already takes an analyzer up-front (via
 IndexWriterConfig), but it also allows you to specify a different one
 for each add/updateDocument.
 I think this is quite dangerous/trappy since it means you can easily
 index tokens for that document that don't match at search-time based
 on the search-time analyzer.
 I think we should remove this trap in 5.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6212) Remove IndexWriter's per-document analyzer add/updateDocument APIs

2015-01-30 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14298686#comment-14298686
 ] 

Uwe Schindler commented on LUCENE-6212:
---

+1 to get this in 5.0

 Remove IndexWriter's per-document analyzer add/updateDocument APIs
 --

 Key: LUCENE-6212
 URL: https://issues.apache.org/jira/browse/LUCENE-6212
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 5.0, Trunk

 Attachments: LUCENE-6212.patch


 IndexWriter already takes an analyzer up-front (via
 IndexWriterConfig), but it also allows you to specify a different one
 for each add/updateDocument.
 I think this is quite dangerous/trappy since it means you can easily
 index tokens for that document that don't match at search-time based
 on the search-time analyzer.
 I think we should remove this trap in 5.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6212) Remove IndexWriter's per-document analyzer add/updateDocument APIs

2015-01-30 Thread Michael McCandless (JIRA)
Michael McCandless created LUCENE-6212:
--

 Summary: Remove IndexWriter's per-document analyzer 
add/updateDocument APIs
 Key: LUCENE-6212
 URL: https://issues.apache.org/jira/browse/LUCENE-6212
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 5.0, Trunk


IndexWriter already takes an analyzer up-front (via
IndexWriterConfig), but it also allows you to specify a different one
for each add/updateDocument.

I think this is quite dangerous/trappy since it means you can easily
index tokens for that document that don't match at search-time based
on the search-time analyzer.

I think we should remove this trap in 5.0.
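The trap can be illustrated without Lucene at all. Below is a minimal stdlib-only sketch (the `lowercaseAnalyzer`/`exactAnalyzer` methods are hypothetical stand-ins for mismatched index-time and search-time analyzers, not Lucene APIs):

```java
import java.util.*;

public class AnalyzerMismatch {
    // Stand-in for an index-time analyzer that lowercases tokens.
    static List<String> lowercaseAnalyzer(String text) {
        List<String> tokens = new ArrayList<>();
        for (String t : text.split("\\s+")) tokens.add(t.toLowerCase(Locale.ROOT));
        return tokens;
    }

    // Stand-in for a search-time analyzer that does NOT lowercase.
    static List<String> exactAnalyzer(String text) {
        return Arrays.asList(text.split("\\s+"));
    }

    // "Index" a document with one analyzer, then "search" with another.
    static boolean matches(String doc, String query) {
        Set<String> indexed = new HashSet<>(lowercaseAnalyzer(doc));
        return indexed.containsAll(exactAnalyzer(query));
    }

    public static void main(String[] args) {
        // The indexed token "lucene" never matches the search token "Lucene":
        System.out.println(matches("Apache Lucene", "Lucene")); // prints false
        System.out.println(matches("Apache Lucene", "lucene")); // prints true
    }
}
```

With a single analyzer fixed up-front in IndexWriterConfig, this class of silent mismatch cannot happen.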




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6209) IndexWriter should confess when it stalls flushes

2015-01-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14298698#comment-14298698
 ] 

ASF subversion and git services commented on LUCENE-6209:
-

Commit 1656029 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1656029 ]

LUCENE-6209: IndexWriter now logs (to infoStream) how much time flushing 
threads were stalled because of > 2X IW's RAM buffer in flush backlog

 IndexWriter should confess when it stalls flushes
 -

 Key: LUCENE-6209
 URL: https://issues.apache.org/jira/browse/LUCENE-6209
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6209.patch


 You tell IW how much RAM it's allowed to use to hold recently indexed 
 documents before they must be written to disk.
 IW is willing to use up to 2X that amount for in-progress flushes.
 If the in-progress flushes go over that limit, then IW will stall them, 
 hijacking indexing threads and having them wait until the in-progress flushes 
 are below 2X indexing buffer size again.
 This is important back-pressure e.g. if you are indexing on a machine with 
 many cores but slowish IO.
 Often when I profile an indexing heavy use case, even on fast IO (SSD) boxes, 
 I see the methods associated with this back-pressure taking unexpected time 
 ... yet IW never logs when it stalls/unstalls flushing.  I think it should.
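The back-pressure described above can be sketched in plain Java. This is an illustrative simulation of the "stall while in-progress flush bytes exceed 2X the RAM buffer" rule, not IndexWriter's actual implementation:

```java
// Illustrative simulation of IndexWriter's flush back-pressure: indexing
// threads are stalled while in-progress flush bytes exceed 2X the RAM buffer.
public class FlushBackPressure {
    private final long ramBufferBytes;
    private long flushingBytes; // bytes currently held by in-progress flushes

    FlushBackPressure(long ramBufferBytes) { this.ramBufferBytes = ramBufferBytes; }

    synchronized void startFlush(long bytes)  { flushingBytes += bytes; notifyAll(); }
    synchronized void finishFlush(long bytes) { flushingBytes -= bytes; notifyAll(); }

    synchronized boolean stalled() { return flushingBytes > 2 * ramBufferBytes; }

    // Called by indexing threads; blocks (is "hijacked") while stalled.
    // Returns the stall time, which is the kind of number worth logging.
    synchronized long waitIfStalled() throws InterruptedException {
        long start = System.nanoTime();
        while (stalled()) wait(100);
        return System.nanoTime() - start;
    }

    public static void main(String[] args) throws InterruptedException {
        FlushBackPressure bp = new FlushBackPressure(16);
        bp.startFlush(40);                // 40 > 2 * 16, so indexing stalls
        System.out.println(bp.stalled()); // prints true
        bp.finishFlush(20);               // 20 <= 32, indexing may resume
        System.out.println(bp.stalled()); // prints false
    }
}
```

The patch's point is precisely the last step of `waitIfStalled`: the elapsed stall time should be reported to infoStream instead of vanishing silently.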



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4524) Merge DocsEnum and DocsAndPositionsEnum into PostingsEnum

2015-01-30 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14298699#comment-14298699
 ] 

David Smiley commented on LUCENE-4524:
--

FWIW The main thing I'm looking for is a way to access the current position of 
a scorer, and then the ability to advance the scorer tree to the next position. 
 With this, an accurate highlighter (what Robert calls a query debugger) can 
be built.  You've made references to having a highlighter using this code... is 
this true?  Can you share more about what its features are, or at least point 
me at it?

 Merge DocsEnum and DocsAndPositionsEnum into PostingsEnum
 -

 Key: LUCENE-4524
 URL: https://issues.apache.org/jira/browse/LUCENE-4524
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/codecs, core/index, core/search
Affects Versions: 4.0
Reporter: Simon Willnauer
 Fix For: 4.9, Trunk

 Attachments: LUCENE-4524.patch, LUCENE-4524.patch, LUCENE-4524.patch, 
 LUCENE-4524.patch, LUCENE-4524.patch, LUCENE-4524.patch


 spinnoff from http://www.gossamer-threads.com/lists/lucene/java-dev/172261
 {noformat}
 hey folks, 
 I have spend a hell lot of time on the positions branch to make 
 positions and offsets working on all queries if needed. The one thing 
 that bugged me the most is the distinction between DocsEnum and 
 DocsAndPositionsEnum. Really when you look at it closer DocsEnum is a 
 DocsAndFreqsEnum and if we omit Freqs we should return a DocIdSetIter. 
 Same is true for 
 DocsAndPostionsAndPayloadsAndOffsets*YourFancyFeatureHere*Enum. I 
 don't really see the benefits from this. We should rather make the 
 interface simple and call it something like PostingsEnum where you 
 have to specify flags on the TermsIterator and if we can't provide the 
 sufficient enum we throw an exception? 
 I just want to bring up the idea here since it might simplify a lot 
 for users as well for us when improving our positions / offset etc. 
 support. 
 thoughts? Ideas? 
 simon 
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6209) IndexWriter should confess when it stalls flushes

2015-01-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14298701#comment-14298701
 ] 

ASF subversion and git services commented on LUCENE-6209:
-

Commit 1656031 from [~mikemccand] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1656031 ]

LUCENE-6209: IndexWriter now logs (to infoStream) how much time flushing 
threads were stalled because of > 2X IW's RAM buffer in flush backlog

 IndexWriter should confess when it stalls flushes
 -

 Key: LUCENE-6209
 URL: https://issues.apache.org/jira/browse/LUCENE-6209
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6209.patch


 You tell IW how much RAM it's allowed to use to hold recently indexed 
 documents before they must be written to disk.
 IW is willing to use up to 2X that amount for in-progress flushes.
 If the in-progress flushes go over that limit, then IW will stall them, 
 hijacking indexing threads and having them wait until the in-progress flushes 
 are below 2X indexing buffer size again.
 This is important back-pressure e.g. if you are indexing on a machine with 
 many cores but slowish IO.
 Often when I profile an indexing heavy use case, even on fast IO (SSD) boxes, 
 I see the methods associated with this back-pressure taking unexpected time 
 ... yet IW never logs when it stalls/unstalls flushing.  I think it should.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4586) Eliminate the maxBooleanClauses limit

2015-01-30 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14298703#comment-14298703
 ] 

Shawn Heisey commented on SOLR-4586:


bq. Instead of eliminating the limit on the Solr side, which Rob insists on 
wielding his veto power to block, I propose here a low value

I like this idea, despite the pain I *know* it will cause.  Most people who 
have existing configs based on the example will already have a config that sets 
the value to 1024, so the pain will be mostly felt by new users ... but it will 
be felt early enough that they will probably have it fixed before they deploy 
to production.

bq. My technical veto still stands as a member of the PMC.  It does not matter 
who i work for.

I'm aware of the rights that Apache gives you, but just because you have the 
power doesn't mean you must use it.



If eliminating the limit isn't going to happen, then I have the following 
proposal, in addition to lowering the default to 64.  I think I built this idea 
into some of the patch work I did for this issue, but I can no longer remember 
for sure:  I propose that we include code so that the *highest* 
maxBooleanClauses value seen during core loading becomes the global value, 
preventing a lower value seen later during the load process from overriding it.
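For reference, the setting under discussion lives in the query section of solrconfig.xml. A config sketch with the lower default proposed here (64 is the proposal in this thread, not a shipped default):

```xml
<!-- solrconfig.xml: proposed lower default. Under the proposal above, the
     highest value seen across all cores during loading would win globally. -->
<query>
  <maxBooleanClauses>64</maxBooleanClauses>
</query>
```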


 Eliminate the maxBooleanClauses limit
 -

 Key: SOLR-4586
 URL: https://issues.apache.org/jira/browse/SOLR-4586
 Project: Solr
  Issue Type: Improvement
Affects Versions: 4.2
 Environment: 4.3-SNAPSHOT 1456767M - ncindex - 2013-03-15 13:11:50
Reporter: Shawn Heisey
 Attachments: SOLR-4586.patch, SOLR-4586.patch, SOLR-4586.patch, 
 SOLR-4586.patch, SOLR-4586.patch, SOLR-4586.patch, 
 SOLR-4586_verify_maxClauses.patch


 In the #solr IRC channel, I mentioned the maxBooleanClauses limitation to 
 someone asking a question about queries.  Mark Miller told me that 
 maxBooleanClauses no longer applies, that the limitation was removed from 
 Lucene sometime in the 3.x series.  The config still shows up in the example 
 even in the just-released 4.2.
 Checking through the source code, I found that the config option is parsed 
 and the value stored in objects, but does not actually seem to be used by 
 anything.  I removed every trace of it that I could find, and all tests still 
 pass.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6209) IndexWriter should confess when it stalls flushes

2015-01-30 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-6209.

Resolution: Fixed

 IndexWriter should confess when it stalls flushes
 -

 Key: LUCENE-6209
 URL: https://issues.apache.org/jira/browse/LUCENE-6209
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6209.patch


 You tell IW how much RAM it's allowed to use to hold recently indexed 
 documents before they must be written to disk.
 IW is willing to use up to 2X that amount for in-progress flushes.
 If the in-progress flushes go over that limit, then IW will stall them, 
 hijacking indexing threads and having them wait until the in-progress flushes 
 are below 2X indexing buffer size again.
 This is important back-pressure e.g. if you are indexing on a machine with 
 many cores but slowish IO.
 Often when I profile an indexing heavy use case, even on fast IO (SSD) boxes, 
 I see the methods associated with this back-pressure taking unexpected time 
 ... yet IW never logs when it stalls/unstalls flushing.  I think it should.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4586) Eliminate the maxBooleanClauses limit

2015-01-30 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14298708#comment-14298708
 ] 

David Smiley commented on SOLR-4586:


bq. If eliminating the limit isn't going to happen, then I have the following 
proposal, in addition to lowering the default to 64. I think I built this idea 
into some of the patch work I did for this issue, but I can no longer remember 
for sure: I propose that we include code so that the highest maxBooleanClauses 
value seen during core loading become the global value, preventing a lower 
value seen later during the load process from overriding it.

+1 !

And the low value is, I think, a reasonable one.  If you hit this limit, you 
should likely be using \{!terms}, and the docs near the config value should say 
so. If a user query hits this... well, 64 is plenty for what most apps might 
reasonably expect of a user (but not all apps, I realize).

It would be neat to modify the query parser to automatically introduce Terms 
filter in place of a BooleanQuery that is getting too big, so long as the 
clauses are all OR clauses.  That would be a separate issue though.
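For comparison, the \{!terms} alternative mentioned above looks like this (the field name and values are illustrative):

```
Instead of a large OR BooleanQuery, which counts one clause per value:
  q=id:(1 OR 2 OR 3 OR ... OR 5000)
use the terms query parser, which is not subject to maxBooleanClauses:
  q={!terms f=id}1,2,3,...,5000
```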

 Eliminate the maxBooleanClauses limit
 -

 Key: SOLR-4586
 URL: https://issues.apache.org/jira/browse/SOLR-4586
 Project: Solr
  Issue Type: Improvement
Affects Versions: 4.2
 Environment: 4.3-SNAPSHOT 1456767M - ncindex - 2013-03-15 13:11:50
Reporter: Shawn Heisey
 Attachments: SOLR-4586.patch, SOLR-4586.patch, SOLR-4586.patch, 
 SOLR-4586.patch, SOLR-4586.patch, SOLR-4586.patch, 
 SOLR-4586_verify_maxClauses.patch


 In the #solr IRC channel, I mentioned the maxBooleanClauses limitation to 
 someone asking a question about queries.  Mark Miller told me that 
 maxBooleanClauses no longer applies, that the limitation was removed from 
 Lucene sometime in the 3.x series.  The config still shows up in the example 
 even in the just-released 4.2.
 Checking through the source code, I found that the config option is parsed 
 and the value stored in objects, but does not actually seem to be used by 
 anything.  I removed every trace of it that I could find, and all tests still 
 pass.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4586) Eliminate the maxBooleanClauses limit

2015-01-30 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14298731#comment-14298731
 ] 

Mark Miller commented on SOLR-4586:
---

Technical vetoes don't just stand like a guardian in the road forever. Once 
someone has withdrawn from helping address the issue, or from working with the 
group of people still working down the issue, if they insist on holding a veto, 
it starts to look capricious. We can deal with it at the PMC or board level if 
we have to, but that's not how veto power works. 

 Eliminate the maxBooleanClauses limit
 -

 Key: SOLR-4586
 URL: https://issues.apache.org/jira/browse/SOLR-4586
 Project: Solr
  Issue Type: Improvement
Affects Versions: 4.2
 Environment: 4.3-SNAPSHOT 1456767M - ncindex - 2013-03-15 13:11:50
Reporter: Shawn Heisey
 Attachments: SOLR-4586.patch, SOLR-4586.patch, SOLR-4586.patch, 
 SOLR-4586.patch, SOLR-4586.patch, SOLR-4586.patch, 
 SOLR-4586_verify_maxClauses.patch


 In the #solr IRC channel, I mentioned the maxBooleanClauses limitation to 
 someone asking a question about queries.  Mark Miller told me that 
 maxBooleanClauses no longer applies, that the limitation was removed from 
 Lucene sometime in the 3.x series.  The config still shows up in the example 
 even in the just-released 4.2.
 Checking through the source code, I found that the config option is parsed 
 and the value stored in objects, but does not actually seem to be used by 
 anything.  I removed every trace of it that I could find, and all tests still 
 pass.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5743) Faceting with BlockJoin support

2015-01-30 Thread Dr Oleg Savrasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dr Oleg Savrasov updated SOLR-5743:
---
Attachment: SOLR-5743.patch

 Faceting with BlockJoin support
 ---

 Key: SOLR-5743
 URL: https://issues.apache.org/jira/browse/SOLR-5743
 Project: Solr
  Issue Type: New Feature
Reporter: abipc
  Labels: features
 Attachments: SOLR-5743.patch, SOLR-5743.patch, SOLR-5743.patch


 For a sample inventory (note: nested documents) like this:
  <doc>
    <field name="id">10</field>
    <field name="type_s">parent</field>
    <field name="BRAND_s">Nike</field>
    <doc>
      <field name="id">11</field>
      <field name="COLOR_s">Red</field>
      <field name="SIZE_s">XL</field>
    </doc>
    <doc>
      <field name="id">12</field>
      <field name="COLOR_s">Blue</field>
      <field name="SIZE_s">XL</field>
    </doc>
  </doc>
 Faceting results must contain - 
 Red(1)
 XL(1) 
 Blue(1) 
 for a q=* query. 
 PS : The inventory example has been taken from this blog - 
 http://blog.griddynamics.com/2013/09/solr-block-join-support.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5743) Faceting with BlockJoin support

2015-01-30 Thread Dr Oleg Savrasov (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14298423#comment-14298423
 ] 

Dr Oleg Savrasov commented on SOLR-5743:


In order to use the proposed component, you need to configure it in 
solrconfig.xml and introduce a search handler that uses it, for example:

  <searchComponent name="blockJoinFacet"
      class="org.apache.solr.handler.component.BlockJoinFacetComponent">
  </searchComponent>

  <requestHandler name="/blockJoinFacetRH"
      class="org.apache.solr.handler.component.SearchHandler">
    <arr name="last-components">
      <str>blockJoinFacet</str>
    </arr>
  </requestHandler>

Please note that only string docValues fields can currently be used for 
faceting (the int type may be covered later), so you need to update the 
appropriate field configuration in schema.xml, for example:

 <field name="COLOR_s" type="string" indexed="true" stored="true" 
docValues="true"/>
 <field name="SIZE_s" type="string" indexed="true" stored="true" 
docValues="true"/>


Then after indexing some set of hierarchical documents like

 <doc>
   <field name="id">10</field>
   <field name="type_s">parent</field>
   <field name="BRAND_s">Nike</field>
   <doc>
     <field name="id">11</field>
     <field name="type_s">child</field>
     <field name="COLOR_s">Red</field>
     <field name="SIZE_s">XL</field>
   </doc>
   <doc>
     <field name="id">12</field>
     <field name="type_s">child</field>
     <field name="COLOR_s">Blue</field>
     <field name="SIZE_s">XL</field>
   </doc>
 </doc>

you need to pass the required ToParentBlockJoinQuery to the configured request 
handler, for example

 
http://localhost:8983/solr/collection1/blockJoinFacetRH?q={!parent+which%3D%22type_s%3Aparent%22}type_s%3Achild&wt=json&indent=true&facet=true&child.facet.field=COLOR_s&child.facet.field=SIZE_s

and it yields the desired result:

 {
   "responseHeader":{
     "status":0,
     "QTime":1},
   "response":{"numFound":1,"start":0,"docs":[
       {
         "id":"10",
         "type_s":"parent",
         "BRAND_s":"Nike",
         "_version_":1491642108914696192}]
   },
   "facet_counts":{
     "facet_queries":{},
     "facet_fields":{},
     "facet_dates":{},
     "facet_ranges":{},
     "facet_intervals":{},
     "facet_fields":[
       "COLOR_s",[
         "Blue",1,
         "Red",1],
       "SIZE_s",[
         "XL",1]]}}

Please take the latest patch; it contains a fix for a just-found caching 
issue.

 Faceting with BlockJoin support
 ---

 Key: SOLR-5743
 URL: https://issues.apache.org/jira/browse/SOLR-5743
 Project: Solr
  Issue Type: New Feature
Reporter: abipc
  Labels: features
 Attachments: SOLR-5743.patch, SOLR-5743.patch, SOLR-5743.patch


 For a sample inventory (note: nested documents) like this:
  <doc>
    <field name="id">10</field>
    <field name="type_s">parent</field>
    <field name="BRAND_s">Nike</field>
    <doc>
      <field name="id">11</field>
      <field name="COLOR_s">Red</field>
      <field name="SIZE_s">XL</field>
    </doc>
    <doc>
      <field name="id">12</field>
      <field name="COLOR_s">Blue</field>
      <field name="SIZE_s">XL</field>
    </doc>
  </doc>
 Faceting results must contain - 
 Red(1)
 XL(1) 
 Blue(1) 
 for a q=* query. 
 PS : The inventory example has been taken from this blog - 
 http://blog.griddynamics.com/2013/09/solr-block-join-support.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: Our Optimize Suggestions on lucene 3.5

2015-01-30 Thread Uwe Schindler
Sorry Robert – you’re right,

 

I had the impression that we changed that already. In fact, the WeakHashMap is 
needed, because multiple readers (especially Slow ones) can share the same 
uninverted fields. In the ideal world, we should change the whole stuff and 
remove FieldCacheImpl completely and let the field maps stay directly on the 
UninvertingReader as regular member fields. The only problem with this is: if 
you have multiple UninvertingReaders, all of them have separate uninverted 
instances. But it is already a bug if you do this.

 

Uwe

 

-

Uwe Schindler

H.-H.-Meier-Allee 63, D-28213 Bremen

http://www.thetaphi.de/

eMail: u...@thetaphi.de

 

From: Robert Muir [mailto:rcm...@gmail.com] 
Sent: Friday, January 30, 2015 5:04 AM
To: dev@lucene.apache.org
Subject: Re: Our Optimize Suggestions on lucene 3.5

 

I am not sure this is the case. Actually, FieldCacheImpl still works as before 
and has a weak hashmap still.

However, i think the weak map is unnecessary. reader close listeners already 
ensure purging from the map, so I don't think the weak map serves any purpose 
today. The only possible advantage it has is to allow you to GC fieldcaches 
when you are already leaking readers... it could just be a regular map IMO.
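The close-listener purging described here can be sketched with a plain map. This is an illustrative stdlib-only simulation of the idea (a regular, non-weak cache purged when a reader closes), not FieldCacheImpl's actual code:

```java
import java.util.*;

// Illustrative: a regular (non-weak) cache purged by reader close listeners,
// mirroring why the WeakHashMap in FieldCacheImpl may be unnecessary.
public class PurgedCache {
    interface CloseListener { void onClose(Object coreKey); }

    private final Map<Object, String> cache = new HashMap<>();
    private final List<CloseListener> listeners = new ArrayList<>();

    PurgedCache() {
        // Registered once: removes the cache entry when the reader closes.
        listeners.add(cache::remove);
    }

    void put(Object readerCoreKey, String uninvertedField) {
        cache.put(readerCoreKey, uninvertedField);
    }

    // Simulates IndexReader.close() notifying its registered close listeners.
    void closeReader(Object readerCoreKey) {
        for (CloseListener l : listeners) l.onClose(readerCoreKey);
    }

    int size() { return cache.size(); }

    public static void main(String[] args) {
        PurgedCache c = new PurgedCache();
        Object coreKey = new Object();
        c.put(coreKey, "uninverted:age");
        System.out.println(c.size()); // prints 1
        c.closeReader(coreKey);       // listener purges; no weak refs needed
        System.out.println(c.size()); // prints 0
    }
}
```

As noted above, the weak references would only ever matter if readers are leaked without being closed, which is itself a bug.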

 

On Thu, Jan 29, 2015 at 9:35 AM, Uwe Schindler u...@thetaphi.de wrote:

Hi,

parts of your suggestions are already done in Lucene 4+. For one part I can 
tell you:


WeakHashMap, HashMap, synchronized problems


1. FieldCacheImpl uses WeakHashMap to manage the field value cache; it has a 
memory-leak bug.

2. SolrInputDocument uses a lot of HashMap/LinkedHashMap instances per field, 
which wastes a lot of memory.

3. AttributeSource uses WeakHashMap to cache class implementations, and uses a 
global synchronized block that reduces performance.

4. AttributeSource is a base class; NumericField extends AttributeSource, but 
they create a lot of HashMaps that NumericField never uses.

5. For all of these, JVM GC carries a heavy burden for the never-used HashMaps.

All Lucene items no longer apply:

1.   FieldCache is gone and is no longer supported in Lucene 5. You should 
use the new DocValues index format for that (column based storage, optimized 
for sorting, numeric). You can still use Lucene’s UninvertingReader, but this 
one has no weak maps anymore because it is not a cache.

2.   No idea about that one - it's unrelated to Lucene

3.   AttributeSource no longer uses this, since Lucene 4.8 it uses Java 7’s 
java.lang.ClassValue to attach the implementation class to the interface. No 
concurrency problems anymore. It also uses MethodHandles to invoke the 
attribute classes.

4.   NumericField no longer exists, the base class does not use 
AttributeSource. All field instances now automatically reuse the inner 
TokenStream instances across fields, too!

5.   See above

In addition, Lucene has much better memory use, because terms are no longer 
UTF-16 strings and are in large shared byte arrays. So a lot of those other 
“optimizations” are handled in a different way in Lucene 4 and Lucene 5 (coming 
out the next few days).

Uwe

-

Uwe Schindler

H.-H.-Meier-Allee 63, D-28213 Bremen

http://www.thetaphi.de/

eMail: u...@thetaphi.de

 

From: yannianmu(母延年) [mailto:yannia...@tencent.com] 
Sent: Thursday, January 29, 2015 12:59 PM
To: general; dev; commits
Subject: Our Optimize Suggestions on lucene 3.5

 

 

 

Dear Lucene dev

We are from the Hermes team. Hermes is a project based on Lucene 3.5 and 
Solr 3.5.

Hermes processes 100 billion documents per day, 2000 billion documents in total 
(two months). Nowadays our single-cluster index size is over 200 TB; the total 
size is 600 TB. We use Lucene for the big data warehouse to speed up and reduce 
the analysis response time, for example filters like age=32 and keywords like 
'lucene', or operations like count, sum, order by, group by and so on.

Hermes can filter data from 1000 billion rows in 1 second. An order-by over 10 
billion rows takes 10 s, a group-by over 10 billion rows takes 15 s, and 
sum/avg/max/min stats over 10 billion rows take 30 s.

For those purposes, we made lots of improvements on top of Lucene and Solr. 
Nowadays Lucene has changed so much since version 4.10 that we don't want to 
commit our code to Lucene; we only want to introduce our improvements based on 
Lucene 3.5, and describe how Hermes can process 100 billion documents per day 
on 32 physical machines. We think it may be helpful for some people who have a 
similar use case.

 

 


 


First level index (tii): loading on demand


Original:

1. The .tii file is loaded into RAM by TermInfosReaderIndex.

2. That can be quite slow the first time an index is opened.

3. The index needs to be opened persistently; once opened, it is never closed.

4. This limits the number of indexes; when we have thousands of indexes, that 
becomes impossible.

Our improvement:

1. Loading on demand: not all fields need to be loaded into memory.

[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_31) - Build # 4449 - Still Failing!

2015-01-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4449/
Java: 32bit/jdk1.8.0_31 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ReplicationFactorTest.test

Error Message:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:58887/repfacttest_c8n_1x3_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:58887/repfacttest_c8n_1x3_shard1_replica1
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:575)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:787)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:730)
at 
org.apache.solr.cloud.ReplicationFactorTest.testRf3(ReplicationFactorTest.java:260)
at 
org.apache.solr.cloud.ReplicationFactorTest.test(ReplicationFactorTest.java:110)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:940)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:915)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-4446) exception swallowed, NPE created upon trouble getting JNDI connection

2015-01-30 Thread Michele Di Noia (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14298428#comment-14298428
 ] 

Michele Di Noia commented on SOLR-4446:
---

Please, I don't understand why no one is moving on this important issue.
How are you using DIH in production? Without prepared statements? If so, why 
don't your DBAs complain?

Maybe there is a different basic solution that I cannot see. If anyone is 
aware of one, please advise me. 

Regards to all.
Michele

 exception swallowed, NPE created upon trouble getting JNDI connection
 -

 Key: SOLR-4446
 URL: https://issues.apache.org/jira/browse/SOLR-4446
 Project: Solr
  Issue Type: Bug
  Components: contrib - DataImportHandler
Affects Versions: 4.1
Reporter: Ken Geis

 I am trying to use a JNDI connection, but an error occurs getting the 
 connection. The error is swallowed and obscured by a NullPointerException. 
 See comments inline below.
 {code:title=JdbcDataSource.java}
   protected Callable<Connection> createConnectionFactory(final Context context,
       final Properties initProps) {
     ...
     final String jndiName = initProps.getProperty(JNDI_NAME);
     final String url = initProps.getProperty(URL); /* is null */
     final String driver = initProps.getProperty(DRIVER); /* is null */
     ...
     return factory = new Callable<Connection>() {
       @Override
       public Connection call() throws Exception {
         ...
         try {
           if (url != null) {
             c = DriverManager.getConnection(url, initProps);
           } else if (jndiName != null) {
             ...
             /* error occurs */
             ...
           }
         } catch (SQLException e) {
           /* the exception handler assumes the try block took the url != null
              path; in the JNDI case, driver is null, and
              DocBuilder.loadClass(..) throws an NPE */
           Driver d = (Driver) DocBuilder.loadClass(driver,
               context.getSolrCore()).newInstance();
           c = d.connect(url, initProps);
         }
         ...
       }
     };
   }
 {code}
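A sketch of one possible fix (illustrative only, not an actual patch): when the JNDI branch fails and there is no fallback driver to load, rethrow with the original SQLException preserved as the cause, instead of letting the fallback path NPE and hide the root error. The names below are simplified stand-ins for the JdbcDataSource code:

```java
import java.sql.SQLException;

public class ExceptionChaining {
    // Hypothetical sketch: if 'driver' is null (the JNDI case), there is no
    // fallback path, so surface the real error instead of swallowing it.
    static java.sql.Connection connect(String url, String driver) throws Exception {
        try {
            // stand-in for the failing JNDI lookup
            throw new SQLException("could not obtain JNDI connection");
        } catch (SQLException e) {
            if (driver == null) {
                // No driver to fall back to: rethrow with the cause preserved.
                throw new RuntimeException("Failed to get connection", e);
            }
            // url path: fall back to loading the driver explicitly (elided)
            return null;
        }
    }

    public static void main(String[] args) {
        try {
            connect(null, null);
        } catch (Exception e) {
            // the original SQLException survives as the cause
            System.out.println(e.getCause().getMessage());
        }
    }
}
```

Run as-is, this prints the original JNDI error message rather than a NullPointerException from the driver-loading path.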



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Using RAT

2015-01-30 Thread Robert Muir
Yeah, they advertise RAT as heuristic, and we are abusing it to
detect missing copyright headers for the most part (typically, someone
forgets the Apache license header). So from time to time we will have to
update the heuristics for licenses that it cannot recognize.

On Fri, Jan 30, 2015 at 3:25 AM, Upayavira u...@odoko.co.uk wrote:
 Thanks Robert,

 I found it confusing because the config for Solr was inside Lucene, not
 where I would have expected it, but glad to know I was in the right
 place.

 What you said was exactly what I needed. The Rat output shows some JS
 libraries as not passing (e.g. angularJS) so I just need to add their
 copyright strings to that file, as you say in the right section, and
 then rerun 'ant rat-sources'.

 Thanks!

 Upayavira

 On Fri, Jan 30, 2015, at 02:37 AM, Robert Muir wrote:
 You are correct, the main logic is in lucene/common-build; I don't see
 why you think that is confusing. Do you think the license verification
 logic should be duplicated in the build system? Anyway, there are
 some hooks like $rat.excludes for a module to override with.

 Can you explain a little more what the problem is? (maybe just include
 the output from 'ant rat-sources' that fails)

 If the problem is that it's a test file or something that should not
 really have a license (e.g. something that an analyzer croaked on
 before), then that rat.excludes is the way to go; just exclude the
 file completely.

 Otherwise, if the license is not recognized, add a substring that would
 identify it to that logic, under the correct category.

 On Thu, Jan 29, 2015 at 8:58 PM, Upayavira u...@odoko.co.uk wrote:
  As part of SOLR-5507, I'm trying to get RAT to pass. I've dug around
  and got very confused. It seems that the config for Solr RAT is in
  lucene/common-build.xml, which doesn't make much sense, and there are
  licenses I can't find in there.
 
  Can someone explain to me how I tell RAT that a specific license is
  acceptable?
 
  I suspect I'm missing something obvious.
 
  Many thanks!
 
  Upayavira
 
 








Re: Using RAT

2015-01-30 Thread Upayavira
Thanks Robert,

I found it confusing because the config for Solr was inside Lucene, not
where I would have expected it, but glad to know I was in the right
place.

What you said was exactly what I needed. The Rat output shows some JS
libraries as not passing (e.g. angularJS) so I just need to add their
copyright strings to that file, as you say in the right section, and
then rerun 'ant rat-sources'.

Thanks!

Upayavira

On Fri, Jan 30, 2015, at 02:37 AM, Robert Muir wrote:
 You are correct, the main logic is in lucene/common-build; I don't see
 why you think that is confusing. Do you think the license verification
 logic should be duplicated in the build system? Anyway, there are
 some hooks like $rat.excludes for a module to override with.
 
 Can you explain a little more what the problem is? (maybe just include
 the output from 'ant rat-sources' that fails)
 
 If the problem is that it's a test file or something that should not
 really have a license (e.g. something that an analyzer croaked on
 before), then that rat.excludes is the way to go; just exclude the
 file completely.
 
 Otherwise, if the license is not recognized, add a substring that would
 identify it to that logic, under the correct category.
 
 On Thu, Jan 29, 2015 at 8:58 PM, Upayavira u...@odoko.co.uk wrote:
  As part of SOLR-5507, I'm trying to get RAT to pass. I've dug around
  and got very confused. It seems that the config for Solr RAT is in
  lucene/common-build.xml, which doesn't make much sense, and there are
  licenses I can't find in there.
 
  Can someone explain to me how I tell RAT that a specific license is
  acceptable?
 
  I suspect I'm missing something obvious.
 
  Many thanks!
 
  Upayavira
 
 
 
 




Re: Our Optimize Suggestions on lucene 3.5

2015-01-30 Thread Robert Muir
I think this is all fine. Because things are keyed on core-reader and
there are already core listeners installed to purge when the ref count
for a core drops to zero.

honestly if you change the map to a regular one, all tests pass.

On Fri, Jan 30, 2015 at 5:37 AM, Uwe Schindler u...@thetaphi.de wrote:
 Sorry Robert – you’re right,



 I had the impression that we changed that already. In fact, the WeakHashMap
 is needed, because multiple readers (especially Slow ones) can share the
 same uninverted fields. In the ideal world, we should change the whole stuff
 and remove FieldCacheImpl completely and let the field maps stay directly on
 the UninvertingReader as regular member fields. The only problem with this
 is: if you have multiple UninvertingReaders, all of them have separate
 uninverted instances. But this is already a bug if you do this.



 Uwe



 -

 Uwe Schindler

 H.-H.-Meier-Allee 63, D-28213 Bremen

 http://www.thetaphi.de

 eMail: u...@thetaphi.de



 From: Robert Muir [mailto:rcm...@gmail.com]
 Sent: Friday, January 30, 2015 5:04 AM
 To: dev@lucene.apache.org
 Subject: Re: Our Optimize Suggestions on lucene 3.5



 I am not sure this is the case. Actually, FieldCacheImpl still works as
 before and has a weak hashmap still.

 However, i think the weak map is unnecessary. reader close listeners already
 ensure purging from the map, so I don't think the weak map serves any
 purpose today. The only possible advantage it has is to allow you to GC
 fieldcaches when you are already leaking readers... it could just be a
 regular map IMO.
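A toy illustration of the pattern Robert describes, with a hypothetical Reader class standing in for Lucene's reader (the real hook is the reader-closed listener): a plain HashMap suffices because the close listener purges the entry explicitly, so weak references are not needed for correctness:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ListenerPurgedCache {
    // Stand-in for a reader with close listeners; Lucene offers a similar
    // reader-closed listener hook on its readers.
    public static class Reader {
        private final List<Runnable> closeListeners = new ArrayList<>();
        public void addCloseListener(Runnable r) { closeListeners.add(r); }
        public void close() { closeListeners.forEach(Runnable::run); }
    }

    // A plain (non-weak) map works because entries are removed explicitly
    // when the owning reader closes -- not reclaimed by the GC.
    private final Map<Reader, Object> cache = new HashMap<>();

    public Object getOrLoad(Reader reader) {
        return cache.computeIfAbsent(reader, r -> {
            // First use of this reader: register the purge hook once.
            r.addCloseListener(() -> cache.remove(r));
            return new Object(); // stand-in for the "uninverted field" payload
        });
    }

    public int size() { return cache.size(); }
}
```

While the reader is open, repeated lookups return the same cached payload; closing the reader drops the entry immediately.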



 On Thu, Jan 29, 2015 at 9:35 AM, Uwe Schindler u...@thetaphi.de wrote:

 Hi,

 parts of your suggestions are already done in Lucene 4+. For one part I can
 tell you:

 weakhashmap, hashmap, synchronized problems

 1. FieldCacheImpl uses WeakHashMap to manage the field value cache; it has a
 memory leak bug.

 2. SolrInputDocument uses a lot of HashMap/LinkedHashMap instances per field,
 which wastes a lot of memory.

 3. AttributeSource uses WeakHashMap to cache class impls, and uses a global
 synchronized block, which reduces performance.

 4. AttributeSource is a base class; NumericField extends
 AttributeSource, but they create a lot of HashMaps that NumericField never
 uses.

 5. For all of this, JVM GC carries a heavy burden for the never-used HashMaps.

 All of these Lucene items no longer apply:

 1.   FieldCache is gone and is no longer supported in Lucene 5. You
 should use the new DocValues index format for that (column based storage,
 optimized for sorting, numeric). You can still use Lucene’s
 UninvertingReader, but this one has no weak maps anymore because it is not a
 cache.

 2.   No idea about that one - it's unrelated to Lucene

 3.   AttributeSource no longer uses this, since Lucene 4.8 it uses Java
 7’s java.lang.ClassValue to attach the implementation class to the
 interface. No concurrency problems anymore. It also uses MethodHandles to
 invoke the attribute classes.

 4.   NumericField no longer exists, the base class does not use
 AttributeSource. All field instances now automatically reuse the inner
 TokenStream instances across fields, too!

 5.   See above

 In addition, Lucene has much better memory use, because terms are no longer
 UTF-16 strings and are in large shared byte arrays. So a lot of those other
 “optimizations” are handled in a different way in Lucene 4 and Lucene 5
 (coming out the next few days).

 Uwe

 -

 Uwe Schindler

 H.-H.-Meier-Allee 63, D-28213 Bremen

 http://www.thetaphi.de

 eMail: u...@thetaphi.de



 From: yannianmu(母延年) [mailto:yannia...@tencent.com]
 Sent: Thursday, January 29, 2015 12:59 PM
 To: general; dev; commits
 Subject: Our Optimize Suggestions on lucene 3.5







 Dear Lucene dev,

 We are from the Hermes team. Hermes is a project based on Lucene 3.5
 and Solr 3.5.

 Hermes processes 100 billion documents per day, 2000 billion documents in
 total (two months). Nowadays our single-cluster index size is over
 200 TB; the total size is 600 TB. We use Lucene to speed up a big data
 warehouse: reducing analysis response time, for example filters like age=32
 and keywords like 'lucene', or operations like count, sum, order by, group by,
 and so on.



 Hermes can filter data from 1000 billion rows in 1 second. An order by over
 10 billion rows takes 10 s, a group by over 10 billion rows takes 15 s, and
 sum/avg/max/min stats over 10 billion rows take 30 s.

 For those purposes, we made lots of improvements based on Lucene and Solr.
 Nowadays Lucene has changed so much since version 4.10 that we don't want to
 commit our code to Lucene; we only want to introduce our improvements based on
 Lucene 3.5, and describe how Hermes can process 100 billion documents per day
 on 32 physical machines. We think it may be helpful for people who have
 similar needs.







 First-level index (.tii), loading on demand

 Original:

 1. The .tii file is loaded into RAM by TermInfosReaderIndex

 2. That can be quite slow when first opening an index

 3. 

[jira] [Commented] (SOLR-4509) Disable HttpClient stale check for performance.

2015-01-30 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14298543#comment-14298543
 ] 

Alan Woodward commented on SOLR-4509:
-

I'm seeing lots of test failures on the mailing list due to 
NoHttpResponseExceptions, which this StackOverflow answer suggests are due to 
stale connections: 
http://stackoverflow.com/questions/10558791/apache-httpclient-interim-error-nohttpresponseexception.

Following the links to 
http://hc.apache.org/httpcomponents-client-ga/tutorial/html/connmgmt.html, the 
section on connection eviction policy says that you should have a separate 
thread that periodically closes idle or dead connections.  Is this something we 
should look into for HttpSolrClient?
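A minimal sketch of such an eviction thread, modeled on the IdleConnectionMonitorThread from the HttpClient tutorial linked above. The ConnectionPool interface below is a stand-in for HttpClient's connection manager (which exposes closeExpiredConnections() and closeIdleConnections()); only the scheduling and shutdown logic is shown:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class IdleConnectionEvictor {
    // Stand-in for HttpClient's connection manager; the real one exposes
    // closeExpiredConnections() and closeIdleConnections(long, TimeUnit).
    public interface ConnectionPool {
        void closeExpiredConnections();
        void closeIdleConnections(long idleTime, TimeUnit unit);
    }

    private final ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r, "connection-evictor");
            t.setDaemon(true); // housekeeping must not keep the JVM alive
            return t;
        });

    public IdleConnectionEvictor(ConnectionPool pool, long periodMillis,
                                 long maxIdleMillis) {
        // Periodically purge dead and long-idle connections from the pool.
        scheduler.scheduleAtFixedRate(() -> {
            pool.closeExpiredConnections();
            pool.closeIdleConnections(maxIdleMillis, TimeUnit.MILLISECONDS);
        }, periodMillis, periodMillis, TimeUnit.MILLISECONDS);
    }

    public void shutdown() {
        scheduler.shutdownNow();
    }
}
```

The client would start one evictor alongside its connection manager and call shutdown() when the client closes; the stale-per-request check then becomes unnecessary.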

 Disable HttpClient stale check for performance.
 ---

 Key: SOLR-4509
 URL: https://issues.apache.org/jira/browse/SOLR-4509
 Project: Solr
  Issue Type: Improvement
  Components: search
 Environment: 5 node SmartOS cluster (all nodes living in same global 
 zone - i.e. same physical machine)
Reporter: Ryan Zezeski
Assignee: Mark Miller
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: IsStaleTime.java, SOLR-4509-4_4_0.patch, 
 SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, 
 SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, SOLR-4509.patch, 
 baremetal-stale-nostale-med-latency.dat, 
 baremetal-stale-nostale-med-latency.svg, 
 baremetal-stale-nostale-throughput.dat, baremetal-stale-nostale-throughput.svg


 By disabling the Apache HTTP Client stale check I've witnessed a 2-4x 
 increase in throughput and reduction of over 100ms.  This patch was made in 
 the context of a project I'm leading, called Yokozuna, which relies on 
 distributed search.
 Here's the patch on Yokozuna: https://github.com/rzezeski/yokozuna/pull/26
 Here's a write-up I did on my findings: 
 http://www.zinascii.com/2013/solr-distributed-search-and-the-stale-check.html
 I'm happy to answer any questions or make changes to the patch to make it 
 acceptable.
 ReviewBoard: https://reviews.apache.org/r/28393/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[JENKINS] Lucene-Solr-5.x-Windows (64bit/jdk1.8.0_31) - Build # 4345 - Still Failing!

2015-01-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4345/
Java: 64bit/jdk1.8.0_31 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=5472, name=collection0, 
state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=5472, name=collection0, state=RUNNABLE, 
group=TGRP-CollectionsAPIDistributedZkTest]
Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:52999/n/q: Could not find collection : 
awholynewstresscollection_collection0_0
at __randomizedtesting.SeedInfo.seed([27507826D49ECDD4]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:558)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:214)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:210)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:370)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:325)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1009)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:787)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:730)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:886)


FAILED:  org.apache.solr.cloud.ReplicationFactorTest.test

Error Message:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: 
http://127.0.0.1:58497/e/repfacttest_c8n_1x3_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: 
http://127.0.0.1:58497/e/repfacttest_c8n_1x3_shard1_replica1
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:575)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:787)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:730)
at 
org.apache.solr.cloud.ReplicationFactorTest.testRf3(ReplicationFactorTest.java:260)
at 
org.apache.solr.cloud.ReplicationFactorTest.test(ReplicationFactorTest.java:110)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:940)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:915)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)

[jira] [Resolved] (SOLR-7060) NoSuchMethodError - org/apache/lucene/util/AttributeImpl

2015-01-30 Thread Rene Loitzenbauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rene Loitzenbauer resolved SOLR-7060.
-
Resolution: Invalid

Thanks for your reply.

I could narrow the problem down to the YourKit profiler. If the YourKit 
profiler is configured like -agentpath:/opt/yourkitagent/libyjpagent.so, then 
the error occurs. When I remove the YourKit configuration, the error does not 
occur.

 NoSuchMethodError - org/apache/lucene/util/AttributeImpl
 

 Key: SOLR-7060
 URL: https://issues.apache.org/jira/browse/SOLR-7060
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.2, 4.10.3
 Environment: CentOS 6
 JDK 1.7.0_72
 Tomcat 7.0.56
Reporter: Rene Loitzenbauer

 When sending a document update to this Solr instance, the following error is 
 thrown, for example when just updating a single value of an existing document.
 I cannot reproduce the same thing on another machine running CentOS 7 or 
 Windows Server 2008 or 2012.
 I can confirm that the exact same thing works on the same machine with Solr 
 version 4.7.0.
 I found the same exception here: https://jira.duraspace.org/browse/DS-2293 
 but it was closed as cannot-reproduce.
 Any suggestions on how to resolve or analyze this?
 {code}
  org.apache.solr.common.SolrException; null:java.lang.RuntimeException: 
 java.lang.NoSuchMethodError: 
 java.lang.invoke.MethodHandle.invokeExact()Lorg/apache/lucene/util/AttributeImpl;
   at 
 org.apache.solr.servlet.SolrDispatchFilter.sendError(SolrDispatchFilter.java:793)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:434)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
   at 
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
   at 
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
   at 
 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
   at 
 org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
   at 
 org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
   at 
 org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
   at 
 org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:950)
   at 
 org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
   at 
 org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
   at 
 org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1070)
   at 
 org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:611)
   at 
 org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1736)
   at 
 org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1695)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at 
 org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
   at java.lang.Thread.run(Thread.java:745)
 Caused by: java.lang.NoSuchMethodError: 
 java.lang.invoke.MethodHandle.invokeExact()Lorg/apache/lucene/util/AttributeImpl;
   at 
 org.apache.lucene.util.AttributeFactory$DefaultAttributeFactory.createAttributeInstance(AttributeFactory.java:68)
   at 
 org.apache.lucene.analysis.NumericTokenStream$NumericAttributeFactory.createAttributeInstance(NumericTokenStream.java:139)
   at 
 org.apache.lucene.util.AttributeSource.addAttribute(AttributeSource.java:222)
   at 
 org.apache.lucene.analysis.NumericTokenStream.init(NumericTokenStream.java:321)
   at 
 org.apache.lucene.analysis.NumericTokenStream.init(NumericTokenStream.java:232)
   at org.apache.lucene.document.Field.tokenStream(Field.java:512)
   at 
 org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:611)
   at 
 org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:359)
   at 
 org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:318)
   at 
 org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:239)
   at 
 org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:454)
   at 
 org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1511)
   at 
 org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:240)
   at 
 org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:164)
  

[jira] [Commented] (LUCENE-6196) Include geo3d package, along with Lucene integration to make it useful

2015-01-30 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14298542#comment-14298542
 ] 

Karl Wright commented on LUCENE-6196:
-

Found some problems with the Bounds computation.  I won't have time to look at 
these until the weekend though.


 Include geo3d package, along with Lucene integration to make it useful
 --

 Key: LUCENE-6196
 URL: https://issues.apache.org/jira/browse/LUCENE-6196
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/spatial
Reporter: Karl Wright
Assignee: David Smiley
 Attachments: ShapeImpl.java, geo3d-tests.zip, geo3d.zip


 I would like to explore contributing a geo3d package to Lucene.  This can be 
 used in conjunction with Lucene search, both for generating geohashes (via 
 spatial4j) for complex geographic shapes, as well as limiting results 
 resulting from those queries to those results within the exact shape in 
 highly performant ways.
 The package uses 3d planar geometry to do its magic, which basically limits 
 computation necessary to determine membership (once a shape has been 
 initialized, of course) to only multiplications and additions, which makes it 
 feasible to construct a performant BoostSource-based filter for geographic 
 shapes.  The math is somewhat more involved when generating geohashes, but is 
 still more than fast enough to do a good job.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Updated] (LUCENE-4524) Merge DocsEnum and DocsAndPositionsEnum into PostingsEnum

2015-01-30 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-4524:
--
Attachment: LUCENE-4524.patch

This is a better patch, the old one still had some of the Weight API changes 
from LUCENE-2878 in it.

Scorer extends PostingsEnum directly at the moment, which means that there are 
lots of Scorer implementations that have to implement empty position, offset 
and payload methods.  Might be worth having it extend DocsEnum instead.
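A hypothetical sketch of that layering (class and method names are illustrative, not the actual Lucene API): an intermediate DocsEnum stubs out the positional methods once, so frequency-only scorers no longer repeat the boilerplate:

```java
public class EnumLayering {
    // Illustrative stand-in for the unified postings interface.
    public static abstract class PostingsEnum {
        public abstract int nextDoc();
        public abstract int freq();
        public abstract int nextPosition();
        public abstract int startOffset();
        public abstract int endOffset();
    }

    // Intermediate layer: stubs the positional methods once, so
    // docs-and-freqs consumers (like most Scorers) need not implement
    // them individually.
    public static abstract class DocsEnum extends PostingsEnum {
        @Override public int nextPosition() { return -1; }
        @Override public int startOffset() { return -1; }
        @Override public int endOffset() { return -1; }
    }

    // A frequency-only scorer now only implements what it actually uses.
    public static class ConstScorer extends DocsEnum {
        private int doc = -1;
        @Override public int nextDoc() { return ++doc; }
        @Override public int freq() { return 1; }
    }
}
```

The trade-off is one extra level in the hierarchy in exchange for removing empty method bodies from every positional-agnostic Scorer.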

 Merge DocsEnum and DocsAndPositionsEnum into PostingsEnum
 -

 Key: LUCENE-4524
 URL: https://issues.apache.org/jira/browse/LUCENE-4524
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/codecs, core/index, core/search
Affects Versions: 4.0
Reporter: Simon Willnauer
 Fix For: 4.9, Trunk

 Attachments: LUCENE-4524.patch, LUCENE-4524.patch, LUCENE-4524.patch, 
 LUCENE-4524.patch, LUCENE-4524.patch, LUCENE-4524.patch


 spinoff from http://www.gossamer-threads.com/lists/lucene/java-dev/172261
 {noformat}
 hey folks, 
 I have spent a hell of a lot of time on the positions branch to make 
 positions and offsets work on all queries if needed. The one thing 
 that bugged me the most is the distinction between DocsEnum and 
 DocsAndPositionsEnum. Really, when you look at it closer, DocsEnum is a 
 DocsAndFreqsEnum, and if we omit freqs we should return a DocIdSetIter. 
 Same is true for 
 DocsAndPositionsAndPayloadsAndOffsets*YourFancyFeatureHere*Enum. I 
 don't really see the benefits of this. We should rather make the 
 interface simple and call it something like PostingsEnum where you 
 have to specify flags on the TermsIterator, and if we can't provide a 
 sufficient enum we throw an exception? 
 I just want to bring up the idea here since it might simplify a lot 
 for users as well as for us when improving our positions / offsets etc. 
 support. 
 thoughts? Ideas? 
 simon 
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




RE: Re: Our Optimize Suggestions on lucene 3.5

2015-01-30 Thread Uwe Schindler
Hi yannianmu,

What you propose here is a so-called "soft reference cache". That's something 
completely different. In fact FieldCache was never a "cache", because it was 
never able to evict entries under memory pressure. The weak map has a different 
reason (please note, it uses "weak keys", not "weak values" as you do in your 
implementation). Weak maps are not useful for caches at all. They are useful 
for decoupling object instances from each other.

The weak map in FieldCacheImpl was there to prevent memory leaks if you 
open multiple IndexReaders on the same segments. If the last one is closed and 
garbage collected, the corresponding uninverted field should disappear. And 
this works correctly. But this does not mean that uninverted fields are removed 
under memory pressure: once loaded, the uninverted stuff stays alive until all 
referring readers are closed; this is the idea behind the design, so there is 
no memory leak! If you want a cache that discards entries under memory 
pressure, implement your own field "cache" (in fact a real "cache", like you did).

Uwe

P.S.: FieldCache was a bad name, because it was no "cache". This is why it 
should be used as "UninvertingReader" now.
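The distinction Uwe draws, weak keys tying an entry's lifetime to its key versus soft values that may vanish under memory pressure, in a small sketch; the assertions only cover the deterministic part, since actual reclamation depends on the GC:

```java
import java.lang.ref.SoftReference;
import java.util.Map;
import java.util.WeakHashMap;

public class WeakVsSoft {
    public static void main(String[] args) {
        // Weak KEYS: the entry's lifetime is tied to the key object. While
        // anything holds the key strongly (here, 'reader'), the entry stays;
        // once the key becomes unreachable, GC may drop the whole entry.
        // This decouples the cache from the reader's lifecycle -- it is not
        // a memory-pressure cache.
        Map<Object, String> byReader = new WeakHashMap<>();
        Object reader = new Object();
        byReader.put(reader, "uninverted field");
        assert byReader.containsKey(reader); // guaranteed while 'reader' is held

        // Soft VALUES: the payload itself may be reclaimed under memory
        // pressure, so every read must cope with get() returning null.
        SoftReference<int[]> cached = new SoftReference<>(new int[1024]);
        int[] data = cached.get();
        if (data == null) {
            data = new int[1024]; // reload on cache miss after reclamation
        }
        System.out.println(data.length);
    }
}
```

So the SoftReference variant above really is a cache, while FieldCacheImpl's weak map never was one; they solve different problems.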

-

Uwe Schindler

H.-H.-Meier-Allee 63, D-28213 Bremen

http://www.thetaphi.de

eMail: u...@thetaphi.de

 

From: yannianmu(母延年) [mailto:yannia...@tencent.com] 
Sent: Friday, January 30, 2015 5:47 AM
To: Robert Muir; dev
Subject: Re: Re: Our Optimize Suggestions on lucene 3.5

 

WeakHashMap may cause a memory leak problem.

We use SoftReference instead of it, like this:

 

 

  public static class SoftLinkMap {
    private static int SORT_CACHE_SIZE = 1024;
    private static float LOADFACTOR = 0.75f;
    final Map<Object, SoftReference<Map<Entry, Object>>> readerCache_lru =
        new LinkedHashMap<Object, SoftReference<Map<Entry, Object>>>(
            (int) Math.ceil(SORT_CACHE_SIZE / LOADFACTOR) + 1, LOADFACTOR, true) {
      @Override
      protected boolean removeEldestEntry(
          Map.Entry<Object, SoftReference<Map<Entry, Object>>> eldest) {
        return size() > SORT_CACHE_SIZE;
      }
    };

    public void remove(Object key) {
      readerCache_lru.remove(key);
    }

    public Map<Entry, Object> get(Object key) {
      SoftReference<Map<Entry, Object>> w = readerCache_lru.get(key);
      if (w == null) {
        return null;
      }
      return w.get();
    }

    public void put(Object key, Map<Entry, Object> value) {
      readerCache_lru.put(key, new SoftReference<Map<Entry, Object>>(value));
    }

    public Set<java.util.Map.Entry<Object, Map<Entry, Object>>> entrySet() {
      Map<Object, Map<Entry, Object>> rtn =
          new HashMap<Object, Map<Entry, Object>>();
      for (java.util.Map.Entry<Object, SoftReference<Map<Entry, Object>>> e
          : readerCache_lru.entrySet()) {
        Map<Entry, Object> v = e.getValue().get();
        if (v != null) {
          rtn.put(e.getKey(), v);
        }
      }
      return rtn.entrySet();
    }
  }

  final SoftLinkMap readerCache = new SoftLinkMap();
  // final Map<Object, Map<Entry, Object>> readerCache =
  //     new WeakHashMap<Object, Map<Entry, Object>>();



 


yannianmu(母延年)

 

From: Robert Muir mailto:rcm...@gmail.com 

Date: 2015-01-30 12:03

To: dev@lucene.apache.org

Subject: Re: Our Optimize Suggestions on lucene 3.5

I am not sure this is the case. Actually, FieldCacheImpl still works as before 
and has a weak hashmap still.

However, i think the weak map is unnecessary. reader close listeners already 
ensure purging from the map, so I don't think the weak map serves any purpose 
today. The only possible advantage it has is to allow you to GC fieldcaches 
when you are already leaking readers... it could just be a regular map IMO.

 

On Thu, Jan 29, 2015 at 9:35 AM, Uwe Schindler u...@thetaphi.de wrote:

Hi,

parts of your suggestions are already done in Lucene 4+. For one part I can 
tell you:


weakhashmap, hashmap, synchronized problems


1. FieldCacheImpl uses WeakHashMap to manage the field value cache; it has a 
memory leak bug.

2. SolrInputDocument uses a lot of HashMap/LinkedHashMap instances per field, 
which wastes a lot of memory.

3. AttributeSource uses WeakHashMap to cache class impls, and uses a global 
synchronized block, which reduces performance.

4. AttributeSource is a base class; NumericField extends AttributeSource, but 
they create a lot of HashMaps that NumericField never uses.

5. For all of this, JVM GC carries a heavy burden for the never-used HashMaps.

All of these Lucene items no longer apply:

1.   FieldCache is gone and is no longer supported in Lucene 5. You should 
use the new DocValues index format for that (column based storage, optimized 
for sorting, numeric). You can still use Lucene's UninvertingReader, but this 
one has no weak maps anymore because it is not a cache.

2.   No idea about that one - it's unrelated to Lucene

3.   AttributeSource no longer uses this; since Lucene 4.8 it uses Java 7's 
java.lang.ClassValue to attach the implementation class to the interface. No 
concurrency problems anymore. It