[jira] [Commented] (SOLR-5931) solrcore.properties is not reloaded when core is reloaded

2014-03-31 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13954969#comment-13954969
 ] 

Shalin Shekhar Mangar commented on SOLR-5931:
-

Are you sure this ever worked? I mean, is this a bug or an enhancement 
request?

 solrcore.properties is not reloaded when core is reloaded
 -

 Key: SOLR-5931
 URL: https://issues.apache.org/jira/browse/SOLR-5931
 Project: Solr
  Issue Type: Bug
  Components: multicore
Affects Versions: 4.7
Reporter: Gunnlaugur Thor Briem
Assignee: Shalin Shekhar Mangar
Priority: Minor

 When I change solrcore.properties for a core, and then reload the core, the 
 previous values of the properties in that file are still in effect. If I 
 *unload* the core and then add it back, in the “Core Admin” section of the 
 admin UI, then the changes in solrcore.properties do take effect.
 My specific test case is a DataImportHandler where {{db-data-config.xml}} 
 uses a property to decide which DB host to talk to:
 {code:xml}
 <dataSource driver="org.postgresql.Driver" name="meta"
 url="jdbc:postgresql://${dbhost}/${solr.core.name}" ... />
 {code}
 When I change that {{dbhost}} property in {{solrcore.properties}} and reload 
 the core, the next dataimport operation still connects to the previous DB 
 host. Reloading the dataimport config does not help. I have to unload the 
 core (or fully restart the whole Solr) for the properties change to take 
 effect.
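For reference, the {{$...}} references in the data config are plain property interpolation resolved at config-parse time, which is why a stale properties snapshot survives a RELOAD. A simplified stand-in for that kind of substitution (not Solr's actual resolver, just the concept):

```java
import java.util.Properties;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PropertySubstitution {
    private static final Pattern PLACEHOLDER = Pattern.compile("\\$\\{([^}]+)\\}");

    // Replace every ${name} in the template with its value from props;
    // unknown names are left untouched.
    public static String resolve(String template, Properties props) {
        Matcher m = PLACEHOLDER.matcher(template);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            String value = props.getProperty(m.group(1));
            m.appendReplacement(sb,
                Matcher.quoteReplacement(value != null ? value : m.group(0)));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("dbhost", "db1.example.com");
        props.setProperty("solr.core.name", "core1");
        System.out.println(resolve("jdbc:postgresql://${dbhost}/${solr.core.name}", props));
        // prints jdbc:postgresql://db1.example.com/core1
    }
}
```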



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Lucene / Solr 4.7.1 RC2

2014-03-31 Thread Shalin Shekhar Mangar
+1

SUCCESS! [1:14:57.063621]

On Sat, Mar 29, 2014 at 2:16 PM, Steve Rowe sar...@gmail.com wrote:
 Please vote for the second Release Candidate for Lucene/Solr 4.7.1.

 Download it here:
 https://people.apache.org/~sarowe/staging_area/lucene-solr-4.7.1-RC2-rev1582953/

 Smoke tester cmdline (from the lucene_solr_4_7 branch):

 python3.2 -u dev-tools/scripts/smokeTestRelease.py \
 https://people.apache.org/~sarowe/staging_area/lucene-solr-4.7.1-RC2-rev1582953/
  \
 1582953 4.7.1 /tmp/4.7.1-smoke

 The smoke tester passed for me: SUCCESS! [0:50:29.936732]

 My vote: +1

 Steve




-- 
Regards,
Shalin Shekhar Mangar.




[jira] [Commented] (SOLR-5931) solrcore.properties is not reloaded when core is reloaded

2014-03-31 Thread Gary Yue (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13954986#comment-13954986
 ] 

Gary Yue commented on SOLR-5931:


Hi,

I also noticed the same problem. (I am upgrading from 3.5 to 4.7.)
I used to be able to define master.url in solrcore.properties, and any changes 
there would take effect after a reload.
However, this no longer works on 4.7.
I noticed the doc says:

Starting with Solr 4.0, the RELOAD command is implemented in a way that results 
in a live reload of the SolrCore, reusing various existing objects such as 
the SolrIndexWriter. As a result, some configuration options cannot be changed 
and made active with a simple RELOAD...

IndexWriter related settings in indexConfig
dataDir location
---

Not sure if this includes properties defined in the solrcore.properties file.

Gary



 solrcore.properties is not reloaded when core is reloaded
 -

 Key: SOLR-5931
 URL: https://issues.apache.org/jira/browse/SOLR-5931
 Project: Solr
  Issue Type: Bug
  Components: multicore
Affects Versions: 4.7
Reporter: Gunnlaugur Thor Briem
Assignee: Shalin Shekhar Mangar
Priority: Minor

 When I change solrcore.properties for a core, and then reload the core, the 
 previous values of the properties in that file are still in effect. If I 
 *unload* the core and then add it back, in the “Core Admin” section of the 
 admin UI, then the changes in solrcore.properties do take effect.
 My specific test case is a DataImportHandler where {{db-data-config.xml}} 
 uses a property to decide which DB host to talk to:
 {code:xml}
 <dataSource driver="org.postgresql.Driver" name="meta"
 url="jdbc:postgresql://${dbhost}/${solr.core.name}" ... />
 {code}
 When I change that {{dbhost}} property in {{solrcore.properties}} and reload 
 the core, the next dataimport operation still connects to the previous DB 
 host. Reloading the dataimport config does not help. I have to unload the 
 core (or fully restart the whole Solr) for the properties change to take 
 effect.






[jira] [Updated] (SOLR-5908) Make REQUESTSTATUS call non-blocking and non-blocked

2014-03-31 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-5908:
---

Attachment: SOLR-5908.patch

Patch that makes the REQUESTSTATUS Collection API call non-blocking and 
non-blocked.

This no longer sends the REQUESTSTATUS call to the OverseerCollectionProcessor; 
instead the CollectionHandler handles it directly.

 Make REQUESTSTATUS call non-blocking and non-blocked
 

 Key: SOLR-5908
 URL: https://issues.apache.org/jira/browse/SOLR-5908
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Anshum Gupta
Assignee: Anshum Gupta
 Attachments: SOLR-5908.patch


 Currently REQUESTSTATUS Collection API call is blocked by any other call in 
 the OCP work queue.
 Make it independent and non-blocked/non-blocking.
 This would be handled as a part of having the OCP multi-threaded but I'm 
 opening this issue to explore other possible options of handling this.
 If the final fix happens via SOLR-5681, will resolve it when SOLR-5681 gets 
 resolved.






[jira] [Created] (SOLR-5938) ConcurrentUpdateSolrServer don't parser the response while response status code isn't 200

2014-03-31 Thread Raintung Li (JIRA)
Raintung Li created SOLR-5938:
-

 Summary: ConcurrentUpdateSolrServer don't parser the response 
while response status code isn't 200
 Key: SOLR-5938
 URL: https://issues.apache.org/jira/browse/SOLR-5938
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.6.1
 Environment: one cluster with two servers, one shard (one leader, one 
replica); index requests are sent to the replica server, which forwards them to 
the leader server.

Reporter: Raintung Li


ConcurrentUpdateSolrServer only reports that an error occurred without parsing 
the response body, so you can't get the error reason from the remote server. 
Example:
You send an index request to one Solr server, and that server forwards it to 
the leader server. The forwarding path goes through 
ConcurrentUpdateSolrServer.java, so when an error happens you can't get the 
right error message; you can only find it by checking the leader server. Yet 
the leader server had actually sent the error message back to the forwarding 
server.
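The fix direction described here, reading and surfacing the error body instead of discarding it, looks roughly like the following with plain JDK `HttpURLConnection` (a hedged sketch, not the actual SolrJ patch; `readBody` and `describeFailure` are hypothetical helpers):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.nio.charset.StandardCharsets;

public class ErrorBodyExample {
    // Drain a response stream into a UTF-8 string so the remote
    // server's error message can be attached to the thrown exception.
    public static String readBody(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        for (int n; (n = in.read(buf)) != -1; ) {
            out.write(buf, 0, n);
        }
        return new String(out.toByteArray(), StandardCharsets.UTF_8);
    }

    // On a non-200 status, read the error stream instead of
    // reporting only the bare status code.
    public static String describeFailure(HttpURLConnection conn) throws IOException {
        int status = conn.getResponseCode();
        InputStream err = conn.getErrorStream();
        String body = (err != null) ? readBody(err) : "";
        return "HTTP " + status + ": " + body;
    }
}
```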







[jira] [Updated] (SOLR-5938) ConcurrentUpdateSolrServer don't parser the response while response status code isn't 200

2014-03-31 Thread Raintung Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raintung Li updated SOLR-5938:
--

Attachment: SOLR-5938.txt

The patch file.

 ConcurrentUpdateSolrServer don't parser the response while response status 
 code isn't 200
 -

 Key: SOLR-5938
 URL: https://issues.apache.org/jira/browse/SOLR-5938
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.6.1
  Environment: one cluster with two servers, one shard (one leader, one 
 replica); index requests are sent to the replica server, which forwards them 
 to the leader server.
Reporter: Raintung Li
  Labels: solrj
 Attachments: SOLR-5938.txt


 ConcurrentUpdateSolrServer only reports that an error occurred without 
 parsing the response body, so you can't get the error reason from the remote 
 server. 
 Example:
 You send an index request to one Solr server, and that server forwards it to 
 the leader server. The forwarding path goes through 
 ConcurrentUpdateSolrServer.java, so when an error happens you can't get the 
 right error message; you can only find it by checking the leader server. Yet 
 the leader server had actually sent the error message back to the forwarding 
 server.






[jira] [Updated] (SOLR-5938) ConcurrentUpdateSolrServer don't parser the response while response status code isn't 200

2014-03-31 Thread Raintung Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raintung Li updated SOLR-5938:
--

Attachment: SOLR-5938.txt

 ConcurrentUpdateSolrServer don't parser the response while response status 
 code isn't 200
 -

 Key: SOLR-5938
 URL: https://issues.apache.org/jira/browse/SOLR-5938
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.6.1
  Environment: one cluster with two servers, one shard (one leader, one 
 replica); index requests are sent to the replica server, which forwards them 
 to the leader server.
Reporter: Raintung Li
  Labels: solrj
 Attachments: SOLR-5938.txt


 ConcurrentUpdateSolrServer only reports that an error occurred without 
 parsing the response body, so you can't get the error reason from the remote 
 server. 
 Example:
 You send an index request to one Solr server, and that server forwards it to 
 the leader server. The forwarding path goes through 
 ConcurrentUpdateSolrServer.java, so when an error happens you can't get the 
 right error message; you can only find it by checking the leader server. Yet 
 the leader server had actually sent the error message back to the forwarding 
 server.






[jira] [Updated] (SOLR-5938) ConcurrentUpdateSolrServer don't parser the response while response status code isn't 200

2014-03-31 Thread Raintung Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raintung Li updated SOLR-5938:
--

Attachment: (was: SOLR-5938.txt)

 ConcurrentUpdateSolrServer don't parser the response while response status 
 code isn't 200
 -

 Key: SOLR-5938
 URL: https://issues.apache.org/jira/browse/SOLR-5938
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.6.1
  Environment: one cluster with two servers, one shard (one leader, one 
 replica); index requests are sent to the replica server, which forwards them 
 to the leader server.
Reporter: Raintung Li
  Labels: solrj
 Attachments: SOLR-5938.txt


 ConcurrentUpdateSolrServer only reports that an error occurred without 
 parsing the response body, so you can't get the error reason from the remote 
 server. 
 Example:
 You send an index request to one Solr server, and that server forwards it to 
 the leader server. The forwarding path goes through 
 ConcurrentUpdateSolrServer.java, so when an error happens you can't get the 
 right error message; you can only find it by checking the leader server. Yet 
 the leader server had actually sent the error message back to the forwarding 
 server.






[jira] [Commented] (LUCENE-2446) Add checksums to Lucene segment files

2014-03-31 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955022#comment-13955022
 ] 

Shai Erera commented on LUCENE-2446:


This looks really great! +1

 Add checksums to Lucene segment files
 -

 Key: LUCENE-2446
 URL: https://issues.apache.org/jira/browse/LUCENE-2446
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Reporter: Lance Norskog
  Labels: checksum
 Attachments: LUCENE-2446.patch


 It would be useful for the different files in a Lucene index to include 
 checksums. This would make it easy to spot corruption while copying index 
 files around; the various cloud efforts assume many more data-copying 
 operations than older single-index implementations.
 This feature might be much easier to implement if all index files are created 
 in a sequential fashion. This issue therefore depends on [LUCENE-2373].






[jira] [Commented] (SOLR-5914) Almost all Solr tests no longer cleanup their temp dirs on Windows

2014-03-31 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955025#comment-13955025
 ] 

Dawid Weiss commented on SOLR-5914:
---

 Anyway, we always seem to have been on the same page but it seems you want to 
 insist that we are not.

We are; I never meant to give you the impression that we're not. Discussion 
moves progress forward. :)

I'll try to address your points above; give me some time though, as I think 
they are all addressable. I need a spare minute outside of work to handle this.


 Almost all Solr tests no longer cleanup their temp dirs on Windows
 --

 Key: SOLR-5914
 URL: https://issues.apache.org/jira/browse/SOLR-5914
 Project: Solr
  Issue Type: Bug
  Components: Tests
Affects Versions: 4.8
Reporter: Uwe Schindler
Assignee: Dawid Weiss
Priority: Critical
 Fix For: 4.8

 Attachments: SOLR-5914 .patch, SOLR-5914 .patch, 
 branch4x-jenkins.png, build-plugin.jpg, trunk-jenkins.png


 Recently the Windows Jenkins build server has had the problem of constantly 
 running out of disk space. This machine runs 2 workspaces (4.x and trunk) and 
 initially has 8 gigabytes of free SSD disk space.
 Because of the constantly failing tests, the test framework does not 
 forcefully clean up the J0 working folders after running tests. This leads to 
 the workspace being filled with tons of Solr home dirs. I tried this on my 
 local machine:
 - run ant test
 - go to build/.../test/J0 and watch the folders appearing: almost every test 
 no longer cleans up after shutting down, leaving millions of files there. 
 This is approx. 3 to 4 gigabytes!!!
 In Lucene the folders are correctly removed. This started happening recently, 
 so I think we have some code like ([~erickerickson] !!!):
 {{new Properties().load(new FileInputStream(...))}} that does not close the 
 files. Because of this, the test's afterClass cannot clean up folders 
 anymore. If you look in the test log, you see messages like {{ WARNING: 
 best effort to remove 
 C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\solr\build\solr-core\test\J0\.\org.apache.solr.cloud.TestShortCircuitedRequests-1395693845226
  FAILED !}} all the time.
 So if anybody has committed changes that might not close files correctly, 
 please fix them! Otherwise I will have to disable testing on Windows - and I 
 will no longer run the Solr tests either: my local computer also uses 
 gigabytes of temp space after running tests!
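The suspected pattern above, loading a Properties file without closing the stream, keeps a file handle open and blocks directory deletion on Windows. The fix is try-with-resources (a generic sketch, not the actual offending Solr code):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

public class LoadProperties {
    // Leaky variant: the FileInputStream is never closed, so on Windows the
    // containing directory cannot be deleted until GC finalizes the stream.
    //   new Properties().load(new FileInputStream(file));

    // Correct variant: the stream is closed when the try block exits,
    // releasing the file handle immediately.
    public static Properties load(Path file) throws IOException {
        Properties props = new Properties();
        try (InputStream in = Files.newInputStream(file)) {
            props.load(in);
        }
        return props;
    }
}
```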






[jira] [Created] (LUCENE-5560) Clenaup charset handling for Java 7

2014-03-31 Thread Uwe Schindler (JIRA)
Uwe Schindler created LUCENE-5560:
-

 Summary: Clenaup charset handling for Java 7
 Key: LUCENE-5560
 URL: https://issues.apache.org/jira/browse/LUCENE-5560
 Project: Lucene - Core
  Issue Type: Task
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.8, 5.0


As we are now on Java 7, we should cleanup our charset handling to use the 
official constants added by Java 7: {{StandardCharsets}}

This issue is just a small code refactoring, trying to nuke the IOUtils 
constants and replace them with the official ones provided by Java 7.
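The refactoring amounts to replacing magic charset-name strings and project-local constants with the compile-time-safe JDK constants, along these lines (a representative sketch of the before/after, not a patch excerpt):

```java
import java.nio.charset.StandardCharsets;

public class CharsetCleanup {
    // Before: new String(bytes, "UTF-8") repeats a magic string and forces
    // callers to handle the checked UnsupportedEncodingException.
    // After: StandardCharsets.UTF_8 is guaranteed present on every JVM
    // and the Charset overloads throw nothing.
    public static String decode(byte[] bytes) {
        return new String(bytes, StandardCharsets.UTF_8);
    }

    public static byte[] encode(String s) {
        return s.getBytes(StandardCharsets.UTF_8);
    }
}
```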






[jira] [Updated] (LUCENE-5560) Cleanup charset handling for Java 7

2014-03-31 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-5560:
--

Summary: Cleanup charset handling for Java 7  (was: Clenaup charset 
handling for Java 7)

 Cleanup charset handling for Java 7
 ---

 Key: LUCENE-5560
 URL: https://issues.apache.org/jira/browse/LUCENE-5560
 Project: Lucene - Core
  Issue Type: Task
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.8, 5.0


 As we are now on Java 7, we should cleanup our charset handling to use the 
 official constants added by Java 7: {{StandardCharsets}}
 This issue is just a small code refactoring, trying to nuke the IOUtils 
 constants and replace them with the official ones provided by Java 7.






[JENKINS] Lucene-Solr-4.7-Linux (64bit/jdk1.7.0_60-ea-b10) - Build # 58 - Failure!

2014-03-31 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.7-Linux/58/
Java: 64bit/jdk1.7.0_60-ea-b10 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
REGRESSION:  
org.apache.solr.client.solrj.impl.CloudSolrServerTest.testDistribSearch

Error Message:
java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 
127.0.0.1:33889 within 3 ms

Stack Trace:
org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: 
Could not connect to ZooKeeper 127.0.0.1:33889 within 3 ms
at 
__randomizedtesting.SeedInfo.seed([8C41F197BC9FFE7C:DA77F8FCBC09E40]:0)
at 
org.apache.solr.common.cloud.SolrZkClient.init(SolrZkClient.java:148)
at 
org.apache.solr.common.cloud.SolrZkClient.init(SolrZkClient.java:99)
at 
org.apache.solr.common.cloud.SolrZkClient.init(SolrZkClient.java:94)
at 
org.apache.solr.common.cloud.SolrZkClient.init(SolrZkClient.java:85)
at 
org.apache.solr.cloud.AbstractZkTestCase.buildZooKeeper(AbstractZkTestCase.java:89)
at 
org.apache.solr.cloud.AbstractZkTestCase.buildZooKeeper(AbstractZkTestCase.java:83)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.setUp(AbstractDistribZkTestBase.java:70)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.setUp(AbstractFullDistribZkTestBase.java:200)
at 
org.apache.solr.client.solrj.impl.CloudSolrServerTest.setUp(CloudSolrServerTest.java:80)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:771)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 

[jira] [Updated] (LUCENE-5560) Cleanup charset handling for Java 7

2014-03-31 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-5560:
--

Attachment: LUCENE-5560.patch

First patch, removing/deprecating {IOUtils.CHARSET_UTF_8}}.

Next will be some Solr constants and hardcoded String variants of charsets.

 Cleanup charset handling for Java 7
 ---

 Key: LUCENE-5560
 URL: https://issues.apache.org/jira/browse/LUCENE-5560
 Project: Lucene - Core
  Issue Type: Task
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5560.patch


 As we are now on Java 7, we should cleanup our charset handling to use the 
 official constants added by Java 7: {{StandardCharsets}}
 This issue is just a small code refactoring, trying to nuke the IOUtils 
 constants and replace them with the official ones provided by Java 7.






[jira] [Comment Edited] (LUCENE-5560) Cleanup charset handling for Java 7

2014-03-31 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955054#comment-13955054
 ] 

Uwe Schindler edited comment on LUCENE-5560 at 3/31/14 9:45 AM:


First patch, removing/deprecating {{IOUtils.CHARSET_UTF_8}}.

Next will be some Solr constants and hardcoded String variants of charsets.


was (Author: thetaphi):
First patch, removing/deprecating {I{OUtils.CHARSET_UTF_8}}.

Next will be some Solr constants and hardcoded String variants of charsets.

 Cleanup charset handling for Java 7
 ---

 Key: LUCENE-5560
 URL: https://issues.apache.org/jira/browse/LUCENE-5560
 Project: Lucene - Core
  Issue Type: Task
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5560.patch


 As we are now on Java 7, we should cleanup our charset handling to use the 
 official constants added by Java 7: {{StandardCharsets}}
 This issue is just a small code refactoring, trying to nuke the IOUtils 
 constants and replace them with the official ones provided by Java 7.






[jira] [Created] (LUCENE-5561) NativeUnixDirectory is broken

2014-03-31 Thread Michael McCandless (JIRA)
Michael McCandless created LUCENE-5561:
--

 Summary: NativeUnixDirectory is broken
 Key: LUCENE-5561
 URL: https://issues.apache.org/jira/browse/LUCENE-5561
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 4.8, 5.0


Several things:

  * It assumed ByteBuffer.allocateDirect would be page-aligned, but
that's no longer true in Java 1.7

  * It failed to throw FNFE if a file didn't exist (throw IOExc
instead)

  * It didn't have a default ctor taking File (so it was hard to run
all tests against it)

  * It didn't have a test case

  * Some Javadocs problems

  * I cutover to FilterDirectory

I tried to cutover to BufferedIndexOutput since this is essentially
all that NativeUnixIO is doing ... but it's not simple because BIO
sometimes flushes non-full (non-aligned) buffers even before the end
of the file (its writeBytes method).

I also factored out a BaseDirectoryTestCase, and tried to fold in
generic Directory tests, and added/cutover explicit tests for the
core directory impls.
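Direct I/O of the kind NativeUnixDirectory performs requires buffer addresses, file offsets, and lengths to be multiples of the page size, which is why the lost page-alignment guarantee of ByteBuffer.allocateDirect in Java 1.7 breaks it. The alignment arithmetic involved is just power-of-two rounding (an illustrative sketch, not the actual JNI code):

```java
public class Alignment {
    // Round v down/up to the nearest multiple of align,
    // where align must be a power of two (e.g. a 4096-byte page).
    public static long alignDown(long v, long align) {
        return v & ~(align - 1);
    }

    public static long alignUp(long v, long align) {
        return (v + align - 1) & ~(align - 1);
    }

    // Over-allocating by one extra page always leaves room to carve out
    // a page-aligned region inside an unaligned allocation.
    public static long alignedStart(long baseAddress, long align) {
        return alignUp(baseAddress, align);
    }
}
```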







[jira] [Updated] (LUCENE-5561) NativeUnixDirectory is broken

2014-03-31 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-5561:
---

Attachment: LUCENE-5561.patch

Initial patch ... I added TODOs to see if we can remove our custom JNI code and 
use Unsafe / reflection-into-secret-JDK-classes instead but I haven't explored 
that yet ...

 NativeUnixDirectory is broken
 -

 Key: LUCENE-5561
 URL: https://issues.apache.org/jira/browse/LUCENE-5561
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5561.patch


 Several things:
   * It assumed ByteBuffer.allocateDirect would be page-aligned, but
 that's no longer true in Java 1.7
   * It failed to throw FNFE if a file didn't exist (throw IOExc
 instead)
   * It didn't have a default ctor taking File (so it was hard to run
 all tests against it)
   * It didn't have a test case
   * Some Javadocs problems
   * I cutover to FilterDirectory
 I tried to cutover to BufferedIndexOutput since this is essentially
 all that NativeUnixIO is doing ... but it's not simple because BIO
 sometimes flushes non-full (non-aligned) buffers even before the end
 of the file (its writeBytes method).
 I also factored out a BaseDirectoryTestCase, and tried to fold in
 generic Directory tests, and added/cutover explicit tests for the
 core directory impls.






[jira] [Commented] (LUCENE-2446) Add checksums to Lucene segment files

2014-03-31 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955073#comment-13955073
 ] 

Michael McCandless commented on LUCENE-2446:


+1, this looks wonderful; it gives us end-to-end checksums, like ZFS.

This means, when there is a checksum mis-match, we can be quite certain that 
there's a hardware problem (bit flipper) in the user's env, and not a Lucene 
bug.
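The mechanism is the familiar one: compute a checksum while writing, store it with the file, and re-verify on read or copy so corruption is caught at the boundary. A minimal sketch with java.util.zip.CRC32 (the concept only, not Lucene's actual footer format):

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

public class ChecksumExample {
    public static long checksum(byte[] data) {
        CRC32 crc = new CRC32();
        crc.update(data, 0, data.length);
        return crc.getValue();
    }

    // True when the copied bytes still match the checksum recorded at
    // write time, i.e. no bit flipped in transit.
    public static boolean verify(byte[] copied, long expected) {
        return checksum(copied) == expected;
    }

    public static void main(String[] args) {
        byte[] original = "segment data".getBytes(StandardCharsets.UTF_8);
        long stored = checksum(original);
        byte[] corrupted = original.clone();
        corrupted[3] ^= 1; // simulate a hardware bit flip
        System.out.println(verify(original, stored));  // true
        System.out.println(verify(corrupted, stored)); // false
    }
}
```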

 Add checksums to Lucene segment files
 -

 Key: LUCENE-2446
 URL: https://issues.apache.org/jira/browse/LUCENE-2446
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Reporter: Lance Norskog
  Labels: checksum
 Attachments: LUCENE-2446.patch


 It would be useful for the different files in a Lucene index to include 
 checksums. This would make it easy to spot corruption while copying index 
 files around; the various cloud efforts assume many more data-copying 
 operations than older single-index implementations.
 This feature might be much easier to implement if all index files are created 
 in a sequential fashion. This issue therefore depends on [LUCENE-2373].






[jira] [Updated] (LUCENE-5560) Cleanup charset handling for Java 7

2014-03-31 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-5560:
--

Attachment: LUCENE-5560.patch

New patch, now also replacing a horrible number of hardcoded "UTF-8" string 
literals (especially in tests).

 Cleanup charset handling for Java 7
 ---

 Key: LUCENE-5560
 URL: https://issues.apache.org/jira/browse/LUCENE-5560
 Project: Lucene - Core
  Issue Type: Task
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5560.patch, LUCENE-5560.patch


 As we are now on Java 7, we should cleanup our charset handling to use the 
 official constants added by Java 7: {{StandardCharsets}}
 This issue is just a small code refactoring, trying to nuke the IOUtils 
 constants and replace them with the official ones provided by Java 7.






[jira] [Commented] (LUCENE-2446) Add checksums to Lucene segment files

2014-03-31 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955080#comment-13955080
 ] 

Simon Willnauer commented on LUCENE-2446:
-

this is awesome - Robert, do you think it would make sense to commit this to a 
branch and let CI run it for a bit? That way we can clean stuff up further if 
needed as well. I just wonder though, since it's a pretty big patch :)

 Add checksums to Lucene segment files
 -

 Key: LUCENE-2446
 URL: https://issues.apache.org/jira/browse/LUCENE-2446
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Reporter: Lance Norskog
  Labels: checksum
 Attachments: LUCENE-2446.patch


 It would be useful for the different files in a Lucene index to include 
 checksums. This would make it easy to spot corruption while copying index 
 files around; the various cloud efforts assume many more data-copying 
 operations than older single-index implementations.
 This feature might be much easier to implement if all index files are created 
 in a sequential fashion. This issue therefore depends on [LUCENE-2373].






[jira] [Commented] (LUCENE-5560) Cleanup charset handling for Java 7

2014-03-31 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955081#comment-13955081
 ] 

Uwe Schindler commented on LUCENE-5560:
---

I will commit this soon, because the patch is large and might get out of sync 
quite soon.
*This is not complete, I just did this as a quick cleanup!*

 Cleanup charset handling for Java 7
 ---

 Key: LUCENE-5560
 URL: https://issues.apache.org/jira/browse/LUCENE-5560
 Project: Lucene - Core
  Issue Type: Task
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5560.patch, LUCENE-5560.patch


 As we are now on Java 7, we should cleanup our charset handling to use the 
 official constants added by Java 7: {{StandardCharsets}}
 This issue is just a small code refactoring, trying to nuke the IOUtils 
 constants and replace them with the official ones provided by Java 7.






[jira] [Commented] (SOLR-4470) Support for basic http auth in internal solr requests

2014-03-31 Thread Per Steffensen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955084#comment-13955084
 ] 

Per Steffensen commented on SOLR-4470:
--

Regarding the following FIXME in SolrCmdDistributor
{quote}
// FIXME Here it is a problem using StreamingSolrServers which uses 
ConcurrentUpdateSolrServer for its requests. They (currently)
// do not respond errors back, and users of SolrCmdDistributor actually ought 
to get errors back, so that they can eventually
// be reported back to the issuer of the outer request that triggers the 
SolrCmdDistributor requests.
// E.g. if you issue a deleteByQuery() from a client you will not get any 
information back about whether or not it was actually
// carried out successfully throughout the complete Solr cluster. See 
workaround in SecurityDistributed.doAndAssertSolrExeptionFromStreamingSolrServer
{quote}
The text is a little misleading. Actually SolrCmdDistributor, 
StreamingSolrServers and ConcurrentUpdateSolrServers do collect errors and make 
them available to the components using them. The problem is that 
DistributedUpdateProcessor.doFinish does not report the errors back to the 
outside client in case there is more than one error (see the comment "// TODO - 
we may need to tell about more than one error..." in 
DistributedUpdateProcessor.doFinish). The two places in SecurityDistributedTest 
that use doAndAssertSolrExeptionFromStreamingSolrServer expect to get an 
exception back, but do not, because both subrequests made internally by 
SolrCmdDistributor fail, and therefore DistributedUpdateProcessor does not 
report back the errors at all. Therefore I made the hack to pick up the 
exceptions at the StreamingSolrServers level so that I, in 
SecurityDistributedTest, can actually assert that both inner requests fail. I 
do not know if there is more to report - I expect, because of the "// TODO - we 
may need to tell about more than one error..." comment, that this has already 
been reported. It is a little hard to fix, because you need to create an 
infrastructure that is able to report multiple errors back to the client. We 
already have that in our version of Solr (we created that infrastructure when 
we implemented optimistic locking, in order to be able to get e.g. 3 
version-conflict errors back when sending a multiple-document update including 
10 documents, where 3 failed and 7 succeeded), but it is a long time since we 
handed it over to Apache Solr (see SOLR-3382). I guess there is nothing left to 
report - I have no problem with you just deleting this FIXME.

 Support for basic http auth in internal solr requests
 -

 Key: SOLR-4470
 URL: https://issues.apache.org/jira/browse/SOLR-4470
 Project: Solr
  Issue Type: New Feature
  Components: clients - java, multicore, replication (java), SolrCloud
Affects Versions: 4.0
Reporter: Per Steffensen
Assignee: Jan Høydahl
  Labels: authentication, https, solrclient, solrcloud, ssl
 Fix For: 5.0

 Attachments: SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, 
 SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, 
 SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, 
 SOLR-4470.patch, SOLR-4470_branch_4x_r1452629.patch, 
 SOLR-4470_branch_4x_r1452629.patch, SOLR-4470_branch_4x_r145.patch, 
 SOLR-4470_trunk_r1568857.patch


 We want to protect any HTTP-resource (url). We want to require credentials no 
 matter what kind of HTTP-request you make to a Solr-node.
 It can fairly easily be achieved as described on 
 http://wiki.apache.org/solr/SolrSecurity. The problem is that Solr-nodes 
 also make internal requests to other Solr-nodes, and for it to work 
 credentials need to be provided here also.
 Ideally we would like to forward credentials from a particular request to 
 all the internal sub-requests it triggers, e.g. for search and update 
 requests.
 But there are also internal requests
 * that are only indirectly/asynchronously triggered by outside requests (e.g. 
 shard creation/deletion/etc based on calls to the Collection API)
 * that do not in any way have a relation to an outside super-request (e.g. 
 replica synching stuff)
 We would like to aim at a solution where the original credentials are 
 forwarded when a request directly/synchronously triggers a subrequest, with a 
 fallback to configured internal credentials for the 
 asynchronous/non-rooted requests.
 In our solution we would aim at only supporting basic http auth, but we would 
 like to make a framework around it, so that not too much refactoring is 
 needed if you later want to add support for other kinds of auth (e.g. digest).
 We will work at a solution but create this JIRA issue early in order to get 
 input/comments from the community as early as 

[jira] [Commented] (SOLR-5931) solrcore.properties is not reloaded when core is reloaded

2014-03-31 Thread Gunnlaugur Thor Briem (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955085#comment-13955085
 ] 

Gunnlaugur Thor Briem commented on SOLR-5931:
-

I can't say for sure whether this worked earlier (I only just started to use 
solrcore.properties to control this, at the same time as I was upgrading, 
exactly like Gary, from 3.5 to 4.7).

In any case, I do consider it a bug, even if only in the sense of “violating 
the principle of least astonishment.” : ) Applying configuration changes is the 
whole point of reloading the core, for me at least — and changes in 
solrconfig.xml and schema.xml and db-data-config.xml do get applied, so it 
seems incongruous for solrcore.properties to be different.

 solrcore.properties is not reloaded when core is reloaded
 -

 Key: SOLR-5931
 URL: https://issues.apache.org/jira/browse/SOLR-5931
 Project: Solr
  Issue Type: Bug
  Components: multicore
Affects Versions: 4.7
Reporter: Gunnlaugur Thor Briem
Assignee: Shalin Shekhar Mangar
Priority: Minor

 When I change solrcore.properties for a core, and then reload the core, the 
 previous values of the properties in that file are still in effect. If I 
 *unload* the core and then add it back, in the “Core Admin” section of the 
 admin UI, then the changes in solrcore.properties do take effect.
 My specific test case is a DataImportHandler where {{db-data-config.xml}} 
 uses a property to decide which DB host to talk to:
 {code:xml}
 <dataSource driver="org.postgresql.Driver" name="meta" url="jdbc:postgresql://${dbhost}/${solr.core.name}" .../>
 {code}
 When I change that {{dbhost}} property in {{solrcore.properties}} and reload 
 the core, the next dataimport operation still connects to the previous DB 
 host. Reloading the dataimport config does not help. I have to unload the 
 core (or fully restart the whole Solr) for the properties change to take 
 effect.
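For reference, the substitution chain being described works roughly like this; the `dbhost` property name comes from the report above, while the value is purely illustrative:

```properties
# solrcore.properties for the core
# ${dbhost} in db-data-config.xml resolves to this value, and
# ${solr.core.name} resolves to the core's name.
dbhost=db1.example.com
```

The reported bug is that this file is only re-read when the core is created, or unloaded and re-added, not on a plain core reload.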






[jira] [Updated] (LUCENE-5560) Cleanup charset handling for Java 7

2014-03-31 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-5560:
--

Attachment: LUCENE-5560.patch

Small updates and fix of comment typo.

Running tests...

 Cleanup charset handling for Java 7
 ---

 Key: LUCENE-5560
 URL: https://issues.apache.org/jira/browse/LUCENE-5560
 Project: Lucene - Core
  Issue Type: Task
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5560.patch, LUCENE-5560.patch, LUCENE-5560.patch


 As we are now on Java 7, we should cleanup our charset handling to use the 
 official constants added by Java 7: {{StandardCharsets}}
 This issue is just a small code refactoring, trying to nuke the IOUtils 
 constants and replace them with the official ones provided by Java 7.






[jira] [Commented] (LUCENE-5560) Cleanup charset handling for Java 7

2014-03-31 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955091#comment-13955091
 ] 

Robert Muir commented on LUCENE-5560:
-

thanks for doing this!

 Cleanup charset handling for Java 7
 ---

 Key: LUCENE-5560
 URL: https://issues.apache.org/jira/browse/LUCENE-5560
 Project: Lucene - Core
  Issue Type: Task
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5560.patch, LUCENE-5560.patch, LUCENE-5560.patch


 As we are now on Java 7, we should cleanup our charset handling to use the 
 official constants added by Java 7: {{StandardCharsets}}
 This issue is just a small code refactoring, trying to nuke the IOUtils 
 constants and replace them with the official ones provided by Java 7.






[jira] [Commented] (LUCENE-2446) Add checksums to Lucene segment files

2014-03-31 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955092#comment-13955092
 ] 

Michael McCandless commented on LUCENE-2446:


I think the patch is quite solid as is ... we should just commit & let it bake 
for a while on trunk?  We can iterate on further improvements...

I just ran distributed beasting from luceneutil (runs all Lucene core + modules 
tests, across 5 machines, 28 parallel JVMs) ~150 times over and it didn't hit 
any test failures.

 Add checksums to Lucene segment files
 -

 Key: LUCENE-2446
 URL: https://issues.apache.org/jira/browse/LUCENE-2446
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Reporter: Lance Norskog
  Labels: checksum
 Attachments: LUCENE-2446.patch


 It would be useful for the different files in a Lucene index to include 
 checksums. This would make it easy to spot corruption while copying index 
 files around; the various cloud efforts assume many more data-copying 
 operations than older single-index implementations.
 This feature might be much easier to implement if all index files are created 
 in a sequential fashion. This issue therefore depends on [LUCENE-2373].






[jira] [Updated] (LUCENE-5560) Cleanup charset handling for Java 7

2014-03-31 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-5560:
--

Attachment: LUCENE-5560.patch

After running tests, I fixed a test bug: TypeAsPayloadTokenFilterTest was using 
BytesRef incorrectly (ignoring offset and length). This was not seen before, 
because the filter created the payload in an inefficient way, too.

Also I fixed a problem I introduced in the Maven Dep Checker (my fault).

 Cleanup charset handling for Java 7
 ---

 Key: LUCENE-5560
 URL: https://issues.apache.org/jira/browse/LUCENE-5560
 Project: Lucene - Core
  Issue Type: Task
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5560.patch, LUCENE-5560.patch, LUCENE-5560.patch, 
 LUCENE-5560.patch


 As we are now on Java 7, we should cleanup our charset handling to use the 
 official constants added by Java 7: {{StandardCharsets}}
 This issue is just a small code refactoring, trying to nuke the IOUtils 
 constants and replace them with the official ones provided by Java 7.






[jira] [Commented] (LUCENE-2446) Add checksums to Lucene segment files

2014-03-31 Thread Bert Sanders (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955103#comment-13955103
 ] 

Bert Sanders commented on LUCENE-2446:
--

Is the CRC32 intrinsic available and the same on all relevant platforms?
(I assume it is Intel's CRC32C version.)

If not, what happens? Does it still work?
Could it be a performance concern?

 Add checksums to Lucene segment files
 -

 Key: LUCENE-2446
 URL: https://issues.apache.org/jira/browse/LUCENE-2446
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Reporter: Lance Norskog
  Labels: checksum
 Attachments: LUCENE-2446.patch


 It would be useful for the different files in a Lucene index to include 
 checksums. This would make it easy to spot corruption while copying index 
 files around; the various cloud efforts assume many more data-copying 
 operations than older single-index implementations.
 This feature might be much easier to implement if all index files are created 
 in a sequential fashion. This issue therefore depends on [LUCENE-2373].






[jira] [Comment Edited] (LUCENE-2446) Add checksums to Lucene segment files

2014-03-31 Thread Bert Sanders (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955103#comment-13955103
 ] 

Bert Sanders edited comment on LUCENE-2446 at 3/31/14 11:20 AM:


Is the CRC32 intrinsic available and the same on all relevant platforms?
(I assume it is Intel's CRC32C version.)

If not, what happens? Does it still work? Does it need to be 
cross-platform-compatible?
Could it be a performance concern?


was (Author: mrbsd):
Is CRC32 intrinsic available and the same on all relevant platforms ?
(I assume it is CRC32C version of Intel)

If not, what happens ? Does it still work ? 
Could it be a concern for performance-related issue ?

 Add checksums to Lucene segment files
 -

 Key: LUCENE-2446
 URL: https://issues.apache.org/jira/browse/LUCENE-2446
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Reporter: Lance Norskog
  Labels: checksum
 Attachments: LUCENE-2446.patch


 It would be useful for the different files in a Lucene index to include 
 checksums. This would make it easy to spot corruption while copying index 
 files around; the various cloud efforts assume many more data-copying 
 operations than older single-index implementations.
 This feature might be much easier to implement if all index files are created 
 in a sequential fashion. This issue therefore depends on [LUCENE-2373].






[jira] [Commented] (LUCENE-5560) Cleanup charset handling for Java 7

2014-03-31 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955109#comment-13955109
 ] 

ASF subversion and git services commented on LUCENE-5560:
-

Commit 1583302 from [~thetaphi] in branch 'dev/trunk'
[ https://svn.apache.org/r1583302 ]

LUCENE-5560: Cleanup charset handling for Java 7

 Cleanup charset handling for Java 7
 ---

 Key: LUCENE-5560
 URL: https://issues.apache.org/jira/browse/LUCENE-5560
 Project: Lucene - Core
  Issue Type: Task
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5560.patch, LUCENE-5560.patch, LUCENE-5560.patch, 
 LUCENE-5560.patch


 As we are now on Java 7, we should cleanup our charset handling to use the 
 official constants added by Java 7: {{StandardCharsets}}
 This issue is just a small code refactoring, trying to nuke the IOUtils 
 constants and replace them with the official ones provided by Java 7.






[jira] [Commented] (LUCENE-2446) Add checksums to Lucene segment files

2014-03-31 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955111#comment-13955111
 ] 

Robert Muir commented on LUCENE-2446:
-

java.util.zip.CRC32 has been used by lucene for the segments_N for quite some 
time.
This just applies it to other files. And while it is nice that it is 3x faster 
in java 8, it is still 1GB/second with java 7 on my computer.
By the way, the instruction used in java8 is not CRC32C. it is PCLMULQDQ. If 
they broke this, it would be a big bug in java :)
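For context, the check being discussed is plain java.util.zip.CRC32 over a file's bytes. A minimal sketch of why that catches copy corruption (class and variable names are illustrative, not Lucene's actual API):

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

public class ChecksumDemo {

    // CRC32 over a buffer, returned as an unsigned 32-bit value in a long --
    // the same java.util.zip.CRC32 class Lucene already uses for segments_N.
    static long crc32(byte[] data) {
        CRC32 crc = new CRC32();
        crc.update(data, 0, data.length);
        return crc.getValue();
    }

    public static void main(String[] args) {
        byte[] payload = "segment data".getBytes(StandardCharsets.UTF_8);
        long stored = crc32(payload);   // checksum written alongside the file

        payload[0] ^= 0x01;             // flip one bit: simulated corruption
        long actual = crc32(payload);   // checksum recomputed on read

        // Any single-bit error changes a CRC32, so the mismatch is detected.
        System.out.println(stored != actual); // prints: true
    }
}
```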

 Add checksums to Lucene segment files
 -

 Key: LUCENE-2446
 URL: https://issues.apache.org/jira/browse/LUCENE-2446
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Reporter: Lance Norskog
  Labels: checksum
 Attachments: LUCENE-2446.patch


 It would be useful for the different files in a Lucene index to include 
 checksums. This would make it easy to spot corruption while copying index 
 files around; the various cloud efforts assume many more data-copying 
 operations than older single-index implementations.
 This feature might be much easier to implement if all index files are created 
 in a sequential fashion. This issue therefore depends on [LUCENE-2373].






Re: [VOTE] Lucene / Solr 4.7.1 RC2

2014-03-31 Thread Simon Willnauer
+1

SUCCESS! [1:17:51.403337]


On Mon, Mar 31, 2014 at 8:30 AM, Shalin Shekhar Mangar
shalinman...@gmail.com wrote:
 +1

 SUCCESS! [1:14:57.063621]

 On Sat, Mar 29, 2014 at 2:16 PM, Steve Rowe sar...@gmail.com wrote:
 Please vote for the second Release Candidate for Lucene/Solr 4.7.1.

 Download it here:
 https://people.apache.org/~sarowe/staging_area/lucene-solr-4.7.1-RC2-rev1582953/

 Smoke tester cmdline (from the lucene_solr_4_7 branch):

 python3.2 -u dev-tools/scripts/smokeTestRelease.py \
 https://people.apache.org/~sarowe/staging_area/lucene-solr-4.7.1-RC2-rev1582953/
  \
 1582953 4.7.1 /tmp/4.7.1-smoke

 The smoke tester passed for me: SUCCESS! [0:50:29.936732]

 My vote: +1

 Steve
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




 --
 Regards,
 Shalin Shekhar Mangar.

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org





[jira] [Commented] (LUCENE-2446) Add checksums to Lucene segment files

2014-03-31 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955129#comment-13955129
 ] 

Simon Willnauer commented on LUCENE-2446:
-

I glanced at the patch again and I think it looks pretty good. I didn't look 
closely enough before! +1 to move that in and let it bake.

 Add checksums to Lucene segment files
 -

 Key: LUCENE-2446
 URL: https://issues.apache.org/jira/browse/LUCENE-2446
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Reporter: Lance Norskog
  Labels: checksum
 Attachments: LUCENE-2446.patch


 It would be useful for the different files in a Lucene index to include 
 checksums. This would make it easy to spot corruption while copying index 
 files around; the various cloud efforts assume many more data-copying 
 operations than older single-index implementations.
 This feature might be much easier to implement if all index files are created 
 in a sequential fashion. This issue therefore depends on [LUCENE-2373].






[jira] [Commented] (SOLR-5931) solrcore.properties is not reloaded when core is reloaded

2014-03-31 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955131#comment-13955131
 ] 

Erick Erickson commented on SOLR-5931:
--

I'm pretty sure this never worked. That said, I agree that it _should_ work.

Shalin:
So don't waste any time looking for what changed to break this functionality - 
I strongly doubt you'll find anything like that. This is just new 
functionality...



 solrcore.properties is not reloaded when core is reloaded
 -

 Key: SOLR-5931
 URL: https://issues.apache.org/jira/browse/SOLR-5931
 Project: Solr
  Issue Type: Bug
  Components: multicore
Affects Versions: 4.7
Reporter: Gunnlaugur Thor Briem
Assignee: Shalin Shekhar Mangar
Priority: Minor

 When I change solrcore.properties for a core, and then reload the core, the 
 previous values of the properties in that file are still in effect. If I 
 *unload* the core and then add it back, in the “Core Admin” section of the 
 admin UI, then the changes in solrcore.properties do take effect.
 My specific test case is a DataImportHandler where {{db-data-config.xml}} 
 uses a property to decide which DB host to talk to:
 {code:xml}
 <dataSource driver="org.postgresql.Driver" name="meta" url="jdbc:postgresql://${dbhost}/${solr.core.name}" .../>
 {code}
 When I change that {{dbhost}} property in {{solrcore.properties}} and reload 
 the core, the next dataimport operation still connects to the previous DB 
 host. Reloading the dataimport config does not help. I have to unload the 
 core (or fully restart the whole Solr) for the properties change to take 
 effect.






[jira] [Commented] (LUCENE-5560) Cleanup charset handling for Java 7

2014-03-31 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955132#comment-13955132
 ] 

ASF subversion and git services commented on LUCENE-5560:
-

Commit 1583315 from [~thetaphi] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1583315 ]

Merged revision(s) 1583302 from lucene/dev/trunk:
LUCENE-5560: Cleanup charset handling for Java 7

 Cleanup charset handling for Java 7
 ---

 Key: LUCENE-5560
 URL: https://issues.apache.org/jira/browse/LUCENE-5560
 Project: Lucene - Core
  Issue Type: Task
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5560.patch, LUCENE-5560.patch, LUCENE-5560.patch, 
 LUCENE-5560.patch


 As we are now on Java 7, we should cleanup our charset handling to use the 
 official constants added by Java 7: {{StandardCharsets}}
 This issue is just a small code refactoring, trying to nuke the IOUtils 
 constants and replace them with the official ones provided by Java 7.






[jira] [Comment Edited] (SOLR-5931) solrcore.properties is not reloaded when core is reloaded

2014-03-31 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955131#comment-13955131
 ] 

Erick Erickson edited comment on SOLR-5931 at 3/31/14 12:30 PM:


I'm pretty sure this never worked. That said, I agree that it _should_ work.

Shalin:
So don't waste any time looking for what changed to break this functionality - 
I strongly doubt you'll find anything like that. This is just new 
functionality...

[~romseygeek] do you agree?




was (Author: erickerickson):
I'm pretty sure this never worked. That said, I agree that it _should_ work.

Shalin:
So dont' waste any time looking for what changed to break this functionality, 
I strongly doubt you'll find anything like that. This is just new 
functionality...



 solrcore.properties is not reloaded when core is reloaded
 -

 Key: SOLR-5931
 URL: https://issues.apache.org/jira/browse/SOLR-5931
 Project: Solr
  Issue Type: Bug
  Components: multicore
Affects Versions: 4.7
Reporter: Gunnlaugur Thor Briem
Assignee: Shalin Shekhar Mangar
Priority: Minor

 When I change solrcore.properties for a core, and then reload the core, the 
 previous values of the properties in that file are still in effect. If I 
 *unload* the core and then add it back, in the “Core Admin” section of the 
 admin UI, then the changes in solrcore.properties do take effect.
 My specific test case is a DataImportHandler where {{db-data-config.xml}} 
 uses a property to decide which DB host to talk to:
 {code:xml}
 <dataSource driver="org.postgresql.Driver" name="meta" url="jdbc:postgresql://${dbhost}/${solr.core.name}" .../>
 {code}
 When I change that {{dbhost}} property in {{solrcore.properties}} and reload 
 the core, the next dataimport operation still connects to the previous DB 
 host. Reloading the dataimport config does not help. I have to unload the 
 core (or fully restart the whole Solr) for the properties change to take 
 effect.






[jira] [Resolved] (LUCENE-5560) Cleanup charset handling for Java 7

2014-03-31 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler resolved LUCENE-5560.
---

Resolution: Fixed

 Cleanup charset handling for Java 7
 ---

 Key: LUCENE-5560
 URL: https://issues.apache.org/jira/browse/LUCENE-5560
 Project: Lucene - Core
  Issue Type: Task
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5560.patch, LUCENE-5560.patch, LUCENE-5560.patch, 
 LUCENE-5560.patch


 As we are now on Java 7, we should cleanup our charset handling to use the 
 official constants added by Java 7: {{StandardCharsets}}
 This issue is just a small code refactoring, trying to nuke the IOUtils 
 constants and replace them with the official ones provided by Java 7.






[jira] [Commented] (LUCENE-2446) Add checksums to Lucene segment files

2014-03-31 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955138#comment-13955138
 ] 

Uwe Schindler commented on LUCENE-2446:
---

Great!

I am not so happy about the method name {{validate()}}. Especially as it is in 
AtomicReader, people might think (because of the name) it's just some cheap 
check. Yes, I know there are Javadocs, but IDE suggestions may lead people to 
use it!

I have not yet thought thoroughly about a better name; maybe {{checkIntegrity()}}?

 Add checksums to Lucene segment files
 -

 Key: LUCENE-2446
 URL: https://issues.apache.org/jira/browse/LUCENE-2446
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Reporter: Lance Norskog
  Labels: checksum
 Attachments: LUCENE-2446.patch


 It would be useful for the different files in a Lucene index to include 
 checksums. This would make it easy to spot corruption while copying index 
 files around; the various cloud efforts assume many more data-copying 
 operations than older single-index implementations.
 This feature might be much easier to implement if all index files are created 
 in a sequential fashion. This issue therefore depends on [LUCENE-2373].






[jira] [Commented] (LUCENE-2446) Add checksums to Lucene segment files

2014-03-31 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955141#comment-13955141
 ] 

Uwe Schindler commented on LUCENE-2446:
---

bq. Is CRC32 intrinsic available and the same on all relevant platforms ?

It does not matter what the JVM does internally. The public API for CRC32 is 
there in the java.util.zip package and we have used it since the early 
beginning of Lucene. If it suddenly returned incorrect/different results, it 
would violate the ZIP spec (which is indirectly an ISO standard - through the 
OpenDocument ISO standard).

 Add checksums to Lucene segment files
 -

 Key: LUCENE-2446
 URL: https://issues.apache.org/jira/browse/LUCENE-2446
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Reporter: Lance Norskog
  Labels: checksum
 Attachments: LUCENE-2446.patch


 It would be useful for the different files in a Lucene index to include 
 checksums. This would make it easy to spot corruption while copying index 
 files around; the various cloud efforts assume many more data-copying 
 operations than older single-index implementations.
 This feature might be much easier to implement if all index files are created 
 in a sequential fashion. This issue therefore depends on [LUCENE-2373].






[jira] [Commented] (LUCENE-2446) Add checksums to Lucene segment files

2014-03-31 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955142#comment-13955142
 ] 

Erick Erickson commented on LUCENE-2446:


bq: I am not so happy about the method name

Why not a method name that indicates the function used? Something like 
checkIntegrityUsingRobertsNiftyNewCRC32Code?

Hmmm, a little long, but what about something like
validateCRC32
or
checkIntegrityCRC32







[jira] [Commented] (LUCENE-2446) Add checksums to Lucene segment files

2014-03-31 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955143#comment-13955143
 ] 

Uwe Schindler commented on LUCENE-2446:
---

Although it's currently only doing file integrity checks, we may extend this 
later! Maybe remove more parts of the CheckIndex class and move them to the 
Codec level?

Another idea: Move it up to IndexReader, so it would also work on 
DirectoryReaders? This would help with cleaning up stuff from CheckIndex.







[jira] [Comment Edited] (LUCENE-2446) Add checksums to Lucene segment files

2014-03-31 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955143#comment-13955143
 ] 

Uwe Schindler edited comment on LUCENE-2446 at 3/31/14 12:44 PM:
-

Although it's currently only doing file integrity checks, we may extend this 
later! Maybe remove more parts of the CheckIndex class and move them to the 
Codec level?

Another idea: Move {{validate()/checkIntegrity()}} up to IndexReader, so it 
would also work on DirectoryReaders? This would help with cleaning up stuff 
from CheckIndex.


was (Author: thetaphi):
Although its currently only doing file integrity checks, we may extend this 
later! Maybe remove more parts of CheckIndex class and move it to Codec level?

Another idea: Move it up to IndexReader, so it would also work on 
DirectoryReaders? This would help with cleaning up stuff from CheckIndex.







[jira] [Created] (SOLR-5939) Wrong request potentially on Error from StreamingSolrServer

2014-03-31 Thread Per Steffensen (JIRA)
Per Steffensen created SOLR-5939:


 Summary: Wrong request potentially on Error from 
StreamingSolrServer
 Key: SOLR-5939
 URL: https://issues.apache.org/jira/browse/SOLR-5939
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 5.0
Reporter: Per Steffensen


In _StreamingSolrServer.getSolrServer|ConcurrentUpdateSolrServer.handleError_ 
the _SolrCmdDistributor.Req req_ parameter is used for the _req_ field of all 
_error_'s created. This is also true for subsequent requests sent through the 
returned ConcurrentUpdateSolrServer. This means, among other things, that the 
wrong request (the first request sent through this 
_ConcurrentUpdateSolrServer_) may be retried in case of errors executing one 
of the subsequent requests.
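A minimal, self-contained sketch of the failure mode described above - all 
class names here are illustrative stand-ins, not Solr's actual types:

```java
import java.util.ArrayList;
import java.util.List;

public class SharedReqSketch {
    static class Req { final String id; Req(String id) { this.id = id; } }
    static class Error { final Req req; Error(Req req) { this.req = req; } }

    interface ErrorHandler { void onError(Exception e); }

    static final List<Error> errors = new ArrayList<>();

    // The handler is created once per server/URL and closes over the FIRST
    // request; every later failure on that server is then blamed on it.
    static ErrorHandler handlerFor(Req first) {
        return e -> errors.add(new Error(first)); // bug: always records 'first'
    }

    public static void main(String[] args) {
        ErrorHandler h = handlerFor(new Req("req-1"));
        // A later, different request fails on the same server...
        h.onError(new RuntimeException("req-2 failed"));
        // ...but the recorded error points at req-1, so req-1 may be retried.
        System.out.println(errors.get(0).req.id); // prints req-1
    }
}
```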






[jira] [Commented] (LUCENE-5560) Cleanup charset handling for Java 7

2014-03-31 Thread Ahmet Arslan (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955145#comment-13955145
 ] 

Ahmet Arslan commented on LUCENE-5560:
--

Hi Uwe, is this a Lucene-only cleanup? Do Solr classes need a separate issue?

 Cleanup charset handling for Java 7
 ---

 Key: LUCENE-5560
 URL: https://issues.apache.org/jira/browse/LUCENE-5560
 Project: Lucene - Core
  Issue Type: Task
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5560.patch, LUCENE-5560.patch, LUCENE-5560.patch, 
 LUCENE-5560.patch


 As we are now on Java 7, we should clean up our charset handling to use the 
 official constants added by Java 7: {{StandardCharsets}}.
 This issue is just a small code refactoring, trying to nuke the IOUtils 
 constants and replace them with the official ones provided by Java 7.
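The gist of the proposed refactoring, sketched below - the StandardCharsets 
constants are the real JDK 7 API, while the wrapper class and methods are 
illustrative:

```java
import java.io.UnsupportedEncodingException;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class CharsetCleanup {
    // Pre-Java-7 style: a charset lookup by name, plus a checked exception
    // that can never sensibly happen for "UTF-8".
    static byte[] oldStyle(String s) throws UnsupportedEncodingException {
        return s.getBytes("UTF-8");
    }

    // Java 7 style: the StandardCharsets constant is guaranteed to exist on
    // every JVM, needs no lookup and throws no checked exception.
    static byte[] newStyle(String s) {
        return s.getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws UnsupportedEncodingException {
        // Both styles produce identical bytes; only the API surface differs.
        System.out.println(Arrays.equals(oldStyle("Lucene"), newStyle("Lucene"))); // prints true
    }
}
```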






[jira] [Updated] (SOLR-5939) Wrong request potentially on Error from StreamingSolrServer

2014-03-31 Thread Per Steffensen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Per Steffensen updated SOLR-5939:
-

Attachment: SOLR-5939_demo_problem.patch

Try running the test-suite (or just _BasicDistributedZkTest_) with the 
attached patch SOLR-5939_demo_problem.patch. It is not supposed to be 
committed - it just demonstrates the problem by showing where problems would 
have occurred if the requests had resulted in errors.







[jira] [Commented] (SOLR-4470) Support for basic http auth in internal solr requests

2014-03-31 Thread Per Steffensen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955148#comment-13955148
 ] 

Per Steffensen commented on SOLR-4470:
--

Regarding the following FIXME in StreamingSolrServers
{quote}
// FIXME Giving it the req here to use for created errors will not work, 
because this reg will
// be used on errors for all future requests sent to the same URL. Resulting in 
this first req
// for this URL to be resubmitted in SolrCmdDistributor.doRetriesIfNeeded when 
subsequent
// different requests for the same URL fail
{quote}
See SOLR-5939

 Support for basic http auth in internal solr requests
 -

 Key: SOLR-4470
 URL: https://issues.apache.org/jira/browse/SOLR-4470
 Project: Solr
  Issue Type: New Feature
  Components: clients - java, multicore, replication (java), SolrCloud
Affects Versions: 4.0
Reporter: Per Steffensen
Assignee: Jan Høydahl
  Labels: authentication, https, solrclient, solrcloud, ssl
 Fix For: 5.0

 Attachments: SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, 
 SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, 
 SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, 
 SOLR-4470.patch, SOLR-4470_branch_4x_r1452629.patch, 
 SOLR-4470_branch_4x_r1452629.patch, SOLR-4470_branch_4x_r145.patch, 
 SOLR-4470_trunk_r1568857.patch


 We want to protect any HTTP-resource (url). We want to require credentials no 
 matter what kind of HTTP-request you make to a Solr-node.
 It can fairly easily be achieved as described on 
 http://wiki.apache.org/solr/SolrSecurity. The problem is that Solr-nodes 
 also make internal requests to other Solr-nodes, and for those to work 
 credentials need to be provided as well.
 Ideally we would like to forward credentials from a particular request to 
 all the internal sub-requests it triggers. E.g. for search and update 
 requests.
 But there are also internal requests
 * that are only indirectly/asynchronously triggered from outside requests 
 (e.g. shard creation/deletion/etc. based on calls to the Collection API)
 * that do not in any way have a relation to an outside super-request (e.g. 
 replica syncing stuff)
 We would like to aim at a solution where original credentials are 
 forwarded when a request directly/synchronously triggers a subrequest, with 
 a fallback to configured internal credentials for the 
 asynchronous/non-rooted requests.
 In our solution we aim at only supporting basic http auth, but we would 
 like to make a framework around it, so that not too much refactoring is 
 needed if you later want to add support for other kinds of auth (e.g. 
 digest).
 We will work on a solution but created this JIRA issue early in order to get 
 input/comments from the community as early as possible.






[jira] [Commented] (LUCENE-2446) Add checksums to Lucene segment files

2014-03-31 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955154#comment-13955154
 ] 

Robert Muir commented on LUCENE-2446:
-

I don't want to do this, Uwe. I don't want all this CheckIndex stuff being run 
at merge. A big point here is to provide an option to prevent detection on 
merge.

If you read my description, I am unhappy about the current situation because 
it cannot be enabled by default for performance reasons. I want to fix the 
codec APIs so this is no longer the case. I don't want CheckIndex code moved 
into this method.







[jira] [Commented] (SOLR-5931) solrcore.properties is not reloaded when core is reloaded

2014-03-31 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955153#comment-13955153
 ] 

Alan Woodward commented on SOLR-5931:
-

I don't think it would have worked when I started looking at this code, but I 
can't speak for what it did before...

I guess what needs to be done is to add a 'reload' method to CoresLocator which 
will update the CoreDescriptor - at the moment CDs are immutable across the 
lifetime of the CoreContainer, except for create() and unload() calls (which is 
why this works if you unload and then create the core).
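A rough sketch of that idea with stand-in types - only the names CoresLocator 
and CoreDescriptor come from Solr; everything else (the map-backed property 
source, method names) is hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

public class ReloadSketch {
    // Stand-in for the on-disk solrcore.properties file.
    static final Map<String, String> onDisk = new HashMap<>();

    // Stand-in for Solr's CoreDescriptor: an immutable snapshot of core config.
    static final class CoreDescriptor {
        final Map<String, String> props;
        CoreDescriptor(Map<String, String> props) { this.props = new HashMap<>(props); }
    }

    // The proposed 'reload' hook: rather than mutating the existing descriptor,
    // re-read the property source and return a fresh descriptor - mirroring
    // what unload+create achieves today.
    interface CoresLocator {
        CoreDescriptor reload(CoreDescriptor current);
    }

    static String dbHostAfterEditAndReload() {
        onDisk.put("dbhost", "db1");
        CoresLocator locator = current -> new CoreDescriptor(onDisk);
        CoreDescriptor cd = locator.reload(null);   // initial load sees db1
        onDisk.put("dbhost", "db2");                // solrcore.properties edited
        cd = locator.reload(cd);                    // core reload picks up db2
        return cd.props.get("dbhost");
    }

    public static void main(String[] args) {
        System.out.println(dbHostAfterEditAndReload()); // prints db2
    }
}
```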







[jira] [Commented] (LUCENE-2446) Add checksums to Lucene segment files

2014-03-31 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955162#comment-13955162
 ] 

Uwe Schindler commented on LUCENE-2446:
---

Robert, sorry if I threw some unrelated ideas in here: my idea does not have 
much to do with the work here. It just came to my mind when I saw the 
validate() method, which made me afraid.

Now just throwing in something, which does not have to affect *you* while 
implementing checksumming - just think about it as a separate issue, inspired 
by this one:

{panel:title=Uwe's unrelated 
idea|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1|bgColor=#CE}
In general, CheckIndex as a separate class is wrong in my opinion, especially 
if it does checks that are not valid for all codecs (using instanceof checks 
in CheckIndex code - horrible). A really good CheckIndex should work like fsck 
- implemented by the file system driver + filesystem code. So the checks 
currently done by CheckIndex should be done by the codecs (only general checks 
may work on top - like numDocs checks, etc.)
{panel}

Now back to this issue and the initial suggestion: validate = checkIntegrity. 
What do you think?







[jira] [Commented] (LUCENE-5560) Cleanup charset handling for Java 7

2014-03-31 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955167#comment-13955167
 ] 

Uwe Schindler commented on LUCENE-5560:
---

bq. Hi Uwe, is this lucene-only cleanup? Do solr classes need a separate issue?

Look at the patch. I refactored many things in Solr, too. The remaining 
problems are only caused by the outdated commons-io.jar file, which does not 
support Java Charset instances. In my opinion, we should nuke commons-io 
completely from Solr.







[jira] [Commented] (SOLR-4470) Support for basic http auth in internal solr requests

2014-03-31 Thread Per Steffensen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955165#comment-13955165
 ] 

Per Steffensen commented on SOLR-4470:
--

bq. Do we really need to pass credentials around everywhere?

They really are not passed around

bq. What David Webster described sounds like a much cleaner approach?

Do you really think so!?! IMHO it sounds much more complicated. The patch on 
SOLR-4470 is really very simple with respect to changes in non-test code, it 
makes it very easy to set up the addition of credentials to outgoing requests, 
it does not require any components other than Solr running in your 
infrastructure, and it does not require including and configuring any 
3rd-party libraries.

bq. The problem is on the Sending side (client)

This patch only deals with the sending side. The actual authentication and 
authorization of incoming requests is not handled by Solr (and probably 
shouldn't be). As long as we run in a servlet container (e.g. Tomcat, but also 
any other certified servlet container) you can set up your security in 
web.xml and use common LoginModules (or customize your own if you want to).

bq. That involved four lines of code in the actual SOLR code ...

This patch is not much more than those four lines, except that we support 
setting up credentials in solr.xml and calculating credentials from the 
super-request. Besides that, just thorough testing.

bq. Now, if they ever move SOLR out of the Servlet container into a stand alone 
implementation.we have a problem with our approach, and have to take this 
patch's full approach.

If they move Solr out of the servlet container, hopefully they will support 
some other way of setting up protection against incoming requests. But this 
patch will not help you; this issue is only about adding credentials to 
outgoing requests.
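To illustrate the "sending side" point, here is a minimal sketch of attaching 
Basic auth credentials to an outgoing HTTP request using the JDK 11+ 
java.net.http client - the URL and credentials are made up, and Solr's actual 
client wiring differs:

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthSketch {
    // Build the Authorization header value for HTTP Basic auth (RFC 2617):
    // "Basic " + base64("user:password").
    static String basicAuth(String user, String password) {
        String token = Base64.getEncoder()
            .encodeToString((user + ":" + password).getBytes(StandardCharsets.UTF_8));
        return "Basic " + token;
    }

    public static void main(String[] args) {
        // Attach the header to an outgoing request; host, path and credentials
        // here are purely illustrative.
        HttpRequest req = HttpRequest.newBuilder(URI.create("http://localhost:8983/solr/admin/ping"))
            .header("Authorization", basicAuth("solr", "secret"))
            .GET()
            .build();
        System.out.println(req.headers().firstValue("Authorization").orElse(""));
    }
}
```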







[jira] [Commented] (LUCENE-2446) Add checksums to Lucene segment files

2014-03-31 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955170#comment-13955170
 ] 

Shai Erera commented on LUCENE-2446:


I think {{checkIntegrity}} is too generic (i.e. it does not check the 
integrity of the posting lists, a la what CheckIndex does). How about 
{{validateChecksums}}, since that's what it currently does? I don't mind it 
being renamed in the future if the method validates more things; we shouldn't 
worry about back-compat with this API.







[jira] [Commented] (LUCENE-2446) Add checksums to Lucene segment files

2014-03-31 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955173#comment-13955173
 ] 

Robert Muir commented on LUCENE-2446:
-

You guys figure out the name for the method, I really don't care. I will wait 
on the issue until you guys bikeshed it out. The only thing I care about is 
that it's not mixed up with other logic.

It's really important when debugging a corrupt index to know whether the file 
itself is corrupted (e.g. hardware) or it's some bug in Lucene. It's also 
really important that there is at least the option (like there is in this 
patch) to prevent this corruption from being propagated in a merge. So let's 
please not mix unrelated stuff in here.

Things like refactoring CheckIndex or whatever: please just open a separate 
issue for that :)







[jira] [Commented] (SOLR-4470) Support for basic http auth in internal solr requests

2014-03-31 Thread Per Steffensen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955175#comment-13955175
 ] 

Per Steffensen commented on SOLR-4470:
--

Sorry about the slow response, [~janhoy], but I am very busy making sure Solrs 
do not lose their ZooKeeper connections under high load.







[jira] [Comment Edited] (SOLR-4470) Support for basic http auth in internal solr requests

2014-03-31 Thread Per Steffensen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955165#comment-13955165
 ] 

Per Steffensen edited comment on SOLR-4470 at 3/31/14 1:37 PM:
---

bq. Do we really need to pass credentials around everywhere?

They really are not passed around

bq. What David Webster described sounds like a much cleaner approach?

Do you really think so!?! IMHO it sounds much more complicated. The patch on 
SOLR-4470 is really very simple with respect to changes in non-test code, it 
makes it very easy to set up the addition of credentials to outgoing requests, 
it does not require any components other than Solr running in your 
infrastructure, and it does not require including and configuring any 
3rd-party libraries.

bq. The problem is on the Sending side (client)

This patch only deals with the sending side. The actual authentication and 
authorization of incoming requests is not handled by Solr (and probably 
shouldn't be). As long as we run in a servlet container (e.g. Tomcat, but also 
any other certified servlet container) you can set up your security in 
web.xml and use common LoginModules (or customize your own if you want to).

bq. That involved four lines of code in the actual SOLR code ...

This patch is not much more than those four lines, except that we support 
setting up credentials in solr.xml and calculating credentials from the 
super-request. Besides that, just thorough testing.

bq. Now, if they ever move SOLR out of the Servlet container into a stand alone 
implementation.we have a problem with our approach, and have to take this 
patch's full approach.

If they move Solr out of the servlet container, hopefully they will support 
some other way of setting up protection against incoming requests. But this 
patch will not help you; this issue is only about adding credentials to 
outgoing requests.


was (Author: steff1193):
bq. Do we really need to pass credentials around everywhere?

They really are not passed around.

bq. What David Webster described sounds like a much cleaner approach?

Do you really think so!?! IMHO it sounds much more complicated. The patch for 
SOLR-4470 is really very simple with respect to changes in non-test code; it 
will make it very easy to set up the addition of credentials to outgoing 
requests, it does not require any other components then Solr running in your 
infrastructure, and it will not require including and configuring any 
3rd-party libraries.

bq. The problem is on the Sending side (client)

This patch only deals with the sending side. The actual authentication and 
authorization of incoming requests is not handled by Solr (and probably 
shouldn't be). As long as we run in a servlet container (e.g. Tomcat, but also 
all other certified servlet containers) you can set up your security in 
web.xml and use common LoginModules (or customize your own if you want to).

bq. That involved four lines of code in the actual SOLR code ...

This patch is not much more than those four lines, except that we support 
setting up credentials in solr.xml and calculating credentials given the 
super-request. Besides that, just thorough testing.

bq. Now, if they ever move SOLR out of the Servlet container into a stand 
alone implementation... we have a problem with our approach, and have to take 
this patch's full approach.

If they move Solr out of the servlet container, hopefully they will support 
some other way of protecting against incoming requests, but this patch will 
not help you there. This issue is only about adding credentials to outgoing 
requests.
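As a sketch of what "adding credentials to outgoing requests" amounts to, here is a minimal, hypothetical helper that builds the HTTP Basic Authorization header value. It is illustrative only and is not the API introduced by the patch; how the credentials are configured (e.g. via solr.xml) is a separate concern.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Builds the HTTP Basic auth header value for an outgoing request:
// "Basic " + base64(user + ":" + password), per the Basic scheme.
final class BasicAuthHeader {
    static String value(String user, String password) {
        String pair = user + ":" + password;
        String encoded = Base64.getEncoder()
            .encodeToString(pair.getBytes(StandardCharsets.UTF_8));
        return "Basic " + encoded;
    }
}
```

The resulting string would be set as the `Authorization` header on each internal request.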

 Support for basic http auth in internal solr requests
 -

 Key: SOLR-4470
 URL: https://issues.apache.org/jira/browse/SOLR-4470
 Project: Solr
  Issue Type: New Feature
  Components: clients - java, multicore, replication (java), SolrCloud
Affects Versions: 4.0
Reporter: Per Steffensen
Assignee: Jan Høydahl
  Labels: authentication, https, solrclient, solrcloud, ssl
 Fix For: 5.0

 Attachments: SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, 
 SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, 
 SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, SOLR-4470.patch, 
 SOLR-4470.patch, SOLR-4470_branch_4x_r1452629.patch, 
 SOLR-4470_branch_4x_r1452629.patch, SOLR-4470_branch_4x_r145.patch, 
 SOLR-4470_trunk_r1568857.patch


 We want to protect any HTTP resource (URL). We want to require credentials no 
 matter what kind of HTTP request you make to a Solr node.
 It can fairly easily be achieved as described at 
 http://wiki.apache.org/solr/SolrSecurity. The problem is that Solr nodes 
 also make internal requests to other Solr nodes, and for it to work, 
 credentials need 

[jira] [Updated] (SOLR-5894) Speed up high-cardinality facets with sparse counters

2014-03-31 Thread Toke Eskildsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toke Eskildsen updated SOLR-5894:
-

Attachment: sparse_500docs_20140331-151918_single.png
sparse_500docs_20140331-151918_multi.png
SOLR-5894_test.zip
SOLR-5894.patch

Patch update: Sparse counting is now possible for the field cache, with and 
without DocValues, for single- as well as multi-valued fields. Segment-based 
field cache faceting support is not implemented yet. 

See the attached graphs sparse_500docs_20140331-151918_multi.png and 
sparse_500docs_20140331-151918_single.png for experiments with 5M values, 
multi-value (3 values/doc) and single-value. Note that the y-axis is now 
logarithmic. It seems DocValues benefits nearly as much from sparse counters as 
non-DocValues.

Also note that these measurements are from an artificially large sparse tracker 
(100% overhead) and as such are not representative for a realistic setup.

 Speed up high-cardinality facets with sparse counters
 -

 Key: SOLR-5894
 URL: https://issues.apache.org/jira/browse/SOLR-5894
 Project: Solr
  Issue Type: Improvement
  Components: SearchComponents - other
Affects Versions: 4.6.1, 4.7
Reporter: Toke Eskildsen
Priority: Minor
 Fix For: 4.6.1

 Attachments: SOLR-5894.patch, SOLR-5894.patch, SOLR-5894.patch, 
 SOLR-5894.patch, SOLR-5894.patch, SOLR-5894_test.zip, SOLR-5894_test.zip, 
 SOLR-5894_test.zip, author_7M_tags_1852_logged_queries_warmed.png, 
 sparse_500docs_20140331-151918_multi.png, 
 sparse_500docs_20140331-151918_single.png, 
 sparse_5051docs_20140328-152807.png


 Field-based faceting in Solr has two phases: collecting counts for tags in 
 facets and extracting the requested tags.
 The execution time for the collecting phase is approximately linear in the 
 number of hits and the number of references from hits to tags. This phase is 
 not the focus here.
 The extraction time scales with the number of unique tags in the search 
 result, but is also heavily influenced by the total number of unique tags in 
 the facet as every counter, 0 or not, is visited by the extractor (at least 
 for count order). For fields with millions of unique tag values this means 
 10s of milliseconds added to the minimum response time (see 
 https://sbdevel.wordpress.com/2014/03/18/sparse-facet-counting-on-a-real-index/
  for a test on a corpus with 7M unique values in the facet).
 The extractor needs to visit every counter due to the current counter 
 structure being a plain int array of size #unique_tags. Switching to a sparse 
 structure, where only the tag counters > 0 are visited, makes the extraction 
 time linear in the number of unique tags in the result set.
 Unfortunately the number of unique tags in the result set is unknown at 
 collect time, so it is not possible to reliably select sparse counting vs. 
 full counting up front. Luckily there exist solutions for sparse sets that 
 have the property of switching to non-sparse mode without a switch penalty 
 when the sparse threshold is exceeded (see 
 http://programmingpraxis.com/2012/03/09/sparse-sets/ for an example). This 
 JIRA aims to implement this functionality in Solr (a proof of concept patch 
 will be provided shortly).
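A minimal Java sketch of the switch-penalty-free approach described above (names and structure are illustrative assumptions, not code from the actual patch): counts are always fully maintained, so overflowing the "touched" list costs nothing extra, and extraction simply falls back to scanning all counters when the threshold is exceeded.

```java
// Sparse facet counter sketch: tracks which counters are non-zero so
// extraction can visit only the touched tags instead of all #unique_tags.
final class SparseCounter {
    private final int[] counts;   // counts[tag] = number of occurrences of tag
    private final int[] touched;  // tags seen so far with counts[tag] > 0
    private int numTouched;       // valid prefix length of 'touched'
    private final int threshold;  // max touched tags for sparse extraction

    SparseCounter(int maxTags, int threshold) {
        this.counts = new int[maxTags];
        this.touched = new int[Math.min(maxTags, threshold + 1)];
        this.threshold = threshold;
    }

    void increment(int tag) {
        if (counts[tag]++ == 0 && numTouched < touched.length) {
            touched[numTouched++] = tag; // first hit for this tag: remember it
        }
    }

    /** True if extraction can visit only the touched tags. */
    boolean isSparse() { return numTouched <= threshold; }

    int numTouched() { return numTouched; }
    int touchedTag(int i) { return touched[i]; }
    int count(int tag) { return counts[tag]; }
}
```

When `isSparse()` returns false, the caller iterates the full `counts` array exactly as before, so the worst case is no slower than the plain int-array approach.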



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5562) LuceneSuggester does not work on Android

2014-03-31 Thread Giovanni Cuccu (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Cuccu updated LUCENE-5562:
---

Attachment: Sort.java

 LuceneSuggester does not work on Android
 

 Key: LUCENE-5562
 URL: https://issues.apache.org/jira/browse/LUCENE-5562
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.7
 Environment: Android 4.4.2
Reporter: Giovanni Cuccu
Priority: Minor
 Attachments: AnalyzingSuggester.java, Sort.java


 I'm developing an application on Android and I'm using Lucene for indexing 
 and searching. When I try to use AnalyzingSuggester (even the Fuzzy version) 
 I get an Exception: the BufferedOutputStream is already closed.
 I tracked the problem, and it seems that in
 org.apache.lucene.search.suggest.Sort
 and in org.apache.lucene.search.suggest.analyzing.AnalyzingSuggester
 the output stream is closed twice, hence the exception on Android. 
 The same code on Windows runs without a problem.
 It seems that the Android JVM does some additional checks. I attach two 
 patch files; the classes close the output stream once (check for 
 writerClosed in the code to see what I did).






[jira] [Updated] (LUCENE-5562) LuceneSuggester does not work on Android

2014-03-31 Thread Giovanni Cuccu (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Cuccu updated LUCENE-5562:
---

Attachment: AnalyzingSuggester.java

 LuceneSuggester does not work on Android
 

 Key: LUCENE-5562
 URL: https://issues.apache.org/jira/browse/LUCENE-5562
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.7
 Environment: Android 4.4.2
Reporter: Giovanni Cuccu
Priority: Minor
 Attachments: AnalyzingSuggester.java, Sort.java


 I'm developing an application on Android and I'm using Lucene for indexing 
 and searching. When I try to use AnalyzingSuggester (even the Fuzzy version) 
 I get an Exception: the BufferedOutputStream is already closed.
 I tracked the problem, and it seems that in
 org.apache.lucene.search.suggest.Sort
 and in org.apache.lucene.search.suggest.analyzing.AnalyzingSuggester
 the output stream is closed twice, hence the exception on Android. 
 The same code on Windows runs without a problem.
 It seems that the Android JVM does some additional checks. I attach two 
 patch files; the classes close the output stream once (check for 
 writerClosed in the code to see what I did).






[jira] [Created] (LUCENE-5562) LuceneSuggester does not work on Android

2014-03-31 Thread Giovanni Cuccu (JIRA)
Giovanni Cuccu created LUCENE-5562:
--

 Summary: LuceneSuggester does not work on Android
 Key: LUCENE-5562
 URL: https://issues.apache.org/jira/browse/LUCENE-5562
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.7
 Environment: Android 4.4.2
Reporter: Giovanni Cuccu
Priority: Minor
 Attachments: AnalyzingSuggester.java, Sort.java

I'm developing an application on Android and I'm using Lucene for indexing and 
searching. When I try to use AnalyzingSuggester (even the Fuzzy version) I get 
an Exception: the BufferedOutputStream is already closed.
I tracked the problem, and it seems that in
org.apache.lucene.search.suggest.Sort
and in org.apache.lucene.search.suggest.analyzing.AnalyzingSuggester
the output stream is closed twice, hence the exception on Android. 
The same code on Windows runs without a problem.
It seems that the Android JVM does some additional checks. I attach two patch 
files; the classes close the output stream once (check for writerClosed in the 
code to see what I did).






[jira] [Commented] (LUCENE-5562) LuceneSuggester does not work on Android

2014-03-31 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955184#comment-13955184
 ] 

Uwe Schindler commented on LUCENE-5562:
---

This is not a bug in Lucene. The Java Closeable interface states in 
[http://docs.oracle.com/javase/7/docs/api/java/io/Closeable.html#close()]:

bq. Closes this stream and releases any system resources associated with it. If 
the stream is already closed then invoking this method has no effect.

If the implementation on Android does not honor this contract, it is not a 
problem in Lucene.
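The contract quoted above can be illustrated with a guard that makes a second close() a no-op (a hypothetical example, not code from Lucene or Android):

```java
import java.io.Closeable;
import java.io.IOException;

// A Closeable that honors the Java contract: if the stream is already
// closed, invoking close() again has no effect, so callers (like Lucene)
// may safely close it more than once.
final class GuardedResource implements Closeable {
    private boolean closed = false;

    @Override
    public void close() throws IOException {
        if (closed) {
            return; // already closed: the spec says this must have no effect
        }
        closed = true;
        // ... release the underlying resource here ...
    }

    boolean isClosed() { return closed; }
}
```

A stream implementation that instead throws on the second close() is what produces the failure reported here.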

 LuceneSuggester does not work on Android
 

 Key: LUCENE-5562
 URL: https://issues.apache.org/jira/browse/LUCENE-5562
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.7
 Environment: Android 4.4.2
Reporter: Giovanni Cuccu
Priority: Minor
 Attachments: AnalyzingSuggester.java, Sort.java


 I'm developing an application on Android and I'm using Lucene for indexing 
 and searching. When I try to use AnalyzingSuggester (even the Fuzzy version) 
 I get an Exception: the BufferedOutputStream is already closed.
 I tracked the problem, and it seems that in
 org.apache.lucene.search.suggest.Sort
 and in org.apache.lucene.search.suggest.analyzing.AnalyzingSuggester
 the output stream is closed twice, hence the exception on Android. 
 The same code on Windows runs without a problem.
 It seems that the Android JVM does some additional checks. I attach two 
 patch files; the classes close the output stream once (check for 
 writerClosed in the code to see what I did).






[jira] [Updated] (SOLR-5473) Make one state.json per collection

2014-03-31 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-5473:


Attachment: SOLR-5473-74.patch

All solr core tests pass with this patch. However, there is a SolrJ test 
failure in CloudSolrServerTest on asserts added by SOLR-5715

{code}
  [junit4]   2 NOTE: reproduce with: ant test  -Dtestcase=CloudSolrServerTest 
-Dtests.method=testDistribSearch -Dtests.seed=5FAC2B1757C387B3 
-Dtests.slow=true -Dtests.locale=sv_SE -Dtests.timezone=Pacific/Samoa 
-Dtests.file.encoding=US-ASCII
   [junit4] FAILURE 22.1s J1 | CloudSolrServerTest.testDistribSearch 
   [junit4] FAILURE Throwable #1: java.lang.AssertionError: Unexpected number of 
requests to expected URLs expected:<6> but was:<0>
   [junit4]at 
__randomizedtesting.SeedInfo.seed([5FAC2B1757C387B3:DE4AA50F209CE78F]:0)
   [junit4]at 
org.apache.solr.client.solrj.impl.CloudSolrServerTest.doTest(CloudSolrServerTest.java:300)
   [junit4]at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:867)
   [junit4]at java.lang.Thread.run(Thread.java:744)
   [junit4]   2 26918 T10 oas.SolrTestCaseJ4.deleteCore ###deleteCore
{code}

 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node






[jira] [Comment Edited] (SOLR-5473) Make one state.json per collection

2014-03-31 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955186#comment-13955186
 ] 

Shalin Shekhar Mangar edited comment on SOLR-5473 at 3/31/14 1:55 PM:
--

All Solr core tests pass with this patch. After discussing offline with Noble, 
I introduced a new method, ClusterState.getCachedReplica, which is exactly like 
getReplica except that it only fetches information available in locally cached 
data and never hits ZK. The older getReplica and the new getCachedReplica 
method are used only by the SolrLogLayout and SolrLogFormatter classes, so 
these should never hit ZK anyway.

However, there is a SolrJ test failure in CloudSolrServerTest on asserts added 
by SOLR-5715

{code}
  [junit4]   2 NOTE: reproduce with: ant test  -Dtestcase=CloudSolrServerTest 
-Dtests.method=testDistribSearch -Dtests.seed=5FAC2B1757C387B3 
-Dtests.slow=true -Dtests.locale=sv_SE -Dtests.timezone=Pacific/Samoa 
-Dtests.file.encoding=US-ASCII
   [junit4] FAILURE 22.1s J1 | CloudSolrServerTest.testDistribSearch 
   [junit4] FAILURE Throwable #1: java.lang.AssertionError: Unexpected number of 
requests to expected URLs expected:<6> but was:<0>
   [junit4]at 
__randomizedtesting.SeedInfo.seed([5FAC2B1757C387B3:DE4AA50F209CE78F]:0)
   [junit4]at 
org.apache.solr.client.solrj.impl.CloudSolrServerTest.doTest(CloudSolrServerTest.java:300)
   [junit4]at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:867)
   [junit4]at java.lang.Thread.run(Thread.java:744)
   [junit4]   2 26918 T10 oas.SolrTestCaseJ4.deleteCore ###deleteCore
{code}


was (Author: shalinmangar):
All solr core tests pass with this patch. However, there is a SolrJ test 
failure in CloudSolrServerTest on asserts added by SOLR-5715

{code}
  [junit4]   2 NOTE: reproduce with: ant test  -Dtestcase=CloudSolrServerTest 
-Dtests.method=testDistribSearch -Dtests.seed=5FAC2B1757C387B3 
-Dtests.slow=true -Dtests.locale=sv_SE -Dtests.timezone=Pacific/Samoa 
-Dtests.file.encoding=US-ASCII
   [junit4] FAILURE 22.1s J1 | CloudSolrServerTest.testDistribSearch 
   [junit4] FAILURE Throwable #1: java.lang.AssertionError: Unexpected number of 
requests to expected URLs expected:<6> but was:<0>
   [junit4]at 
__randomizedtesting.SeedInfo.seed([5FAC2B1757C387B3:DE4AA50F209CE78F]:0)
   [junit4]at 
org.apache.solr.client.solrj.impl.CloudSolrServerTest.doTest(CloudSolrServerTest.java:300)
   [junit4]at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:867)
   [junit4]at java.lang.Thread.run(Thread.java:744)
   [junit4]   2 26918 T10 oas.SolrTestCaseJ4.deleteCore ###deleteCore
{code}

 Make one state.json per collection
 --

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node






[jira] [Comment Edited] (LUCENE-5562) LuceneSuggester does not work on Android

2014-03-31 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955184#comment-13955184
 ] 

Uwe Schindler edited comment on LUCENE-5562 at 3/31/14 2:00 PM:


This is not a bug in Lucene. The Java Closeable contract makes close() 
idempotent, as its documentation states in 
[http://docs.oracle.com/javase/7/docs/api/java/io/Closeable.html#close()]:

bq. Closes this stream and releases any system resources associated with it. If 
the stream is already closed then invoking this method has no effect.

If the implementation on Android does not honor this contract, it is not a 
problem in Lucene.

Just to check if there is no other problem: Can you post the exact stack trace 
of the Exception on Android?

P.S.: Please note: Android is not Java-compatible, so Lucene does not guarantee 
that it works correctly on Android. We also don't test on Android. Lucene 4.8 
will require Java 7, so it is unlikely to work on Android anymore.


was (Author: thetaphi):
This is not a bug in Lucene. The Java Closeable interface states in 
[http://docs.oracle.com/javase/7/docs/api/java/io/Closeable.html#close()]:

bq. Closes this stream and releases any system resources associated with it. If 
the stream is already closed then invoking this method has no effect.

If the implementation on Android does not honor this contract, it is not a 
problem in Lucene.

 LuceneSuggester does not work on Android
 

 Key: LUCENE-5562
 URL: https://issues.apache.org/jira/browse/LUCENE-5562
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.7
 Environment: Android 4.4.2
Reporter: Giovanni Cuccu
Priority: Minor
 Attachments: AnalyzingSuggester.java, Sort.java


 I'm developing an application on Android and I'm using Lucene for indexing 
 and searching. When I try to use AnalyzingSuggester (even the Fuzzy version) 
 I get an Exception: the BufferedOutputStream is already closed.
 I tracked the problem, and it seems that in
 org.apache.lucene.search.suggest.Sort
 and in org.apache.lucene.search.suggest.analyzing.AnalyzingSuggester
 the output stream is closed twice, hence the exception on Android. 
 The same code on Windows runs without a problem.
 It seems that the Android JVM does some additional checks. I attach two 
 patch files; the classes close the output stream once (check for 
 writerClosed in the code to see what I did).






[jira] [Resolved] (LUCENE-5562) LuceneSuggester does not work on Android

2014-03-31 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler resolved LUCENE-5562.
---

Resolution: Not a Problem
  Assignee: Uwe Schindler

 LuceneSuggester does not work on Android
 

 Key: LUCENE-5562
 URL: https://issues.apache.org/jira/browse/LUCENE-5562
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.7
 Environment: Android 4.4.2
Reporter: Giovanni Cuccu
Assignee: Uwe Schindler
Priority: Minor
 Attachments: AnalyzingSuggester.java, Sort.java


 I'm developing an application on Android and I'm using Lucene for indexing 
 and searching. When I try to use AnalyzingSuggester (even the Fuzzy version) 
 I get an Exception: the BufferedOutputStream is already closed.
 I tracked the problem, and it seems that in
 org.apache.lucene.search.suggest.Sort
 and in org.apache.lucene.search.suggest.analyzing.AnalyzingSuggester
 the output stream is closed twice, hence the exception on Android. 
 The same code on Windows runs without a problem.
 It seems that the Android JVM does some additional checks. I attach two 
 patch files; the classes close the output stream once (check for 
 writerClosed in the code to see what I did).






[jira] [Updated] (LUCENE-5560) Cleanup charset handling for Java 7

2014-03-31 Thread Ahmet Arslan (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmet Arslan updated LUCENE-5560:
-

Attachment: LUCENE-5560.patch

bq. Look at the patch. I refactored many things in Solr, too. 
Sorry, I looked at https://svn.apache.org/r1583315 and it conceals the Solr 
changes at first glance.

URLDecoder does not accept Charset instances either.

Is it okay to use {code} URLDecoder.decode(, 
StandardCharsets.UTF_8.name()); {code} in such cases?

Is this patch usable in that sense?
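For APIs that still take a charset *name* rather than a Charset instance, the Java 7 pattern in question looks like this (an illustrative sketch, not the actual patch code; the helper names are assumptions):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

final class CharsetNameExample {
    // URLDecoder.decode(String, String) only accepts a charset name,
    // so the StandardCharsets constant is passed via name().
    static String decode(String encoded) {
        try {
            return URLDecoder.decode(encoded, StandardCharsets.UTF_8.name());
        } catch (UnsupportedEncodingException e) {
            throw new AssertionError(e); // UTF-8 is supported on every JVM
        }
    }

    static String encode(String raw) {
        try {
            return URLEncoder.encode(raw, StandardCharsets.UTF_8.name());
        } catch (UnsupportedEncodingException e) {
            throw new AssertionError(e); // UTF-8 is supported on every JVM
        }
    }
}
```

The checked UnsupportedEncodingException can never actually fire for UTF-8, which is why wrapping it in an AssertionError is a common idiom here.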

 Cleanup charset handling for Java 7
 ---

 Key: LUCENE-5560
 URL: https://issues.apache.org/jira/browse/LUCENE-5560
 Project: Lucene - Core
  Issue Type: Task
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5560.patch, LUCENE-5560.patch, LUCENE-5560.patch, 
 LUCENE-5560.patch, LUCENE-5560.patch


 As we are now on Java 7, we should clean up our charset handling to use the 
 official constants added by Java 7: {{StandardCharsets}}.
 This issue is just a small code refactoring, trying to nuke the IOUtils 
 constants and replace them with the official ones provided by Java 7.






[jira] [Updated] (SOLR-5859) Harden the Overseer restart mechanism

2014-03-31 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-5859:
-

Attachment: SOLR-5859.patch

Added a couple of tests. 
Check if the system is shutting down; if not, rejoin the election. 

The logging was added for debugging; I removed all that extra logging.

 Harden the Overseer restart mechanism
 -

 Key: SOLR-5859
 URL: https://issues.apache.org/jira/browse/SOLR-5859
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-5859.patch, SOLR-5859.patch, SOLR-5859.patch


 SOLR-5476 depends on Overseer restart. The current strategy is to remove the 
 ZK node for leader election, wait for STATUS_UPDATE_DELAY + 100 ms, and 
 start the new overseer.
 Though overseer ops are short-running, it is not a 100% foolproof strategy, 
 because if an operation takes longer than the wait period there can be a 
 race condition. 






[jira] [Updated] (SOLR-5939) Wrong request potentially on Error from StreamingSolrServer

2014-03-31 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-5939:
--

Fix Version/s: 5.0
   4.8

 Wrong request potentially on Error from StreamingSolrServer
 ---

 Key: SOLR-5939
 URL: https://issues.apache.org/jira/browse/SOLR-5939
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 5.0
Reporter: Per Steffensen
  Labels: error, retry
 Fix For: 4.8, 5.0

 Attachments: SOLR-5939_demo_problem.patch


 In _StreamingSolrServer.getSolrServer|ConcurrentUpdateSolrServer.handleError_ 
 the _SolrCmdDistributor.Req req_ parameter is used for the _req_ field of all 
 _error_s created. This is also true for subsequent requests sent through the 
 returned ConcurrentUpdateSolrServer. This means, among other things, that the 
 wrong request (the first request sent through this 
 _ConcurrentUpdateSolrServer_) may be retried in case of errors executing one 
 of the subsequent requests.






[jira] [Commented] (SOLR-5939) Wrong request potentially on Error from StreamingSolrServer

2014-03-31 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955212#comment-13955212
 ] 

Mark Miller commented on SOLR-5939:
---

This sounds so familiar - did you bring this up before in another issue?

 Wrong request potentially on Error from StreamingSolrServer
 ---

 Key: SOLR-5939
 URL: https://issues.apache.org/jira/browse/SOLR-5939
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 5.0
Reporter: Per Steffensen
  Labels: error, retry
 Fix For: 4.8, 5.0

 Attachments: SOLR-5939_demo_problem.patch


 In _StreamingSolrServer.getSolrServer|ConcurrentUpdateSolrServer.handleError_ 
 the _SolrCmdDistributor.Req req_ parameter is used for the _req_ field of all 
 _error_s created. This is also true for subsequent requests sent through the 
 returned ConcurrentUpdateSolrServer. This means, among other things, that the 
 wrong request (the first request sent through this 
 _ConcurrentUpdateSolrServer_) may be retried in case of errors executing one 
 of the subsequent requests.






[jira] [Commented] (SOLR-5488) Fix up test failures for Analytics Component

2014-03-31 Thread Steven Bower (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955223#comment-13955223
 ] 

Steven Bower commented on SOLR-5488:


Reviewed; looks good.

 Fix up test failures for Analytics Component
 

 Key: SOLR-5488
 URL: https://issues.apache.org/jira/browse/SOLR-5488
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.7, 5.0
Reporter: Erick Erickson
Assignee: Erick Erickson
 Attachments: SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, 
 SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, 
 SOLR-5488.patch, SOLR-5488.patch, eoe.errors


 The analytics component has a few test failures, perhaps 
 environment-dependent. This is just to collect the test fixes in one place 
 for convenience when we merge back into 4.x






[JENKINS] Lucene-trunk-Linux-java7-64-analyzers - Build # 1228 - Failure!

2014-03-31 Thread builder
Build: builds.flonkings.com/job/Lucene-trunk-Linux-java7-64-analyzers/1228/

1 tests failed.
REGRESSION:  org.apache.lucene.analysis.core.TestRandomChains.testRandomChains

Error Message:
some thread(s) failed

Stack Trace:
java.lang.RuntimeException: some thread(s) failed
	at org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:533)
	at org.apache.lucene.analysis.core.TestRandomChains.testRandomChains(TestRandomChains.java:901)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1617)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:826)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:862)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:876)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
	at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:359)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:783)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:443)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:835)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:737)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:771)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:782)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:359)
	at java.lang.Thread.run(Thread.java:724)




Build Log:
[...truncated 1115 lines...]
   [junit4] Suite: org.apache.lucene.analysis.core.TestRandomChains
   [junit4]   2> TEST FAIL: useCharFilter=true text='ura cobarde e amb\u00edgu'
   [junit4]   2> mar 31, 2014 11:07:06 AM com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler uncaughtException
   [junit4]   2> WARNING: Uncaught exception in thread: Thread[Thread-133,5,TGRP-TestRandomChains]
   [junit4]   2> java.lang.OutOfMemoryError: Java heap space
   [junit4]   2> 	at __randomizedtesting.SeedInfo.seed([AE9BD047EDEF]:0)
   [junit4]   2> 	at java.util.Arrays.copyOfRange(Arrays.java:2694)
   [junit4]   2> 	at java.lang.String.<init>(String.java:203)
   [junit4]   2> 

[jira] [Updated] (SOLR-5938) ConcurrentUpdateSolrServer don't parser the response while response status code isn't 200

2014-03-31 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-5938:
--

Fix Version/s: 5.0
   4.8

 ConcurrentUpdateSolrServer don't parser the response while response status 
 code isn't 200
 -

 Key: SOLR-5938
 URL: https://issues.apache.org/jira/browse/SOLR-5938
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.6.1
 Environment: one cloud with two servers and one shard (one leader, one 
 replica); the index is sent to the replica server, which forwards it to the 
 leader server.
Reporter: Raintung Li
  Labels: solrj
 Fix For: 4.8, 5.0

 Attachments: SOLR-5938.txt


 ConcurrentUpdateSolrServer only reports the error and doesn't parse the 
 response body, so you can't get the error reason from the remote server. 
 Example:
 You send an index request to one Solr server, and this server forwards it to 
 the leader server. The forwarding path goes through ConcurrentUpdateSolrServer.java, 
 so if an error happens you can't get the right error message without checking 
 the leader server itself, even though the leader actually sent the error 
 message back to the forwarding server.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5908) Make REQUESTSTATUS call non-blocking and non-blocked

2014-03-31 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955246#comment-13955246
 ] 

Mark Miller commented on SOLR-5908:
---

That makes sense to me - there doesn't seem to be any strong reason to send a 
status request to the OverseerCollectionProcessor.

 Make REQUESTSTATUS call non-blocking and non-blocked
 

 Key: SOLR-5908
 URL: https://issues.apache.org/jira/browse/SOLR-5908
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Anshum Gupta
Assignee: Anshum Gupta
 Attachments: SOLR-5908.patch


 Currently REQUESTSTATUS Collection API call is blocked by any other call in 
 the OCP work queue.
 Make it independent and non-blocked/non-blocking.
 This would be handled as a part of having the OCP multi-threaded but I'm 
 opening this issue to explore other possible options of handling this.
 If the final fix happens via SOLR-5681, will resolve it when SOLR-5681 gets 
 resolved.






[jira] [Commented] (LUCENE-5560) Cleanup charset handling for Java 7

2014-03-31 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955255#comment-13955255
 ] 

Uwe Schindler commented on LUCENE-5560:
---

Use Lucene's {{IOUtils.UTF_8}} for that use-case (see javadocs). This one uses 
the above method and provides the shortcut constant as a {{String}}. The commit 
does this partially: I did not rewrite all instances of the "UTF-8" string, so 
there are still many of them in tests (which does not hurt).

This also applies to commons-io stuff. But we should nuke commons-io in later 
issues! commons-io is mostly useless with later Java versions. And it has 
partly unmaintained, horrible methods which violate lots of standards 
(auto-closing, default charsets, ...).
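For readers following along, the Java 7 pattern Uwe refers to is simply swapping charset-name strings for the {{StandardCharsets}} constants. A minimal sketch (the class name is mine, not from the patch):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class CharsetCleanupDemo {
    public static void main(String[] args) throws Exception {
        // Pre-Java-7 style: a bare charset name that can be misspelled at
        // runtime and forces UnsupportedEncodingException handling.
        byte[] before = "tübingen".getBytes("UTF-8");

        // Java 7 style: a compile-time-checked Charset constant.
        byte[] after = "tübingen".getBytes(StandardCharsets.UTF_8);

        System.out.println(Arrays.equals(before, after)); // prints "true"
    }
}
```

The constant form also avoids the checked exception entirely, since `getBytes(Charset)` cannot fail on an unknown name.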

 Cleanup charset handling for Java 7
 ---

 Key: LUCENE-5560
 URL: https://issues.apache.org/jira/browse/LUCENE-5560
 Project: Lucene - Core
  Issue Type: Task
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 4.8, 5.0

 Attachments: LUCENE-5560.patch, LUCENE-5560.patch, LUCENE-5560.patch, 
 LUCENE-5560.patch, LUCENE-5560.patch


 As we are now on Java 7, we should cleanup our charset handling to use the 
 official constants added by Java 7: {{StandardCharsets}}
 This issue is just a small code refactoring, trying to nuke the IOUtils 
 constants and replace them with the official ones provided by Java 7.






[jira] [Commented] (SOLR-5488) Fix up test failures for Analytics Component

2014-03-31 Thread Houston Putman (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955264#comment-13955264
 ] 

Houston Putman commented on SOLR-5488:
--

The changes look good to me. Thanks for the fixes [~vzhovtiuk]. [~sbower] does 
the performance look the same? Just curious since we have switched maps and are 
sorting more.

 Fix up test failures for Analytics Component
 

 Key: SOLR-5488
 URL: https://issues.apache.org/jira/browse/SOLR-5488
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.7, 5.0
Reporter: Erick Erickson
Assignee: Erick Erickson
 Attachments: SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, 
 SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, 
 SOLR-5488.patch, SOLR-5488.patch, eoe.errors


 The analytics component has a few test failures, perhaps 
 environment-dependent. This is just to collect the test fixes in one place 
 for convenience when we merge back into 4.x






[jira] [Commented] (SOLR-5488) Fix up test failures for Analytics Component

2014-03-31 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955281#comment-13955281
 ] 

Yonik Seeley commented on SOLR-5488:


bq. If there are no objections, I'll commit this early next week to trunk, and 
if nothing pops out in a few days merge it into 4x.

Remember, this was committed only to trunk not because of test failures (which 
we didn't know about when it was committed to trunk), but to give time to 
solidify the API (which is much harder to change once it's released).  After 
a quick look, there's probably more to do here.  The biggest thing that popped 
out at me was the structure of the response - NamedList in some places that 
should probably be SimpleOrderedMap.  Add wt=json&indent=true to some sample 
requests and it's much easier to see.

 Fix up test failures for Analytics Component
 

 Key: SOLR-5488
 URL: https://issues.apache.org/jira/browse/SOLR-5488
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.7, 5.0
Reporter: Erick Erickson
Assignee: Erick Erickson
 Attachments: SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, 
 SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, 
 SOLR-5488.patch, SOLR-5488.patch, eoe.errors


 The analytics component has a few test failures, perhaps 
 environment-dependent. This is just to collect the test fixes in one place 
 for convenience when we merge back into 4.x






[jira] [Commented] (SOLR-5488) Fix up test failures for Analytics Component

2014-03-31 Thread Steven Bower (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955283#comment-13955283
 ] 

Steven Bower commented on SOLR-5488:


I've not perf-tested since, but the sorts seem to be over things that are very 
short (lists of requests, etc.), so I doubt there will be much of a change.

Also, I moved the call to getTopFilter() out of the loop over requests, so this 
might actually make things a bit faster when there is a large number of requests.

 Fix up test failures for Analytics Component
 

 Key: SOLR-5488
 URL: https://issues.apache.org/jira/browse/SOLR-5488
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.7, 5.0
Reporter: Erick Erickson
Assignee: Erick Erickson
 Attachments: SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, 
 SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, 
 SOLR-5488.patch, SOLR-5488.patch, eoe.errors


 The analytics component has a few test failures, perhaps 
 environment-dependent. This is just to collect the test fixes in one place 
 for convenience when we merge back into 4.x






Re: [VOTE] Lucene / Solr 4.7.1 RC2

2014-03-31 Thread david.w.smi...@gmail.com
+1

SUCCESS! [1:51:37.952160]


On Sat, Mar 29, 2014 at 4:46 AM, Steve Rowe sar...@gmail.com wrote:

 Please vote for the second Release Candidate for Lucene/Solr 4.7.1.

 Download it here:
 
 https://people.apache.org/~sarowe/staging_area/lucene-solr-4.7.1-RC2-rev1582953/
 

 Smoke tester cmdline (from the lucene_solr_4_7 branch):

 python3.2 -u dev-tools/scripts/smokeTestRelease.py \

 https://people.apache.org/~sarowe/staging_area/lucene-solr-4.7.1-RC2-rev1582953/ \
 1582953 4.7.1 /tmp/4.7.1-smoke

 The smoke tester passed for me: SUCCESS! [0:50:29.936732]

 My vote: +1

 Steve




[jira] [Updated] (LUCENE-5052) bitset codec for off heap filters

2014-03-31 Thread Dr Oleg Savrasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dr Oleg Savrasov updated LUCENE-5052:
-

Attachment: LUCENE-5052-1.patch

Only the DOCS_ONLY index option is supported; an IllegalArgumentException is 
thrown otherwise.

 bitset codec for off heap filters
 -

 Key: LUCENE-5052
 URL: https://issues.apache.org/jira/browse/LUCENE-5052
 Project: Lucene - Core
  Issue Type: New Feature
  Components: core/codecs
Reporter: Mikhail Khludnev
  Labels: features
 Fix For: 5.0

 Attachments: LUCENE-5052-1.patch, LUCENE-5052.patch, bitsetcodec.zip, 
 bitsetcodec.zip


 Colleagues,
 When we filter, we don't care about any of the scoring factors, i.e. norms, 
 positions, tf, but it should be fast. The obvious way to handle this is to 
 decode the postings list and cache it in heap (CachingWrapperFilter, Solr's 
 DocSet). Both consuming heap and decoding are expensive. 
 Let's write a posting list as a bitset if df is greater than the segment's 
 maxDoc/8 (what about skip lists? and overall performance?). 
 Beside the codec implementation, the trickiest part to me is to design an API 
 for this. How can we let the app know that a term query doesn't need to be 
 cached in heap, but can be held as an mmapped bitset?
 WDYT?  
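The df > maxDoc/8 cutoff in the description can be sketched in a few lines. This is an illustrative stand-in using java.util.BitSet, not the codec from the attached patch:

```java
import java.util.BitSet;

public class BitsetFilterSketch {
    // The density heuristic from the issue: once a term matches more than
    // maxDoc/8 documents, one bit per document is no bigger than a packed
    // postings list, so a bitset encoding stops wasting space.
    static boolean encodeAsBitset(int docFreq, int maxDoc) {
        return docFreq > maxDoc / 8;
    }

    // Turn a postings list into a bitset; a real codec would write the
    // underlying words to disk so the filter could be mmapped off heap.
    static BitSet toBitset(int[] docIds, int maxDoc) {
        BitSet bits = new BitSet(maxDoc);
        for (int doc : docIds) {
            bits.set(doc);
        }
        return bits;
    }

    public static void main(String[] args) {
        int maxDoc = 64;
        int[] postings = {0, 3, 9, 10, 11, 12, 40, 41, 42, 63};
        System.out.println(encodeAsBitset(postings.length, maxDoc)); // 10 > 8: prints "true"
        System.out.println(toBitset(postings, maxDoc).cardinality()); // prints "10"
    }
}
```

The interesting part the issue asks about — telling the query layer that such a term needs no heap cache — is exactly what this sketch does not answer.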






[jira] [Updated] (LUCENE-5052) bitset codec for off heap filters

2014-03-31 Thread Dr Oleg Savrasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dr Oleg Savrasov updated LUCENE-5052:
-

Attachment: (was: LUCENE-5052-1.patch)

 bitset codec for off heap filters
 -

 Key: LUCENE-5052
 URL: https://issues.apache.org/jira/browse/LUCENE-5052
 Project: Lucene - Core
  Issue Type: New Feature
  Components: core/codecs
Reporter: Mikhail Khludnev
  Labels: features
 Fix For: 5.0

 Attachments: LUCENE-5052.patch, bitsetcodec.zip, bitsetcodec.zip


 Colleagues,
 When we filter, we don't care about any of the scoring factors, i.e. norms, 
 positions, tf, but it should be fast. The obvious way to handle this is to 
 decode the postings list and cache it in heap (CachingWrapperFilter, Solr's 
 DocSet). Both consuming heap and decoding are expensive. 
 Let's write a posting list as a bitset if df is greater than the segment's 
 maxDoc/8 (what about skip lists? and overall performance?). 
 Beside the codec implementation, the trickiest part to me is to design an API 
 for this. How can we let the app know that a term query doesn't need to be 
 cached in heap, but can be held as an mmapped bitset?
 WDYT?  






[jira] [Issue Comment Deleted] (LUCENE-5052) bitset codec for off heap filters

2014-03-31 Thread Dr Oleg Savrasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dr Oleg Savrasov updated LUCENE-5052:
-

Comment: was deleted

(was: Only DOCS_ONLY index option is supported. IllegalArgumentException is 
thrown otherwise.)

 bitset codec for off heap filters
 -

 Key: LUCENE-5052
 URL: https://issues.apache.org/jira/browse/LUCENE-5052
 Project: Lucene - Core
  Issue Type: New Feature
  Components: core/codecs
Reporter: Mikhail Khludnev
  Labels: features
 Fix For: 5.0

 Attachments: LUCENE-5052.patch, bitsetcodec.zip, bitsetcodec.zip


 Colleagues,
 When we filter, we don't care about any of the scoring factors, i.e. norms, 
 positions, tf, but it should be fast. The obvious way to handle this is to 
 decode the postings list and cache it in heap (CachingWrapperFilter, Solr's 
 DocSet). Both consuming heap and decoding are expensive. 
 Let's write a posting list as a bitset if df is greater than the segment's 
 maxDoc/8 (what about skip lists? and overall performance?). 
 Beside the codec implementation, the trickiest part to me is to design an API 
 for this. How can we let the app know that a term query doesn't need to be 
 cached in heap, but can be held as an mmapped bitset?
 WDYT?  






[jira] [Updated] (LUCENE-5052) bitset codec for off heap filters

2014-03-31 Thread Dr Oleg Savrasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dr Oleg Savrasov updated LUCENE-5052:
-

Attachment: LUCENE-5052-1.patch

Only the DOCS_ONLY index option is supported; an IllegalArgumentException is 
thrown for anything else.

 bitset codec for off heap filters
 -

 Key: LUCENE-5052
 URL: https://issues.apache.org/jira/browse/LUCENE-5052
 Project: Lucene - Core
  Issue Type: New Feature
  Components: core/codecs
Reporter: Mikhail Khludnev
  Labels: features
 Fix For: 5.0

 Attachments: LUCENE-5052-1.patch, LUCENE-5052.patch, bitsetcodec.zip, 
 bitsetcodec.zip


 Colleagues,
 When we filter, we don't care about any of the scoring factors, i.e. norms, 
 positions, tf, but it should be fast. The obvious way to handle this is to 
 decode the postings list and cache it in heap (CachingWrapperFilter, Solr's 
 DocSet). Both consuming heap and decoding are expensive. 
 Let's write a posting list as a bitset if df is greater than the segment's 
 maxDoc/8 (what about skip lists? and overall performance?). 
 Beside the codec implementation, the trickiest part to me is to design an API 
 for this. How can we let the app know that a term query doesn't need to be 
 cached in heap, but can be held as an mmapped bitset?
 WDYT?  






Re: [JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 81070 - Failure!

2014-03-31 Thread Michael McCandless
Doesn't repro for me but I think it's related to LUCENE-5544; it's
happening in the very last part of IW.rollbackInternal:

try {
  processEvents(false, true);
} finally {
  notifyAll();
}

Down in DocumentsWriterFlushQueue, the assert is angry that we are
sync'd on the IW instance.

Is it even necessary to process events after rollback has finished?
What could the events even do (the IW is closed)...
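The failing assert boils down to a method that requires the caller not to hold a monitor being invoked from inside a synchronized block. A stripped-down sketch, with a plain Object standing in for IndexWriter and an explicit check standing in for the JVM assert in DocumentsWriterFlushQueue.forcePurge:

```java
public class HoldsLockSketch {
    // Stand-in for forcePurge's precondition: the caller must NOT
    // hold the writer's monitor when purging.
    static void forcePurge(Object writer) {
        if (Thread.holdsLock(writer)) {
            throw new AssertionError("must not hold the writer lock");
        }
        // ... purge work would happen here ...
    }

    public static void main(String[] args) {
        Object writer = new Object();
        forcePurge(writer); // no lock held: fine

        boolean tripped = false;
        // rollbackInternal's tail runs synchronized on the IndexWriter, so an
        // event that reaches forcePurge from there trips the precondition.
        synchronized (writer) {
            try {
                forcePurge(writer);
            } catch (AssertionError expected) {
                tripped = true;
            }
        }
        System.out.println(tripped); // prints "true"
    }
}
```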

Mike McCandless

http://blog.mikemccandless.com


On Mon, Mar 31, 2014 at 5:16 AM,  buil...@flonkings.com wrote:
 Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/81070/

 1 tests failed.
 REGRESSION:  org.apache.lucene.index.TestIndexWriterWithThreads.testRollbackAndCommitWithThreads

 Error Message:
 Captured an uncaught exception in thread: Thread[id=164, name=Thread-98, state=RUNNABLE, group=TGRP-TestIndexWriterWithThreads]

 Stack Trace:
 com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=164, name=Thread-98, state=RUNNABLE, group=TGRP-TestIndexWriterWithThreads]
 Caused by: java.lang.RuntimeException: java.lang.AssertionError
 	at __randomizedtesting.SeedInfo.seed([A2CAC9704F740906]:0)
 	at org.apache.lucene.index.TestIndexWriterWithThreads$1.run(TestIndexWriterWithThreads.java:620)
 Caused by: java.lang.AssertionError
 	at org.apache.lucene.index.DocumentsWriterFlushQueue.forcePurge(DocumentsWriterFlushQueue.java:135)
 	at org.apache.lucene.index.DocumentsWriter.purgeBuffer(DocumentsWriter.java:196)
 	at org.apache.lucene.index.IndexWriter.purge(IndexWriter.java:4649)
 	at org.apache.lucene.index.IndexWriter.doAfterSegmentFlushed(IndexWriter.java:4662)
 	at org.apache.lucene.index.DocumentsWriter$MergePendingEvent.process(DocumentsWriter.java:699)
 	at org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:4688)
 	at org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:4680)
 	at org.apache.lucene.index.IndexWriter.rollbackInternal(IndexWriter.java:2184)
 	at org.apache.lucene.index.IndexWriter.rollback(IndexWriter.java:2085)
 	at org.apache.lucene.index.TestIndexWriterWithThreads$1.run(TestIndexWriterWithThreads.java:576)




 Build Log:
 [...truncated 690 lines...]
    [junit4] Suite: org.apache.lucene.index.TestIndexWriterWithThreads
    [junit4]   2> mar 31, 2014 8:14:01 PM com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler uncaughtException
    [junit4]   2> WARNING: Uncaught exception in thread: Thread[Thread-98,5,TGRP-TestIndexWriterWithThreads]
    [junit4]   2> java.lang.RuntimeException: java.lang.AssertionError
    [junit4]   2> 	at __randomizedtesting.SeedInfo.seed([A2CAC9704F740906]:0)
    [junit4]   2> 	at org.apache.lucene.index.TestIndexWriterWithThreads$1.run(TestIndexWriterWithThreads.java:620)
    [junit4]   2> Caused by: java.lang.AssertionError
    [junit4]   2> 	at org.apache.lucene.index.DocumentsWriterFlushQueue.forcePurge(DocumentsWriterFlushQueue.java:135)
    [junit4]   2> 	at org.apache.lucene.index.DocumentsWriter.purgeBuffer(DocumentsWriter.java:196)
    [junit4]   2> 	at org.apache.lucene.index.IndexWriter.purge(IndexWriter.java:4649)
    [junit4]   2> 	at org.apache.lucene.index.IndexWriter.doAfterSegmentFlushed(IndexWriter.java:4662)
    [junit4]   2> 	at org.apache.lucene.index.DocumentsWriter$MergePendingEvent.process(DocumentsWriter.java:699)
    [junit4]   2> 	at org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:4688)
    [junit4]   2> 	at org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:4680)
    [junit4]   2> 	at org.apache.lucene.index.IndexWriter.rollbackInternal(IndexWriter.java:2184)
    [junit4]   2> 	at org.apache.lucene.index.IndexWriter.rollback(IndexWriter.java:2085)
    [junit4]   2> 	at org.apache.lucene.index.TestIndexWriterWithThreads$1.run(TestIndexWriterWithThreads.java:576)
    [junit4]   2> 
    [junit4]   2> NOTE: reproduce with: ant test -Dtestcase=TestIndexWriterWithThreads -Dtests.method=testRollbackAndCommitWithThreads -Dtests.seed=A2CAC9704F740906 -Dtests.slow=true -Dtests.locale=da -Dtests.timezone=Australia/LHI -Dtests.file.encoding=UTF-8
    [junit4] ERROR   0.94s J2 | TestIndexWriterWithThreads.testRollbackAndCommitWithThreads
    [junit4] Throwable #1: java.lang.AssertionError
    [junit4] 	at org.apache.lucene.index.TestIndexWriterWithThreads.testRollbackAndCommitWithThreads(TestIndexWriterWithThreads.java:632)
    [junit4] 	at java.lang.Thread.run(Thread.java:724)
    [junit4] Throwable #2: com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=164, name=Thread-98, state=RUNNABLE, group=TGRP-TestIndexWriterWithThreads]
    [junit4] Caused by: 

[jira] [Commented] (SOLR-5934) LBHttpSolrServer exception handling improvement and small test improvements

2014-03-31 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955349#comment-13955349
 ] 

ASF subversion and git services commented on SOLR-5934:
---

Commit 1583369 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1583369 ]

SOLR-5934: Commit again to 4x - a different JIRA caused the fails - 
LBHttpSolrServer exception handling improvement and small test improvements.

 LBHttpSolrServer exception handling improvement and small test improvements
 ---

 Key: SOLR-5934
 URL: https://issues.apache.org/jira/browse/SOLR-5934
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.8, 5.0
Reporter: Gregory Chanan
Assignee: Mark Miller
Priority: Minor
 Fix For: 4.8, 5.0

 Attachments: SOLR-5934.patch


 The error handling in LBHttpSolrServer can be simplified -- right now almost 
 identical code is run whether the server is a zombie or not, which sometimes 
 doesn't make complete sense.  For example, the zombie code goes through some 
 effort to throw an exception or save the exception based on the type of 
 exception, but the end result is the same -- an exception is thrown.  It's 
 simpler if the same code is run each time.
 Also, made some minor changes to test cases:
 - made sure SolrServer.shutdown is called in finally, so it happens even if a 
 request throws an exception
 - got rid of some unnecessary checks
 - normalized some functions/variables so the functions are public scope and 
 the variables aren't






[jira] [Commented] (SOLR-5939) Wrong request potentially on Error from StreamingSolrServer

2014-03-31 Thread Per Steffensen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955356#comment-13955356
 ] 

Per Steffensen commented on SOLR-5939:
--

Well, I happened to be reading the code trying to understand how it works, 
during my work with SOLR-4470. Instead of fixing it (I had to concentrate on 
SOLR-4470 stuff) I just added a few FIXME lines around where the problem is. 
[~janhoy] is handling the SOLR-4470 patch now, and prefers that I open an issue 
(this SOLR-5939) about the problem and just reference that from the code 
instead of the FIXME description. So yes, I mentioned it before, in comments to 
SOLR-4470.

I mentioned another issue with SolrCmdDistributor long time ago (see 
SOLR-3428). I do not know exactly what happened to that one. We have the fix in 
our version of Solr, but I am not sure what you did about it.

 Wrong request potentially on Error from StreamingSolrServer
 ---

 Key: SOLR-5939
 URL: https://issues.apache.org/jira/browse/SOLR-5939
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 5.0
Reporter: Per Steffensen
  Labels: error, retry
 Fix For: 4.8, 5.0

 Attachments: SOLR-5939_demo_problem.patch


 In _StreamingSolrServer.getSolrServer|ConcurrentUpdateSolrServer.handleError_ 
 the _SolrCmdDistributor.Req req_ parameter is used for the _req_ field of all 
 _error_'s created. This is also true for subsequent requests sent through the 
 returned ConcurrentUpdateSolrServer. This means, among other things, that the 
 wrong request (the first request sent through this _ConcurrentUpdateSolrServer_) 
 may be retried in case of errors executing one of the subsequent requests.
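The capture bug described above can be reproduced in miniature. The Req/ErrorRecord classes below are hypothetical stand-ins, not Solr's actual types:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class StaleReqSketch {
    // Hypothetical stand-ins for SolrCmdDistributor.Req and its error record.
    static class Req { final String id; Req(String id) { this.id = id; } }
    static class ErrorRecord {
        final Req req; final String cause;
        ErrorRecord(Req req, String cause) { this.req = req; this.cause = cause; }
    }

    public static void main(String[] args) {
        List<ErrorRecord> errors = new ArrayList<>();

        // The error handler is built once, capturing the Req that was current
        // when the server was created -- as handleError does with its req field.
        Req first = new Req("req-1");
        Consumer<String> handleError = cause -> errors.add(new ErrorRecord(first, cause));

        // A later request fails, but the recorded error blames req-1, so a
        // retry based on error.req would resend the wrong request.
        Req second = new Req("req-2");
        handleError.accept("failed while executing " + second.id);

        System.out.println(errors.get(0).req.id); // prints "req-1"
    }
}
```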






[jira] [Comment Edited] (SOLR-5939) Wrong request potentially on Error from StreamingSolrServer

2014-03-31 Thread Per Steffensen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955356#comment-13955356
 ] 

Per Steffensen edited comment on SOLR-5939 at 3/31/14 4:32 PM:
---

Well, I happened to be reading the code trying to understand how it works, 
during my work with SOLR-4470. I realized that this piece of code would not 
work - by code inspection alone. Instead of fixing it (I had to concentrate on 
SOLR-4470 stuff) I just added a few FIXME lines around where the problem is. 
[~janhoy] is handling the SOLR-4470 patch now, and prefers that I open an issue 
(this SOLR-5939) about the problem and just reference that from the code 
instead of the FIXME description. So yes, I mentioned it before, in comments to 
SOLR-4470.

I mentioned another issue with SolrCmdDistributor long time ago (see 
SOLR-3428). I do not know exactly what happened to that one. We have the fix in 
our version of Solr, but I am not sure what you did about it.


was (Author: steff1193):
Well, I happened to be reading the code trying to understand how it works, 
during my work with SOLR-4470. Instead of fixing it (I had to concentrate on 
SOLR-4470 stuff) I just added a few FIXME lines around where the problem is. 
[~janhoy] is handling the SOLR-4470 patch now, and prefers that I open an issue 
(this SOLR-5939) about the problem and just reference that from the code 
instead of the FIXME description. So yes, I mentioned it before, in comments to 
SOLR-4470.

I mentioned another issue with SolrCmdDistributor long time ago (see 
SOLR-3428). I do not know exactly what happened to that one. We have the fix in 
our version of Solr, but I am not sure what you did about it.

 Wrong request potentially on Error from StreamingSolrServer
 ---

 Key: SOLR-5939
 URL: https://issues.apache.org/jira/browse/SOLR-5939
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 5.0
Reporter: Per Steffensen
  Labels: error, retry
 Fix For: 4.8, 5.0

 Attachments: SOLR-5939_demo_problem.patch


 In _StreamingSolrServer.getSolrServer|ConcurrentUpdateSolrServer.handleError_ 
 the _SolrCmdDistributor.Req req_ parameter is used for the _req_ field of all 
 _error_'s created. This is also true for subsequent requests sent through the 
 returned ConcurrentUpdateSolrServer. This means, among other things, that the 
 wrong request (the first request sent through this _ConcurrentUpdateSolrServer_) 
 may be retried in case of errors executing one of the subsequent requests.






[jira] [Commented] (SOLR-5939) Wrong request potentially on Error from StreamingSolrServer

2014-03-31 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955360#comment-13955360
 ] 

Mark Miller commented on SOLR-5939:
---

Just wanted to make sure I wasn't having deja vu - it def deserves its own 
issue.

 Wrong request potentially on Error from StreamingSolrServer
 ---

 Key: SOLR-5939
 URL: https://issues.apache.org/jira/browse/SOLR-5939
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 5.0
Reporter: Per Steffensen
  Labels: error, retry
 Fix For: 4.8, 5.0

 Attachments: SOLR-5939_demo_problem.patch


 In _StreamingSolrServer.getSolrServer|ConcurrentUpdateSolrServer.handleError_ 
 the _SolrCmdDistributor.Req req_ parameter is used for the _req_ field of all 
 _error_'s created. This is also true for subsequent requests sent through the 
 returned ConcurrentUpdateSolrServer. This means, among other things, that the 
 wrong request (the first request sent through this _ConcurrentUpdateSolrServer_) 
 may be retried in case of errors executing one of the subsequent requests.






[jira] [Commented] (SOLR-5908) Make REQUESTSTATUS call non-blocking and non-blocked

2014-03-31 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955372#comment-13955372
 ] 

Shalin Shekhar Mangar commented on SOLR-5908:
-

+1

Looks good to me!

 Make REQUESTSTATUS call non-blocking and non-blocked
 

 Key: SOLR-5908
 URL: https://issues.apache.org/jira/browse/SOLR-5908
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Anshum Gupta
Assignee: Anshum Gupta
 Attachments: SOLR-5908.patch


 Currently REQUESTSTATUS Collection API call is blocked by any other call in 
 the OCP work queue.
 Make it independent and non-blocked/non-blocking.
 This would be handled as a part of having the OCP multi-threaded but I'm 
 opening this issue to explore other possible options of handling this.
 If the final fix happens via SOLR-5681, will resolve it when SOLR-5681 gets 
 resolved.






[jira] [Updated] (SOLR-5859) Harden the Overseer restart mechanism

2014-03-31 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-5859:
-

Attachment: SOLR-5859.patch

 Harden the Overseer restart mechanism
 -

 Key: SOLR-5859
 URL: https://issues.apache.org/jira/browse/SOLR-5859
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-5859.patch, SOLR-5859.patch, SOLR-5859.patch


 SOLR-5476 depends on Overseer restart. The current strategy is to remove the 
 zk node for leader election, wait for STATUS_UPDATE_DELAY + 100 ms, and 
 start the new overseer.
 Though overseer ops are short running, it is not a 100% foolproof strategy 
 because if an operation takes longer than the wait period there can be a race 
 condition.






[jira] [Updated] (SOLR-5859) Harden the Overseer restart mechanism

2014-03-31 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-5859:
-

Attachment: (was: SOLR-5859.patch)

 Harden the Overseer restart mechanism
 -

 Key: SOLR-5859
 URL: https://issues.apache.org/jira/browse/SOLR-5859
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-5859.patch, SOLR-5859.patch, SOLR-5859.patch





[jira] [Commented] (SOLR-5936) Deprecate non-Trie-based numeric (and date) field types in 4.x and remove them from 5.0

2014-03-31 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955381#comment-13955381
 ] 

Hoss Man commented on SOLR-5936:


bq. +1 to rename for 5.0

What exactly do you suggest renaming these Solr FieldTypes to?

If you are suggesting TrieFooField -> FooField, then I am a _*HUGE*_ -1 to 
that idea.

It's one thing to say that things like the (text based) IntField are deprecated 
and will not work in 5.0, and people have to reindex. But if we _also_ rename 
TrieIntField to IntField, then people who are still using the (text based) 
IntField in their schema.xml and attempt upgrading will get really weird, 
hard-to-understand errors.

If folks think Trie is a confusing word in the name and want to change that 
then fine -- I'm certainly open to the idea --  But we really should not re-use 
the name of an existing (deprecated/removed) field type in a way that isn't 
backcompat.



In any event, a lot of what's being discussed here in comments feels like it 
should really be tracked in discrete issues (these can all be dealt with 
independent of this issue, and each other):

* better jdocs for the trie numeric fields
* renaming the trie numeric fields
* simplifying configuration of the trie numeric fields

...let's please keep this issue focused on the deprecation and removal of the 
non-trie fields, and folks who care about these other ideas can file other 
JIRAs to track them.

 Deprecate non-Trie-based numeric (and date) field types in 4.x and remove 
 them from 5.0
 ---

 Key: SOLR-5936
 URL: https://issues.apache.org/jira/browse/SOLR-5936
 Project: Solr
  Issue Type: Task
  Components: Schema and Analysis
Reporter: Steve Rowe
Assignee: Steve Rowe
Priority: Minor
 Fix For: 4.8, 5.0

 Attachments: SOLR-5936.branch_4x.patch, SOLR-5936.branch_4x.patch, 
 SOLR-5936.branch_4x.patch


 We've been discouraging people from using non-Trie numeric/date field types 
 for years, it's time we made it official.






[jira] [Commented] (SOLR-5488) Fix up test failures for Analytics Component

2014-03-31 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955382#comment-13955382
 ] 

Erick Erickson commented on SOLR-5488:
--

OK, I'll probably commit this to trunk tonight and hold off on merging into 4.x 
for a bit to address the interface questions. I want to get some assurance that 
the test errors are gone in all environments.

I'd _really_ like to get them addressed and be able to merge in the near future 
though, but that'll be another JIRA I'd expect.

 Fix up test failures for Analytics Component
 

 Key: SOLR-5488
 URL: https://issues.apache.org/jira/browse/SOLR-5488
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.7, 5.0
Reporter: Erick Erickson
Assignee: Erick Erickson
 Attachments: SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, 
 SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, SOLR-5488.patch, 
 SOLR-5488.patch, SOLR-5488.patch, eoe.errors


 The analytics component has a few test failures, perhaps 
 environment-dependent. This is just to collect the test fixes in one place 
 for convenience when we merge back into 4.x






[jira] [Commented] (SOLR-5931) solrcore.properties is not reloaded when core is reloaded

2014-03-31 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955386#comment-13955386
 ] 

Shalin Shekhar Mangar commented on SOLR-5931:
-

Yeah, this makes sense. The properties should be reloaded upon core reload. The 
thing is that I can't find how these properties make their way into DIH. I'll 
have to set up an example and step through with a debugger. I don't think I'll 
find the time this week.

 solrcore.properties is not reloaded when core is reloaded
 -

 Key: SOLR-5931
 URL: https://issues.apache.org/jira/browse/SOLR-5931
 Project: Solr
  Issue Type: Bug
  Components: multicore
Affects Versions: 4.7
Reporter: Gunnlaugur Thor Briem
Assignee: Shalin Shekhar Mangar
Priority: Minor

 When I change solrcore.properties for a core, and then reload the core, the 
 previous values of the properties in that file are still in effect. If I 
 *unload* the core and then add it back, in the “Core Admin” section of the 
 admin UI, then the changes in solrcore.properties do take effect.
 My specific test case is a DataImportHandler where {{db-data-config.xml}} 
 uses a property to decide which DB host to talk to:
 {code:xml}
 dataSource driver=org.postgresql.Driver name=meta 
 url=jdbc:postgresql://${dbhost}/${solr.core.name} .../
 {code}
 When I change that {{dbhost}} property in {{solrcore.properties}} and reload 
 the core, the next dataimport operation still connects to the previous DB 
 host. Reloading the dataimport config does not help. I have to unload the 
 core (or fully restart the whole Solr) for the properties change to take 
 effect.
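For context, Solr-style ${property} substitution can be sketched like this (a hypothetical helper, not Solr's actual implementation): the key point is that substitution resolves against whatever Properties object was loaded at core-creation time, so if that object is cached across a reload, the old values keep winning.

```java
import java.util.Properties;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PropertySubstitutor {
    private static final Pattern VAR = Pattern.compile("\\$\\{([^}]+)\\}");

    // Replace each ${name} with its value from props; unresolved
    // placeholders are left as-is.
    public static String substitute(String template, Properties props) {
        Matcher m = VAR.matcher(template);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            String value = props.getProperty(m.group(1), m.group(0));
            m.appendReplacement(sb, Matcher.quoteReplacement(value));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        // If this Properties object is loaded once and cached, a changed
        // solrcore.properties on disk never reaches the substitution.
        Properties p = new Properties();
        p.setProperty("dbhost", "db1.example.com");
        p.setProperty("solr.core.name", "core1");
        System.out.println(substitute("jdbc:postgresql://${dbhost}/${solr.core.name}", p));
        // -> jdbc:postgresql://db1.example.com/core1
    }
}
```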






[jira] [Commented] (SOLR-5931) solrcore.properties is not reloaded when core is reloaded

2014-03-31 Thread Gary Yue (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955475#comment-13955475
 ] 

Gary Yue commented on SOLR-5931:


Is there a good workaround in the meantime?
We need a quick way to switch the master URL on individual slaves in case of 
site issues. Updating solrconfig.xml directly for a particular slave doesn't 
work well because the change will get overwritten on each replication (unless 
we change this in the master's solrconfig as well).

Also, it looks like I can't even call CREATE with the new property, because 
starting in Solr 4.3+ it will throw an error and ask you to call RELOAD 
instead (whereas in Solr 3.x this is essentially doing a RELOAD with new 
properties).

thx!

 solrcore.properties is not reloaded when core is reloaded
 -

 Key: SOLR-5931
 URL: https://issues.apache.org/jira/browse/SOLR-5931
 Project: Solr
  Issue Type: Bug
  Components: multicore
Affects Versions: 4.7
Reporter: Gunnlaugur Thor Briem
Assignee: Shalin Shekhar Mangar
Priority: Minor







[jira] [Commented] (SOLR-5935) SolrCloud hangs under certain conditions

2014-03-31 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-5935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955494#comment-13955494
 ] 

Rafał Kuć commented on SOLR-5935:
-

Low indexing rate and high indexing - whenever queries are present, the 
cluster eventually goes into a locked state. When locked it doesn't respond to 
any requests - queries, indexing, or even loading admin pages.

 SolrCloud hangs under certain conditions
 

 Key: SOLR-5935
 URL: https://issues.apache.org/jira/browse/SOLR-5935
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.6.1
Reporter: Rafał Kuć
Priority: Critical
 Attachments: thread dumps.zip


 As discussed in a mailing list - let's try to find the reason why, under 
 certain conditions, SolrCloud can hang.
 I have an issue with one of the SolrCloud deployments. Six machines, a 
 collection with 6 shards with a replication factor of 3. It all runs on 6 
 physical servers, each with 24 cores. We've indexed about 32 million 
 documents and everything was fine until that point.
 Now, during performance tests, we run into an issue - SolrCloud hangs
 when querying and indexing are run at the same time. First we see a
 normal load on the machines, then the load starts to drop and thread
 dumps show numerous threads like this:
 {noformat}
 Thread 12624: (state = BLOCKED)
  - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information 
 may be imprecise)
  - java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14, 
 line=186 (Compiled frame)
  - 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await() 
 @bci=42, line=2043 (Compiled frame)
  - org.apache.http.pool.PoolEntryFuture.await(java.util.Date) @bci=50, 
 line=131 (Compiled frame)
  - 
 org.apache.http.pool.AbstractConnPool.getPoolEntryBlocking(java.lang.Object, 
 java.lang.Object, long, java.util.concurrent.TimeUnit, 
 org.apache.http.pool.PoolEntryFuture) @bci=431, line=281 (Compiled frame)
  - 
 org.apache.http.pool.AbstractConnPool.access$000(org.apache.http.pool.AbstractConnPool,
  java.lang.Object, java.lang.Object, long, java.util.concurrent.TimeUnit, 
 org.apache.http.pool.PoolEntryFuture) @bci=8, line=62 (Compiled frame)
  - org.apache.http.pool.AbstractConnPool$2.getPoolEntry(long, 
 java.util.concurrent.TimeUnit) @bci=15, line=176 (Compiled frame)
  - org.apache.http.pool.AbstractConnPool$2.getPoolEntry(long, 
 java.util.concurrent.TimeUnit) @bci=3, line=169 (Compiled frame)
  - org.apache.http.pool.PoolEntryFuture.get(long, 
 java.util.concurrent.TimeUnit) @bci=38, line=100 (Compiled frame)
  - 
 org.apache.http.impl.conn.PoolingClientConnectionManager.leaseConnection(java.util.concurrent.Future,
  long, java.util.concurrent.TimeUnit) @bci=4, line=212 (Compiled frame)
  - 
 org.apache.http.impl.conn.PoolingClientConnectionManager$1.getConnection(long,
  java.util.concurrent.TimeUnit) @bci=10, line=199 (Compiled frame)
  - 
 org.apache.http.impl.client.DefaultRequestDirector.execute(org.apache.http.HttpHost,
  org.apache.http.HttpRequest, org.apache.http.protocol.HttpContext) @bci=259, 
 line=456 (Compiled frame)
  - 
 org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.HttpHost,
  org.apache.http.HttpRequest, org.apache.http.protocol.HttpContext) @bci=344, 
 line=906 (Compiled frame)
  - 
 org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.client.methods.HttpUriRequest,
  org.apache.http.protocol.HttpContext) @bci=21, line=805 (Compiled frame)
  - 
 org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.client.methods.HttpUriRequest)
  @bci=6, line=784 (Compiled frame)
  - 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(org.apache.solr.client.solrj.SolrRequest,
  org.apache.solr.client.solrj.ResponseParser) @bci=1175, line=395 
 (Interpreted frame)
  - 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(org.apache.solr.client.solrj.SolrRequest)
  @bci=17, line=199 (Compiled frame)
  - 
 org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(org.apache.solr.client.solrj.impl.LBHttpSolrServer$Req)
  @bci=132, line=285 (Interpreted frame)
  - 
 org.apache.solr.handler.component.HttpShardHandlerFactory.makeLoadBalancedRequest(org.apache.solr.client.solrj.request.QueryRequest,
  java.util.List) @bci=13, line=214 (Compiled frame)
  - org.apache.solr.handler.component.HttpShardHandler$1.call() @bci=246, 
 line=161 (Compiled frame)
  - org.apache.solr.handler.component.HttpShardHandler$1.call() @bci=1, 
 line=118 (Interpreted frame)
  - java.util.concurrent.FutureTask$Sync.innerRun() @bci=29, line=334 
 (Interpreted frame)
  - java.util.concurrent.FutureTask.run() @bci=4, line=166 (Compiled frame)
  - java.util.concurrent.Executors$RunnableAdapter.call() @bci=4, line=471 
 (Interpreted 

Re: [VOTE] Lucene / Solr 4.7.1 RC2

2014-03-31 Thread Adrien Grand
+1
SUCCESS! [1:30:20.918150]

On Mon, Mar 31, 2014 at 5:40 PM, david.w.smi...@gmail.com
david.w.smi...@gmail.com wrote:
 +1

 SUCCESS! [1:51:37.952160]



 On Sat, Mar 29, 2014 at 4:46 AM, Steve Rowe sar...@gmail.com wrote:

 Please vote for the second Release Candidate for Lucene/Solr 4.7.1.

 Download it here:

 https://people.apache.org/~sarowe/staging_area/lucene-solr-4.7.1-RC2-rev1582953/

 Smoke tester cmdline (from the lucene_solr_4_7 branch):

 python3.2 -u dev-tools/scripts/smokeTestRelease.py \

 https://people.apache.org/~sarowe/staging_area/lucene-solr-4.7.1-RC2-rev1582953/
 \
 1582953 4.7.1 /tmp/4.7.1-smoke

 The smoke tester passed for me: SUCCESS! [0:50:29.936732]

 My vote: +1

 Steve
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org





-- 
Adrien




[jira] [Created] (SOLR-5940) Make post.jar report back detailed error in case of 400 responses

2014-03-31 Thread Sameer Maggon (JIRA)
Sameer Maggon created SOLR-5940:
---

 Summary: Make post.jar report back detailed error in case of 400 
responses
 Key: SOLR-5940
 URL: https://issues.apache.org/jira/browse/SOLR-5940
 Project: Solr
  Issue Type: Improvement
  Components: scripts and tools
Affects Versions: 4.7
Reporter: Sameer Maggon


Currently post.jar does not print the detailed error message encountered 
during indexing. In certain use cases it's helpful to see the error message so 
that clients can take appropriate action.

In 4.7, here's what gets shown if there is an error during indexing:

SimplePostTool: WARNING: Solr returned an error #400 Bad Request
SimplePostTool: WARNING: IOException while reading response: 
java.io.IOException: Server returned HTTP response code: 400 for URL: 
http://localhost:8983/solr/update

It would be helpful to print out the msg that is returned from Solr.
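A hedged sketch of the proposed fix (helper names are invented; SimplePostTool's real structure may differ): for a 4xx response, HttpURLConnection exposes the body on getErrorStream() rather than getInputStream(), so reading that stream recovers Solr's error message instead of only the status code.

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.nio.charset.StandardCharsets;

public class ErrorBodyReader {
    // Drain a stream to a String. Decoding chunk-by-chunk is fine for the
    // ASCII error messages shown here; a production version would buffer bytes.
    public static String readAll(InputStream in) {
        StringBuilder sb = new StringBuilder();
        byte[] buf = new byte[4096];
        try {
            for (int n; (n = in.read(buf)) != -1; ) {
                sb.append(new String(buf, 0, n, StandardCharsets.UTF_8));
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return sb.toString();
    }

    // Sketch: print Solr's error body, not just "error #400".
    public static void reportError(HttpURLConnection conn) throws IOException {
        if (conn.getResponseCode() >= 400) {
            InputStream err = conn.getErrorStream();
            if (err != null) {
                System.err.println("SimplePostTool: WARNING: Solr returned an error #"
                        + conn.getResponseCode() + ": " + readAll(err));
            }
        }
    }
}
```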







[jira] [Commented] (SOLR-5935) SolrCloud hangs under certain conditions

2014-03-31 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955533#comment-13955533
 ] 

Mark Miller commented on SOLR-5935:
---

I wonder if you are hitting a connection pool limit or something. Have you been 
able to grab any stack traces during the hang?

 SolrCloud hangs under certain conditions
 

 Key: SOLR-5935
 URL: https://issues.apache.org/jira/browse/SOLR-5935
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.6.1
Reporter: Rafał Kuć
Priority: Critical
 Attachments: thread dumps.zip



[jira] [Commented] (SOLR-5935) SolrCloud hangs under certain conditions

2014-03-31 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-5935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955540#comment-13955540
 ] 

Rafał Kuć commented on SOLR-5935:
-

Mark - thread dumps are attached in the zip file, made with jstack. In the 
archive: stack_1 and stack_2 were taken when Solr was still able to respond, 
stack_3 when Solr was barely alive (with more than 80-90% of requests erroring 
as reported by JMeter), and stack_4 when Solr was not responding at all.

 SolrCloud hangs under certain conditions
 

 Key: SOLR-5935
 URL: https://issues.apache.org/jira/browse/SOLR-5935
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.6.1
Reporter: Rafał Kuć
Priority: Critical
 Attachments: thread dumps.zip



[jira] [Commented] (SOLR-5935) SolrCloud hangs under certain conditions

2014-03-31 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955541#comment-13955541
 ] 

Yonik Seeley commented on SOLR-5935:


bq. I wonder if you are hitting a connection pool limit or something.

That was my thought - sounds like distributed deadlock (the same reason we 
don't have a practical limit on the number of threads configured in jetty).
We should not have a connection limit for any request that could possibly cause 
another synchronous request to come back to us.
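A toy model of the suspected distributed deadlock, using a Semaphore as a stand-in for the bounded HTTP connection pool (illustrative only; Solr's real pool is HttpClient's): once every lease is held by a request that is itself waiting on a sub-request, no further lease can ever be granted, and a bounded acquire at least makes the stall observable instead of blocking forever.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class PoolStallDemo {
    // Lease a "connection" with a timeout instead of blocking indefinitely,
    // the way the threads in the attached dumps do.
    public static boolean leaseWithTimeout(Semaphore pool, long millis) {
        try {
            return pool.tryAcquire(millis, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    public static void main(String[] args) {
        Semaphore pool = new Semaphore(2); // pool limit of 2 connections
        pool.tryAcquire(2);                // both leased by in-flight requests...
        // ...which are blocked waiting on a sub-request that also needs a lease:
        System.out.println(leaseWithTimeout(pool, 100)); // no permit -> deadlock
    }
}
```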

 SolrCloud hangs under certain conditions
 

 Key: SOLR-5935
 URL: https://issues.apache.org/jira/browse/SOLR-5935
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.6.1
Reporter: Rafał Kuć
Priority: Critical
 Attachments: thread dumps.zip



[jira] [Commented] (SOLR-5935) SolrCloud hangs under certain conditions

2014-03-31 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955546#comment-13955546
 ] 

Mark Miller commented on SOLR-5935:
---

bq. Mark - thread dumps are attached in the zip file,

Sorry - was following along via email.

Yeah, these are all blocked in {{leaseConnection}}. Seems like a connection pool 
configuration issue. I think we recently exposed config for some of that to the 
user, but I'll have to go dig that up.
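
For reference, a minimal sketch of the kind of pool configuration meant here, set 
on the shard handler in {{solr.xml}}. The parameter names ({{socketTimeout}}, 
{{connTimeout}}, {{maxConnectionsPerHost}}) come from {{HttpShardHandlerFactory}}; 
whether each is honored should be verified against the Solr version in use (4.6.1 
in this report), and the values shown are illustrative, not recommendations:

```xml
<!-- solr.xml: hypothetical shard handler pool settings.
     Parameter names per HttpShardHandlerFactory; verify support
     in your Solr version before relying on them. -->
<shardHandlerFactory name="shardHandlerFactory"
                     class="HttpShardHandlerFactory">
  <!-- read timeout for inter-node requests, in ms -->
  <int name="socketTimeout">600000</int>
  <!-- connect timeout, in ms -->
  <int name="connTimeout">60000</int>
  <!-- per-host cap on pooled connections; the stock default is
       small, so heavy concurrent query+index load can exhaust it
       and block callers in leaseConnection -->
  <int name="maxConnectionsPerHost">100</int>
</shardHandlerFactory>
```

If every pooled connection to a host is leased, further requests wait in 
{{leaseConnection}} exactly as the attached dumps show, so raising the per-host 
cap (or lowering timeouts so stuck connections are reclaimed) is the usual 
mitigation.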

 SolrCloud hangs under certain conditions
 

 Key: SOLR-5935
 URL: https://issues.apache.org/jira/browse/SOLR-5935
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.6.1
Reporter: Rafał Kuć
Priority: Critical
 Attachments: thread dumps.zip


 As discussed on the mailing list - let's try to find out why, under certain 
 conditions, SolrCloud can hang.
 I have an issue with one of the SolrCloud deployments. Six machines, a 
 collection with 6 shards with a replication factor of 3. It all runs on 6 
 physical servers, each with 24 cores. We've indexed about 32 million 
 documents and everything was fine until that point.
 Now, during performance tests, we ran into an issue - SolrCloud hangs
 when querying and indexing run at the same time. First we see a
 normal load on the machines, then the load starts to drop, and the thread
 dumps show numerous threads like this:
 {noformat}
 Thread 12624: (state = BLOCKED)
  - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information 
 may be imprecise)
  - java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14, 
 line=186 (Compiled frame)
  - 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await() 
 @bci=42, line=2043 (Compiled frame)
  - org.apache.http.pool.PoolEntryFuture.await(java.util.Date) @bci=50, 
 line=131 (Compiled frame)
  - 
 org.apache.http.pool.AbstractConnPool.getPoolEntryBlocking(java.lang.Object, 
 java.lang.Object, long, java.util.concurrent.TimeUnit, 
 org.apache.http.pool.PoolEntryFuture) @bci=431, line=281 (Compiled frame)
  - 
 org.apache.http.pool.AbstractConnPool.access$000(org.apache.http.pool.AbstractConnPool,
  java.lang.Object, java.lang.Object, long, java.util.concurrent.TimeUnit, 
 org.apache.http.pool.PoolEntryFuture) @bci=8, line=62 (Compiled frame)
  - org.apache.http.pool.AbstractConnPool$2.getPoolEntry(long, 
 java.util.concurrent.TimeUnit) @bci=15, line=176 (Compiled frame)
  - org.apache.http.pool.AbstractConnPool$2.getPoolEntry(long, 
 java.util.concurrent.TimeUnit) @bci=3, line=169 (Compiled frame)
  - org.apache.http.pool.PoolEntryFuture.get(long, 
 java.util.concurrent.TimeUnit) @bci=38, line=100 (Compiled frame)
  - 
 org.apache.http.impl.conn.PoolingClientConnectionManager.leaseConnection(java.util.concurrent.Future,
  long, java.util.concurrent.TimeUnit) @bci=4, line=212 (Compiled frame)
  - 
 org.apache.http.impl.conn.PoolingClientConnectionManager$1.getConnection(long,
  java.util.concurrent.TimeUnit) @bci=10, line=199 (Compiled frame)
  - 
 org.apache.http.impl.client.DefaultRequestDirector.execute(org.apache.http.HttpHost,
  org.apache.http.HttpRequest, org.apache.http.protocol.HttpContext) @bci=259, 
 line=456 (Compiled frame)
  - 
 org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.HttpHost,
  org.apache.http.HttpRequest, org.apache.http.protocol.HttpContext) @bci=344, 
 line=906 (Compiled frame)
  - 
 org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.client.methods.HttpUriRequest,
  org.apache.http.protocol.HttpContext) @bci=21, line=805 (Compiled frame)
  - 
 org.apache.http.impl.client.AbstractHttpClient.execute(org.apache.http.client.methods.HttpUriRequest)
  @bci=6, line=784 (Compiled frame)
  - 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(org.apache.solr.client.solrj.SolrRequest,
  org.apache.solr.client.solrj.ResponseParser) @bci=1175, line=395 
 (Interpreted frame)
  - 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(org.apache.solr.client.solrj.SolrRequest)
  @bci=17, line=199 (Compiled frame)
  - 
 org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(org.apache.solr.client.solrj.impl.LBHttpSolrServer$Req)
  @bci=132, line=285 (Interpreted frame)
  - 
 org.apache.solr.handler.component.HttpShardHandlerFactory.makeLoadBalancedRequest(org.apache.solr.client.solrj.request.QueryRequest,
  java.util.List) @bci=13, line=214 (Compiled frame)
  - org.apache.solr.handler.component.HttpShardHandler$1.call() @bci=246, 
 line=161 (Compiled frame)
  - org.apache.solr.handler.component.HttpShardHandler$1.call() @bci=1, 
 line=118 (Interpreted frame)
  - java.util.concurrent.FutureTask$Sync.innerRun() @bci=29, line=334 
 (Interpreted frame)
  - java.util.concurrent.FutureTask.run() @bci=4, line=166 (Compiled frame)
  - 
