[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+140) - Build # 2200 - Unstable!

2016-11-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2200/
Java: 64bit/jdk-9-ea+140 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Expected 2 of 3 replicas to be active but only found 1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica3","base_url":"http://127.0.0.1:36909/etiyf/qy","node_name":"127.0.0.1:36909_etiyf%2Fqy","state":"active","leader":"true"}];
 clusterState: 
DocCollection(c8n_1x3_lf//collections/c8n_1x3_lf/state.json/17)={
  "replicationFactor":"3",
  "shards":{"shard1":{
      "range":"80000000-7fffffff",
      "state":"active",
      "replicas":{
        "core_node1":{
          "state":"down",
          "base_url":"http://127.0.0.1:44877/etiyf/qy",
          "core":"c8n_1x3_lf_shard1_replica2",
          "node_name":"127.0.0.1:44877_etiyf%2Fqy"},
        "core_node2":{
          "core":"c8n_1x3_lf_shard1_replica1",
          "base_url":"http://127.0.0.1:46577/etiyf/qy",
          "node_name":"127.0.0.1:46577_etiyf%2Fqy",
          "state":"down"},
        "core_node3":{
          "core":"c8n_1x3_lf_shard1_replica3",
          "base_url":"http://127.0.0.1:36909/etiyf/qy",
          "node_name":"127.0.0.1:36909_etiyf%2Fqy",
          "state":"active",
          "leader":"true"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected 2 of 3 replicas to be active but only found 1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica3","base_url":"http://127.0.0.1:36909/etiyf/qy","node_name":"127.0.0.1:36909_etiyf%2Fqy","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf//collections/c8n_1x3_lf/state.json/17)={
  "replicationFactor":"3",
  "shards":{"shard1":{
      "range":"80000000-7fffffff",
      "state":"active",
      "replicas":{
        "core_node1":{
          "state":"down",
          "base_url":"http://127.0.0.1:44877/etiyf/qy",
          "core":"c8n_1x3_lf_shard1_replica2",
          "node_name":"127.0.0.1:44877_etiyf%2Fqy"},
        "core_node2":{
          "core":"c8n_1x3_lf_shard1_replica1",
          "base_url":"http://127.0.0.1:46577/etiyf/qy",
          "node_name":"127.0.0.1:46577_etiyf%2Fqy",
          "state":"down"},
        "core_node3":{
          "core":"c8n_1x3_lf_shard1_replica3",
          "base_url":"http://127.0.0.1:36909/etiyf/qy",
          "node_name":"127.0.0.1:36909_etiyf%2Fqy",
          "state":"active",
          "leader":"true"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
at __randomizedtesting.SeedInfo.seed([3AA3AC2B2397CE57:B2F793F18D6BA3AF]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:168)
at org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:55)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native Method)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:535)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 494 - Still Unstable!

2016-11-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/494/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HdfsRecoveryZkTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [HdfsTransactionLog] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
  at 
org.apache.solr.update.HdfsTransactionLog.<init>(HdfsTransactionLog.java:130)  
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202)  at 
org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:137)  at 
org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:94)  at 
org.apache.solr.update.DirectUpdateHandler2.<init>(DirectUpdateHandler2.java:102)
  at sun.reflect.GeneratedConstructorAccessor171.newInstance(Unknown Source)  
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)  at 
org.apache.solr.core.SolrCore.createInstance(SolrCore.java:705)  at 
org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:767)  at 
org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1006)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:871)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:775)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:842)  at 
org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498)  at 
java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:745)  

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [HdfsTransactionLog]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException
at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
at org.apache.solr.update.HdfsTransactionLog.<init>(HdfsTransactionLog.java:130)
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202)
at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:137)
at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:94)
at org.apache.solr.update.DirectUpdateHandler2.<init>(DirectUpdateHandler2.java:102)
at sun.reflect.GeneratedConstructorAccessor171.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:705)
at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:767)
at org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1006)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:871)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:775)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:842)
at org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)


at __randomizedtesting.SeedInfo.seed([ECA3B775C23F2E82]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:260)
at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:870)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 

[jira] [Commented] (SOLR-8994) EmbeddedSolrServer does not provide the httpMethod to the handler

2016-11-16 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15673052#comment-15673052
 ] 

Mikhail Khludnev commented on SOLR-8994:


What exact problem is addressed here? Can we have a unit test that fails 
without this change? 

> EmbeddedSolrServer does not provide the httpMethod to the handler
> -
>
> Key: SOLR-8994
> URL: https://issues.apache.org/jira/browse/SOLR-8994
> Project: Solr
>  Issue Type: Bug
>Reporter: Nicolas Gavalda
>  Labels: embedded
> Attachments: SOLR-8994-EmbeddedSolrServer-httpMethod.patch
>
>
> The modification URIs of the schema API don't work when using an 
> EmbeddedSolrServer: the SchemaHandler verifies that modification requests are 
> POST, and the EmbeddedSolrServer doesn't transmit this information.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8327) SolrDispatchFilter is not caching new state format, which results in live fetch from ZK per request if node does not contain core from collection

2016-11-16 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15672990#comment-15672990
 ] 

Shalin Shekhar Mangar commented on SOLR-8327:
-

No, this is still very relevant. SOLR-9014 was about ensuring that we don't 
repeatedly fetch the collection state from ZK in the context of the same 
request -- and admittedly, we're not quite there but better than before. This 
issue is about not going to ZK to fetch collection state for different requests 
because SolrDispatchFilter has no caching for collection states like the SolrJ 
client does.
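For readers following along: the fix being asked for here is essentially the collectionStateCache that CloudSolrClient already has, applied on the node side. Below is a rough, self-contained sketch of that idea; the class and field names are invented for illustration, and this is not Solr's actual code.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

/**
 * Hypothetical sketch of a per-node collection-state cache: entries expire
 * after a TTL, so repeated requests for a collection this node does not host
 * stop triggering a live ZooKeeper fetch on every request.
 * Illustration only; not Solr's implementation.
 */
public class CollectionStateCache {
    private static final class Entry {
        final String state;      // stands in for a DocCollection snapshot
        final long fetchedAtMs;
        Entry(String state, long fetchedAtMs) {
            this.state = state;
            this.fetchedAtMs = fetchedAtMs;
        }
    }

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();
    private final long ttlMs;
    private final Function<String, String> liveFetch; // e.g. a ZK state lookup
    long liveFetches = 0; // counter exposed for illustration only

    public CollectionStateCache(long ttlMs, Function<String, String> liveFetch) {
        this.ttlMs = ttlMs;
        this.liveFetch = liveFetch;
    }

    public String get(String collection) {
        long now = System.currentTimeMillis();
        Entry e = cache.get(collection);
        if (e == null || now - e.fetchedAtMs > ttlMs) {
            liveFetches++;  // only goes to "ZK" on a miss or expired entry
            e = new Entry(liveFetch.apply(collection), now);
            cache.put(collection, e);
        }
        return e.state;
    }

    public static void main(String[] args) {
        CollectionStateCache c =
            new CollectionStateCache(60_000, name -> "state-of-" + name);
        for (int i = 0; i < 1000; i++) c.get("c8n_1x3_lf"); // 1000 requests...
        System.out.println(c.liveFetches);                  // ...one live fetch
    }
}
```

A real version would also need negative caching and per-key locking so that concurrent misses for the same collection do not stampede ZooKeeper, but the shape is the same.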

> SolrDispatchFilter is not caching new state format, which results in live 
> fetch from ZK per request if node does not contain core from collection
> -
>
> Key: SOLR-8327
> URL: https://issues.apache.org/jira/browse/SOLR-8327
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Jessica Cheng Mallet
>Assignee: Varun Thacker
>  Labels: solrcloud
> Attachments: SOLR-8327.patch
>
>
> While perf testing with non-solrj client (request can be sent to any solr 
> node), we noticed a huge amount of data from Zookeeper in our tcpdump (~1G 
> for 20 second dump). From the thread dump, we noticed this:
> java.lang.Object.wait (Native Method)
> java.lang.Object.wait (Object.java:503)
> org.apache.zookeeper.ClientCnxn.submitRequest (ClientCnxn.java:1309)
> org.apache.zookeeper.ZooKeeper.getData (ZooKeeper.java:1152)
> org.apache.solr.common.cloud.SolrZkClient$7.execute (SolrZkClient.java:345)
> org.apache.solr.common.cloud.SolrZkClient$7.execute (SolrZkClient.java:342)
> org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation 
> (ZkCmdExecutor.java:61)
> org.apache.solr.common.cloud.SolrZkClient.getData (SolrZkClient.java:342)
> org.apache.solr.common.cloud.ZkStateReader.getCollectionLive 
> (ZkStateReader.java:841)
> org.apache.solr.common.cloud.ZkStateReader$7.get (ZkStateReader.java:515)
> org.apache.solr.common.cloud.ClusterState.getCollectionOrNull 
> (ClusterState.java:175)
> org.apache.solr.common.cloud.ClusterState.getLeader (ClusterState.java:98)
> org.apache.solr.servlet.HttpSolrCall.getCoreByCollection 
> (HttpSolrCall.java:784)
> org.apache.solr.servlet.HttpSolrCall.init (HttpSolrCall.java:272)
> org.apache.solr.servlet.HttpSolrCall.call (HttpSolrCall.java:417)
> org.apache.solr.servlet.SolrDispatchFilter.doFilter 
> (SolrDispatchFilter.java:210)
> org.apache.solr.servlet.SolrDispatchFilter.doFilter 
> (SolrDispatchFilter.java:179)
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter 
> (ServletHandler.java:1652)
> org.eclipse.jetty.servlet.ServletHandler.doHandle (ServletHandler.java:585)
> org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:143)
> org.eclipse.jetty.security.SecurityHandler.handle (SecurityHandler.java:577)
> org.eclipse.jetty.server.session.SessionHandler.doHandle 
> (SessionHandler.java:223)
> org.eclipse.jetty.server.handler.ContextHandler.doHandle 
> (ContextHandler.java:1127)
> org.eclipse.jetty.servlet.ServletHandler.doScope (ServletHandler.java:515)
> org.eclipse.jetty.server.session.SessionHandler.doScope 
> (SessionHandler.java:185)
> org.eclipse.jetty.server.handler.ContextHandler.doScope 
> (ContextHandler.java:1061)
> org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:141)
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle 
> (ContextHandlerCollection.java:215)
> org.eclipse.jetty.server.handler.HandlerCollection.handle 
> (HandlerCollection.java:110)
> org.eclipse.jetty.server.handler.HandlerWrapper.handle 
> (HandlerWrapper.java:97)
> org.eclipse.jetty.server.Server.handle (Server.java:499)
> org.eclipse.jetty.server.HttpChannel.handle (HttpChannel.java:310)
> org.eclipse.jetty.server.HttpConnection.onFillable (HttpConnection.java:257)
> org.eclipse.jetty.io.AbstractConnection$2.run (AbstractConnection.java:540)
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob 
> (QueuedThreadPool.java:635)
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run 
> (QueuedThreadPool.java:555)
> java.lang.Thread.run (Thread.java:745)
> Looks like SolrDispatchFilter doesn't have caching similar to the 
> collectionStateCache in CloudSolrClient, so if the node doesn't know about a 
> collection in the new state format, it just live-fetch it from Zookeeper on 
> every request.






[jira] [Comment Edited] (SOLR-9774) Delta indexing with child documents with help of cacheImpl="SortedMapBackedCache"

2016-11-16 Thread Aniket Khare (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15672976#comment-15672976
 ] 

Aniket Khare edited comment on SOLR-9774 at 11/17/16 6:55 AM:
--

I have already subscribed to the mailing list and posted the question to 
"solr-u...@lucene.apache.org", but did not get any reply.
Also, please note that the same configuration works for full-import and 
delta-import of existing data, but it does not work for a delta where a parent 
has no child documents and we add one. So I am not sure if this is a 
configuration issue.


was (Author: aniketish...@gmail.com):
I already have subscribed ti maining list and posted the question 
"solr-u...@lucene.apache.org", but did not get any reply.
Also please note that the same configuration i working for Full-import, 
delta-import for existing data but it is not working for the delta that ont 
have child documents and we add one.So, not sure if this is configuration issue.

> Delta indexing with child documents with help of 
> cacheImpl="SortedMapBackedCache"
> -
>
> Key: SOLR-9774
> URL: https://issues.apache.org/jira/browse/SOLR-9774
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - DataImportHandler, Data-driven Schema
>Affects Versions: 6.1
>Reporter: Aniket Khare
>  Labels: DIH, solr
>
> Hi,
> I am using solr DIH for indexing the Parent-Child relation data and using 
> cacheImpl="SortedMapBackedCache".
> For full data indexing I am using command clean="true", and for delta I am 
> using command full-import with clean="false".
> So the same queries are executed for full and delta imports, and indexing 
> works properly.
> The issue we are facing is where, for a particular parent document, there is 
> not a single child document and we are adding a new child document.
> Following are the steps to reproduce the issue.
> 1. Add a child document to an existing parent document which does not have 
> any child documents.
> 2. Once the child document is added with delta indexing, try to modify the 
> parent document and run delta indexing again.
> 3. After the delta indexing is completed, I can see the modified child 
> documents in the Solr DIH page in debug mode, but they are not getting 
> updated in the Solr collection.
> I am using a data config as below (the entity tags were truncated in 
> transit; only their caching attributes survive):
> cacheKey="id" cacheLookup="Parent.id" processor="SqlEntityProcessor" 
> cacheImpl="SortedMapBackedCache"
> cacheKey="PID" cacheLookup="Parent.id" processor="SqlEntityProcessor" 
> cacheImpl="SortedMapBackedCache" child="true"
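For context on the caching attributes used above: SortedMapBackedCache keeps each entity's rows in an in-memory sorted map keyed by cacheKey, and parent rows join against it via cacheLookup instead of issuing a per-parent SQL query. The sketch below is a rough, self-contained illustration of that idea only; it is not the actual DataImportHandler code, and the field names simply mirror the config above.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.SortedMap;
import java.util.TreeMap;

/**
 * Hypothetical sketch of a SortedMapBackedCache-style DIH entity cache:
 * child rows are loaded once, keyed by the cacheKey field, and each parent
 * looks up its children by key rather than re-querying the database.
 */
public class EntityCacheSketch {
    // cacheKey value -> all rows sharing that key
    private final SortedMap<String, List<Map<String, Object>>> cache = new TreeMap<>();

    /** Populate the cache, as DIH does on its first pass over the child query. */
    public void add(String cacheKeyField, Map<String, Object> row) {
        String key = String.valueOf(row.get(cacheKeyField));
        cache.computeIfAbsent(key, k -> new ArrayList<>()).add(row);
    }

    /** cacheLookup="Parent.id": fetch the cached child rows for one parent key. */
    public List<Map<String, Object>> lookup(String parentKey) {
        return cache.getOrDefault(parentKey, Collections.emptyList());
    }

    public static void main(String[] args) {
        EntityCacheSketch children = new EntityCacheSketch();
        Map<String, Object> child = new HashMap<>();
        child.put("PID", "parent-1");
        child.put("title", "child doc");
        children.add("PID", child);
        // a parent with no cached child rows simply gets an empty list,
        // consistent with the delta scenario described above
        System.out.println(children.lookup("parent-1").size()); // 1
        System.out.println(children.lookup("parent-2").size()); // 0
    }
}
```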






[jira] [Commented] (SOLR-9774) Delta indexing with child documents with help of cacheImpl="SortedMapBackedCache"

2016-11-16 Thread Aniket Khare (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15672976#comment-15672976
 ] 

Aniket Khare commented on SOLR-9774:


I have already subscribed to the mailing list and posted the question to 
"solr-u...@lucene.apache.org", but did not get any reply.
Also, please note that the same configuration works for full-import and 
delta-import of existing data, but it does not work for a delta where a parent 
has no child documents and we add one. So I am not sure if this is a 
configuration issue.

> Delta indexing with child documents with help of 
> cacheImpl="SortedMapBackedCache"
> -
>
> Key: SOLR-9774
> URL: https://issues.apache.org/jira/browse/SOLR-9774
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - DataImportHandler, Data-driven Schema
>Affects Versions: 6.1
>Reporter: Aniket Khare
>  Labels: DIH, solr
>
> Hi,
> I am using solr DIH for indexing the Parent-Child relation data and using 
> cacheImpl="SortedMapBackedCache".
> For full data indexing I am using command clean="true", and for delta I am 
> using command full-import with clean="false".
> So the same queries are executed for full and delta imports, and indexing 
> works properly.
> The issue we are facing is where, for a particular parent document, there is 
> not a single child document and we are adding a new child document.
> Following are the steps to reproduce the issue.
> 1. Add a child document to an existing parent document which does not have 
> any child documents.
> 2. Once the child document is added with delta indexing, try to modify the 
> parent document and run delta indexing again.
> 3. After the delta indexing is completed, I can see the modified child 
> documents in the Solr DIH page in debug mode, but they are not getting 
> updated in the Solr collection.
> I am using a data config as below (the entity tags were truncated in 
> transit; only their caching attributes survive):
> cacheKey="id" cacheLookup="Parent.id" processor="SqlEntityProcessor" 
> cacheImpl="SortedMapBackedCache"
> cacheKey="PID" cacheLookup="Parent.id" processor="SqlEntityProcessor" 
> cacheImpl="SortedMapBackedCache" child="true"






[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 950 - Unstable!

2016-11-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/950/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HdfsRecoveryZkTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [HdfsTransactionLog] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
  at 
org.apache.solr.update.HdfsTransactionLog.<init>(HdfsTransactionLog.java:130)  
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202)  at 
org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:137)  at 
org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:94)  at 
org.apache.solr.update.DirectUpdateHandler2.<init>(DirectUpdateHandler2.java:102)
  at sun.reflect.GeneratedConstructorAccessor183.newInstance(Unknown Source)  
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)  at 
org.apache.solr.core.SolrCore.createInstance(SolrCore.java:723)  at 
org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:785)  at 
org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1024)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:889)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:793)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:868)  at 
org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:517)  at 
java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:745)  

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [HdfsTransactionLog]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException
at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
at org.apache.solr.update.HdfsTransactionLog.<init>(HdfsTransactionLog.java:130)
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202)
at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:137)
at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:94)
at org.apache.solr.update.DirectUpdateHandler2.<init>(DirectUpdateHandler2.java:102)
at sun.reflect.GeneratedConstructorAccessor183.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:723)
at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:785)
at org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1024)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:889)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:793)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:868)
at org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:517)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)


at __randomizedtesting.SeedInfo.seed([B6D9641594DB7180]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:260)
at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:870)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 

[JENKINS] Lucene-Solr-Tests-master - Build # 1479 - Failure

2016-11-16 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1479/

No tests ran.

Build Log:
[...truncated 11034 lines...]
   [junit4] Suite: org.apache.solr.cloud.ShardSplitTest
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/build/solr-core/test/J2/temp/solr.cloud.ShardSplitTest_961EAAE602E40164-001/init-core-data-001
   [junit4]   2> 291237 INFO  
(SUITE-ShardSplitTest-seed#[961EAAE602E40164]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: 
@org.apache.solr.SolrTestCaseJ4$SuppressSSL(bugUrl=https://issues.apache.org/jira/browse/SOLR-5776)
   [junit4]   2> 291238 INFO  
(SUITE-ShardSplitTest-seed#[961EAAE602E40164]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 291249 INFO  
(TEST-ShardSplitTest.testSplitStaticIndexReplication-seed#[961EAAE602E40164]) [ 
   ] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 291253 INFO  (Thread-253) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 291253 INFO  (Thread-253) [] o.a.s.c.ZkTestServer Starting 
server
   [junit4]   2> 291639 INFO  
(TEST-ShardSplitTest.testSplitStaticIndexReplication-seed#[961EAAE602E40164]) [ 
   ] o.a.s.c.ZkTestServer start zk server on port:47321
   [junit4]   2> 291707 INFO  
(TEST-ShardSplitTest.testSplitStaticIndexReplication-seed#[961EAAE602E40164]) [ 
   ] o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml
 to /configs/conf1/solrconfig.xml
   [junit4]   2> 291709 INFO  
(TEST-ShardSplitTest.testSplitStaticIndexReplication-seed#[961EAAE602E40164]) [ 
   ] o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/test-files/solr/collection1/conf/schema15.xml
 to /configs/conf1/schema.xml
   [junit4]   2> 291711 INFO  
(TEST-ShardSplitTest.testSplitStaticIndexReplication-seed#[961EAAE602E40164]) [ 
   ] o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/test-files/solr/collection1/conf/solrconfig.snippet.randomindexconfig.xml
 to /configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2> 291712 INFO  
(TEST-ShardSplitTest.testSplitStaticIndexReplication-seed#[961EAAE602E40164]) [ 
   ] o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/test-files/solr/collection1/conf/stopwords.txt
 to /configs/conf1/stopwords.txt
   [junit4]   2> 291714 INFO  
(TEST-ShardSplitTest.testSplitStaticIndexReplication-seed#[961EAAE602E40164]) [ 
   ] o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/test-files/solr/collection1/conf/protwords.txt
 to /configs/conf1/protwords.txt
   [junit4]   2> 291715 INFO  
(TEST-ShardSplitTest.testSplitStaticIndexReplication-seed#[961EAAE602E40164]) [ 
   ] o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/test-files/solr/collection1/conf/currency.xml
 to /configs/conf1/currency.xml
   [junit4]   2> 291716 INFO  
(TEST-ShardSplitTest.testSplitStaticIndexReplication-seed#[961EAAE602E40164]) [ 
   ] o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/test-files/solr/collection1/conf/enumsConfig.xml
 to /configs/conf1/enumsConfig.xml
   [junit4]   2> 291718 INFO  
(TEST-ShardSplitTest.testSplitStaticIndexReplication-seed#[961EAAE602E40164]) [ 
   ] o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/test-files/solr/collection1/conf/open-exchange-rates.json
 to /configs/conf1/open-exchange-rates.json
   [junit4]   2> 291719 INFO  
(TEST-ShardSplitTest.testSplitStaticIndexReplication-seed#[961EAAE602E40164]) [ 
   ] o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/test-files/solr/collection1/conf/mapping-ISOLatin1Accent.txt
 to /configs/conf1/mapping-ISOLatin1Accent.txt
   [junit4]   2> 291721 INFO  
(TEST-ShardSplitTest.testSplitStaticIndexReplication-seed#[961EAAE602E40164]) [ 
   ] o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/test-files/solr/collection1/conf/old_synonyms.txt
 to /configs/conf1/old_synonyms.txt
   [junit4]   2> 291722 INFO  
(TEST-ShardSplitTest.testSplitStaticIndexReplication-seed#[961EAAE602E40164]) [ 
   ] o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/test-files/solr/collection1/conf/synonyms.txt
 to /configs/conf1/synonyms.txt
   [junit4]   2> 291884 INFO  
(TEST-ShardSplitTest.testSplitStaticIndexReplication-seed#[961EAAE602E40164]) [ 
   ] o.a.s.SolrTestCaseJ4 Writing core.properties file to 

[jira] [Commented] (LUCENE-7466) add axiomatic similarity

2016-11-16 Thread Peilin Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15672434#comment-15672434
 ] 

Peilin Yang commented on LUCENE-7466:
-

Thanks for pointing this out.

For the test cases: since all of the variations extend the base Axiomatic 
class and all the constructors are basically the same (except AxiomaticF3EXP, 
which needs a queryLen; that is why there is a QL test), I just picked F2EXP 
to test.

Does this make any sense to you?
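For readers not familiar with the model being discussed: the F2-EXP scoring function from the paper linked in the issue combines, as I recall it, a growth-style IDF with a saturating, length-normalized TF component. The snippet below illustrates that formula only; the parameter names s and k follow the paper and may not match the constructor signature of Lucene's AxiomaticF2EXP class.

```java
/**
 * Hedged sketch of the F2-EXP per-term score from the axiomatic retrieval
 * paper (Fang & Zhai, SIGIR'05) referenced in LUCENE-7466.
 * Illustration only, not Lucene's implementation.
 */
public class F2ExpSketch {
    static double termScore(double tf, double docLen, double avgDocLen,
                            double docFreq, double numDocs, double s, double k) {
        double idf = Math.pow((numDocs + 1) / docFreq, k);    // growth-style IDF
        double tfn = tf / (tf + s + s * docLen / avgDocLen);  // saturating TF with length norm
        return idf * tfn;
    }

    public static void main(String[] args) {
        // score grows with tf but saturates; longer documents are penalized
        double a = termScore(1, 100, 100, 10, 1000, 0.5, 0.35);
        double b = termScore(5, 100, 100, 10, 1000, 0.5, 0.35);
        double c = termScore(5, 400, 100, 10, 1000, 0.5, 0.35);
        System.out.println(a < b && c < b);
    }
}
```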

> add axiomatic similarity 
> -
>
> Key: LUCENE-7466
> URL: https://issues.apache.org/jira/browse/LUCENE-7466
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: master (7.0)
>Reporter: Peilin Yang
>Assignee: Tommaso Teofili
>  Labels: patch
>
> Add axiomatic similarity approaches to the similarity family.
> More details can be found at http://dl.acm.org/citation.cfm?id=1076116 and 
> https://www.eecis.udel.edu/~hfang/pubs/sigir05-axiom.pdf
> There are in total six similarity models. All of them are based on BM25, 
> Pivoted Document Length Normalization or Language Model with Dirichlet prior. 
> We think it is worthy to add the models as part of Lucene.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8327) SolrDispatchFilter is not caching new state format, which results in live fetch from ZK per request if node does not contain core from collection

2016-11-16 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672391#comment-15672391
 ] 

Erick Erickson commented on SOLR-8327:
--

[~shalinmangar] Does SOLR-9014 being fixed mean we can close this JIRA? I just 
happened across this while looking at something else and wondered.

> SolrDispatchFilter is not caching new state format, which results in live 
> fetch from ZK per request if node does not contain core from collection
> -
>
> Key: SOLR-8327
> URL: https://issues.apache.org/jira/browse/SOLR-8327
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Jessica Cheng Mallet
>Assignee: Varun Thacker
>  Labels: solrcloud
> Attachments: SOLR-8327.patch
>
>
> While perf testing with non-solrj client (request can be sent to any solr 
> node), we noticed a huge amount of data from Zookeeper in our tcpdump (~1G 
> for 20 second dump). From the thread dump, we noticed this:
> java.lang.Object.wait (Native Method)
> java.lang.Object.wait (Object.java:503)
> org.apache.zookeeper.ClientCnxn.submitRequest (ClientCnxn.java:1309)
> org.apache.zookeeper.ZooKeeper.getData (ZooKeeper.java:1152)
> org.apache.solr.common.cloud.SolrZkClient$7.execute (SolrZkClient.java:345)
> org.apache.solr.common.cloud.SolrZkClient$7.execute (SolrZkClient.java:342)
> org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation 
> (ZkCmdExecutor.java:61)
> org.apache.solr.common.cloud.SolrZkClient.getData (SolrZkClient.java:342)
> org.apache.solr.common.cloud.ZkStateReader.getCollectionLive 
> (ZkStateReader.java:841)
> org.apache.solr.common.cloud.ZkStateReader$7.get (ZkStateReader.java:515)
> org.apache.solr.common.cloud.ClusterState.getCollectionOrNull 
> (ClusterState.java:175)
> org.apache.solr.common.cloud.ClusterState.getLeader (ClusterState.java:98)
> org.apache.solr.servlet.HttpSolrCall.getCoreByCollection 
> (HttpSolrCall.java:784)
> org.apache.solr.servlet.HttpSolrCall.init (HttpSolrCall.java:272)
> org.apache.solr.servlet.HttpSolrCall.call (HttpSolrCall.java:417)
> org.apache.solr.servlet.SolrDispatchFilter.doFilter 
> (SolrDispatchFilter.java:210)
> org.apache.solr.servlet.SolrDispatchFilter.doFilter 
> (SolrDispatchFilter.java:179)
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter 
> (ServletHandler.java:1652)
> org.eclipse.jetty.servlet.ServletHandler.doHandle (ServletHandler.java:585)
> org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:143)
> org.eclipse.jetty.security.SecurityHandler.handle (SecurityHandler.java:577)
> org.eclipse.jetty.server.session.SessionHandler.doHandle 
> (SessionHandler.java:223)
> org.eclipse.jetty.server.handler.ContextHandler.doHandle 
> (ContextHandler.java:1127)
> org.eclipse.jetty.servlet.ServletHandler.doScope (ServletHandler.java:515)
> org.eclipse.jetty.server.session.SessionHandler.doScope 
> (SessionHandler.java:185)
> org.eclipse.jetty.server.handler.ContextHandler.doScope 
> (ContextHandler.java:1061)
> org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:141)
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle 
> (ContextHandlerCollection.java:215)
> org.eclipse.jetty.server.handler.HandlerCollection.handle 
> (HandlerCollection.java:110)
> org.eclipse.jetty.server.handler.HandlerWrapper.handle 
> (HandlerWrapper.java:97)
> org.eclipse.jetty.server.Server.handle (Server.java:499)
> org.eclipse.jetty.server.HttpChannel.handle (HttpChannel.java:310)
> org.eclipse.jetty.server.HttpConnection.onFillable (HttpConnection.java:257)
> org.eclipse.jetty.io.AbstractConnection$2.run (AbstractConnection.java:540)
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob 
> (QueuedThreadPool.java:635)
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run 
> (QueuedThreadPool.java:555)
> java.lang.Thread.run (Thread.java:745)
> Looks like SolrDispatchFilter doesn't have caching similar to the 
> collectionStateCache in CloudSolrClient, so if the node doesn't know about a 
> collection in the new state format, it just live-fetches it from Zookeeper on 
> every request.
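The missing-cache behavior described above can be illustrated with a small TTL cache sketch in plain Java. The class and names below are hypothetical; the real fix lives in Solr's ZkStateReader/CloudSolrClient caching, not in this code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.LongSupplier;

// Minimal sketch of a time-bounded state cache, illustrating the kind of
// collectionStateCache CloudSolrClient keeps. Names are hypothetical.
public class TtlCache<K, V> {
    private static final class Entry<V> {
        final V value;
        final long expiresAt;
        Entry(V value, long expiresAt) { this.value = value; this.expiresAt = expiresAt; }
    }

    private final Map<K, Entry<V>> map = new ConcurrentHashMap<>();
    private final long ttlMillis;
    private final LongSupplier clock; // injectable clock, handy for testing

    public TtlCache(long ttlMillis, LongSupplier clock) {
        this.ttlMillis = ttlMillis;
        this.clock = clock;
    }

    public void put(K key, V value) {
        map.put(key, new Entry<>(value, clock.getAsLong() + ttlMillis));
    }

    /** Returns null when absent or expired, i.e. when a live (ZK) fetch is needed. */
    public V get(K key) {
        Entry<V> e = map.get(key);
        if (e == null || e.expiresAt <= clock.getAsLong()) {
            map.remove(key);
            return null;
        }
        return e.value;
    }
}
```

With such a cache in front of ZooKeeper, only a cache miss or an expired entry triggers a live fetch, instead of one fetch per request.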






[JENKINS] Lucene-Solr-6.x-Windows (32bit/jdk1.8.0_102) - Build # 561 - Still Unstable!

2016-11-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/561/
Java: 32bit/jdk1.8.0_102 -server -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDynamicLoading

Error Message:
Could not get expected value  'X val' for path 'x' full output: {   
"responseHeader":{ "status":0, "QTime":0},   "params":{"wt":"json"},   
"context":{ "webapp":"/btin/n", "path":"/test1", 
"httpMethod":"GET"},   
"class":"org.apache.solr.core.BlobStoreTestRequestHandler",   "x":null},  from 
server:  null

Stack Trace:
java.lang.AssertionError: Could not get expected value  'X val' for path 'x' 
full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "params":{"wt":"json"},
  "context":{
"webapp":"/btin/n",
"path":"/test1",
"httpMethod":"GET"},
  "class":"org.apache.solr.core.BlobStoreTestRequestHandler",
  "x":null},  from server:  null
at 
__randomizedtesting.SeedInfo.seed([A32FADC9BE2E7A2E:7B62809E49F3DF8E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:535)
at 
org.apache.solr.core.TestDynamicLoading.testDynamicLoading(TestDynamicLoading.java:232)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-9779) Basic auth in not supported in Streaming Expressions

2016-11-16 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672191#comment-15672191
 ] 

Kevin Risden commented on SOLR-9779:


[~wiredcity11] - I was looking into this and was curious if this approach could 
work for you:

https://gist.github.com/risdenk/e39bd75321f5b6d0c338febac8b67bd4

This is an example of a BasicAuthSolrClientCache. You could use it like so:
{code}
StreamFactory factory = new 
StreamFactory().withDefaultZkHost(solrConfig.getConnectString());
StreamExpression expression = StreamExpressionParser.parse("search(" + 
COLLECTIONORALIAS + ", q=*:*, fl=\"id,a_s,a_i,a_f\", sort=\"a_f asc, a_i 
asc\")");

CloudSolrStream stream = new CloudSolrStream(expression, factory);
StreamContext streamContext = new StreamContext();
streamContext.setSolrClientCache(new BasicAuthSolrClientCache("solr", 
"SolrRocks"));
stream.setStreamContext(streamContext);

// Open and read stream...
{code}
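Under the hood, the Basic auth credentials such a cache carries boil down to a standard HTTP Authorization header. A plain-Java sketch of that header construction, independent of SolrJ (names are illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch of building the HTTP Basic Authorization header value that a
// BasicAuthSolrClientCache would ultimately attach to each request.
public class BasicAuthHeader {
    public static String value(String user, String password) {
        String creds = user + ":" + password;
        return "Basic " + Base64.getEncoder()
                .encodeToString(creds.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        System.out.println(value("solr", "SolrRocks"));
    }
}
```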

> Basic auth in not supported in Streaming Expressions
> 
>
> Key: SOLR-9779
> URL: https://issues.apache.org/jira/browse/SOLR-9779
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java, security
>Affects Versions: 6.0
>Reporter: Sandeep Mukherjee
>  Labels: features, security
> Fix For: 6.4
>
>
> I'm creating a StreamFactory object like the following code:
> {code}
> new StreamFactory().withDefaultZkHost(solrConfig.getConnectString())
> .withFunctionName("gatherNodes", GatherNodesStream.class);
> {code}
> However, once I create the StreamFactory, there is no way to set the 
> CloudSolrClient object that could be used to set Basic Auth headers.
> The StreamContext object offers a way to set the SolrClientCache object, 
> which keeps references to all the CloudSolrClients, where I could set a 
> reference to an HttpClient that sets the Basic Auth header. The problem is, 
> inside the SolrClientCache there is no way to set your own version of 
> CloudSolrClient with BasicAuth enabled. 
> I think we should expose a method in StreamContext where I can specify a 
> basic-auth enabled CloudSolrClient to use.






[jira] [Comment Edited] (SOLR-9779) Basic auth in not supported in Streaming Expressions

2016-11-16 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672191#comment-15672191
 ] 

Kevin Risden edited comment on SOLR-9779 at 11/17/16 12:23 AM:
---

[~wiredcity11] - I was looking into this and was curious if this approach could 
work for you:

https://gist.github.com/risdenk/e39bd75321f5b6d0c338febac8b67bd4

This is an example of a BasicAuthSolrClientCache. You could use it like so:
{code}
StreamFactory factory = new 
StreamFactory().withDefaultZkHost(solrConfig.getConnectString());
StreamExpression expression = StreamExpressionParser.parse("search(" + 
COLLECTIONORALIAS + ", q=*:*, fl=\"id,a_s,a_i,a_f\", sort=\"a_f asc, a_i 
asc\")");

CloudSolrStream stream = new CloudSolrStream(expression, factory);
StreamContext streamContext = new StreamContext();
streamContext.setSolrClientCache(new BasicAuthSolrClientCache("solr", 
"SolrRocks"));
stream.setStreamContext(streamContext);

// Open and read stream...
{code}


was (Author: risdenk):
[~wiredcity11] - I was looking into this and was curious if this approach could 
work for you:

https://gist.github.com/risdenk/e39bd75321f5b6d0c338febac8b67bd4

This is an example of a BasicAuthSolrClientCache. You could use it like so:
{code}
StreamFactory factory = new 
StreamFactory().withDefaultZkHost(solrConfig.getConnectString());
StreamExpression expression = StreamExpressionParser.parse("search(" + 
COLLECTIONORALIAS + ", q=*:*, fl=\"id,a_s,a_i,a_f\", sort=\"a_f asc, a_i 
asc\")");

CloudSolrStream stream = new CloudSolrStream(expression, factory);
StreamContext streamContext = new StreamContext();
streamContext.setSolrClientCache(new BasicAuthSolrClientCache("solr", 
"SolrRocks"));
stream.setStreamContext(streamContext);

// Open and reach stream...
{code}

> Basic auth in not supported in Streaming Expressions
> 
>
> Key: SOLR-9779
> URL: https://issues.apache.org/jira/browse/SOLR-9779
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java, security
>Affects Versions: 6.0
>Reporter: Sandeep Mukherjee
>  Labels: features, security
> Fix For: 6.4
>
>
> I'm creating a StreamFactory object like the following code:
> {code}
> new StreamFactory().withDefaultZkHost(solrConfig.getConnectString())
> .withFunctionName("gatherNodes", GatherNodesStream.class);
> {code}
> However, once I create the StreamFactory, there is no way to set the 
> CloudSolrClient object that could be used to set Basic Auth headers.
> The StreamContext object offers a way to set the SolrClientCache object, 
> which keeps references to all the CloudSolrClients, where I could set a 
> reference to an HttpClient that sets the Basic Auth header. The problem is, 
> inside the SolrClientCache there is no way to set your own version of 
> CloudSolrClient with BasicAuth enabled. 
> I think we should expose a method in StreamContext where I can specify a 
> basic-auth enabled CloudSolrClient to use.






[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_102) - Build # 6225 - Still Unstable!

2016-11-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6225/
Java: 32bit/jdk1.8.0_102 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.handler.admin.CoreAdminHandlerTest.testDeleteInstanceDirAfterCreateFailure

Error Message:
The data directory was not cleaned up on unload after a failed core reload

Stack Trace:
java.lang.AssertionError: The data directory was not cleaned up on unload after 
a failed core reload
at 
__randomizedtesting.SeedInfo.seed([2424B79273EC41DE:5FED15DE50CE93DF]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.handler.admin.CoreAdminHandlerTest.testDeleteInstanceDirAfterCreateFailure(CoreAdminHandlerTest.java:334)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11784 lines...]
   [junit4] Suite: org.apache.solr.handler.admin.CoreAdminHandlerTest
   [junit4]   2> Creating 

[jira] [Resolved] (SOLR-9609) Change hard-coded keysize from 512 to 1024

2016-11-16 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-9609.
--
   Resolution: Fixed
Fix Version/s: 6.4
   trunk

> Change hard-coded keysize from 512 to 1024
> --
>
> Key: SOLR-9609
> URL: https://issues.apache.org/jira/browse/SOLR-9609
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Jeremy Martini
>Assignee: Erick Erickson
> Fix For: trunk, 6.4
>
> Attachments: SOLR-9609.patch, SOLR-9609.patch, SOLR-9609.patch, 
> SOLR-9609.patch, solr.log
>
>
> In order to configure our dataSource without requiring a plaintext password 
> in the configuration file, we extended JdbcDataSource to create our own 
> custom implementation. Our dataSource config now looks something like this:
> {code:xml}
> <dataSource url="jdbc:oracle:thin:@db-host-machine:1521:tst1" user="testuser" 
> password="{ENC}{1.1}1ePOfWcbOIU056gKiLTrLw=="/>
> {code}
> We are using the RSA JSAFE Crypto-J libraries for encrypting/decrypting the 
> password. However, this seems to cause an issue when we try to use Solr in a 
> Cloud Configuration (using Zookeeper). The error is "Strong key gen and 
> multiprime gen require at least 1024-bit keysize." Full log attached.
> This seems to be due to the hard-coded value of 512 in the 
> org.apache.solr.util.CryptoKeys$RSAKeyPair class:
> {code:java}
> public RSAKeyPair() {
>   KeyPairGenerator keyGen = null;
>   try {
> keyGen = KeyPairGenerator.getInstance("RSA");
>   } catch (NoSuchAlgorithmException e) {
> throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, e);
>   }
>   keyGen.initialize(512);
> {code}
> I pulled down the Solr code, changed the hard-coded value to 1024, rebuilt 
> it, and now everything seems to work great.
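The reporter's 1024-bit change can be checked with a self-contained snippet using only java.security. This is a sketch mirroring the shape of CryptoKeys$RSAKeyPair, not the actual patch:

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.NoSuchAlgorithmException;
import java.security.interfaces.RSAPublicKey;

// Sketch mirroring CryptoKeys$RSAKeyPair with the proposed 1024-bit keysize.
public class KeySizeDemo {
    public static KeyPair generate(int bits) {
        try {
            KeyPairGenerator keyGen = KeyPairGenerator.getInstance("RSA");
            keyGen.initialize(bits); // was hard-coded to 512 in CryptoKeys
            return keyGen.generateKeyPair();
        } catch (NoSuchAlgorithmException e) {
            // RSA is guaranteed to be present in every JDK
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        RSAPublicKey pub = (RSAPublicKey) generate(1024).getPublic();
        System.out.println("modulus bits: " + pub.getModulus().bitLength());
    }
}
```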






[jira] [Resolved] (LUCENE-7564) AnalyzingInfixSuggester should close its IndexWriter by default at the end of build()

2016-11-16 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved LUCENE-7564.

   Resolution: Fixed
 Assignee: Steve Rowe
Fix Version/s: 6.4
   master (7.0)
Lucene Fields:   (was: New)

Thanks [~mikemccand] for the review.


> AnalyzingInfixSuggester should close its IndexWriter by default at the end of 
> build()
> -
>
> Key: LUCENE-7564
> URL: https://issues.apache.org/jira/browse/LUCENE-7564
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Steve Rowe
>Assignee: Steve Rowe
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7564.patch, LUCENE-7564.patch
>
>
> From SOLR-6246, where AnalyzingInfixSuggester's write lock on its index is 
> causing trouble when reloading a Solr core:
> [~gsingers] wrote:
> bq. One suggestion that might minimize the impact: close the writer after 
> build
> [~varunthacker] wrote:
> {quote}
> This is what I am thinking -
> Create a Lucene issue in which {{AnalyzingInfixSuggester#build}} closes the 
> writer by default at the end.
> The {{add}} and {{update}} methods call {{ensureOpen}} and those who do 
> frequent real time updates directly via lucene won't see any slowdowns.
> [~mikemccand] - Would this approach have any major drawback from Lucene's 
> perspective? Else I can go ahead and tackle this in a Lucene issue
> {quote}
> [~mikemccand] wrote:
> bq. Fixing {{AnalyzingInfixSuggester}} to close the writer at the end of 
> build seems reasonable?
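The close-after-build plus ensureOpen pattern under discussion can be sketched in plain Java, independent of Lucene. The class and method names below are illustrative, not the actual AnalyzingInfixSuggester API:

```java
// Sketch of the proposed pattern: build() closes the writer to release the
// index write lock; add() lazily reopens via ensureOpen(). Names are
// illustrative, not the actual AnalyzingInfixSuggester API.
public class LazyWriterSuggester {
    private StringBuilder writer; // stand-in for an IndexWriter
    private int entries;

    public void build(Iterable<String> inputs) {
        ensureOpen();
        for (String s : inputs) {
            writer.append(s).append('\n');
            entries++;
        }
        close(); // release the "write lock" when the bulk build finishes
    }

    public void add(String input) {
        ensureOpen(); // frequent updaters pay only a one-time reopen cost
        writer.append(input).append('\n');
        entries++;
    }

    private void ensureOpen() {
        if (writer == null) writer = new StringBuilder();
    }

    public void close() { writer = null; }

    public boolean isOpen() { return writer != null; }

    public int size() { return entries; }
}
```

The point of the pattern: callers that only build() never hold the lock afterward, while callers that stream updates reopen transparently on the first add().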






[jira] [Issue Comment Deleted] (SOLR-9606) Remove GeoHashField from Solr

2016-11-16 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-9606:
-
Comment: was deleted

(was: Commit e402a304bf97ead8c2a7f00a745e837fe0c6d449 in lucene-solr's branch 
refs/heads/master from [~erickerickson]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e402a30 ]

SOLR-9606: Change hard-coded keysize from 512 to 1024
)

> Remove GeoHashField from Solr
> -
>
> Key: SOLR-9606
> URL: https://issues.apache.org/jira/browse/SOLR-9606
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spatial
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: master (7.0)
>
>
> I'd like to remove GeoHashField from Solr -- 7.0.  I see no use-case for it 
> any more.  And its presence is distracting from spatial fields you *should* 
> be using.






[jira] [Issue Comment Deleted] (SOLR-9606) Remove GeoHashField from Solr

2016-11-16 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-9606:
-
Comment: was deleted

(was: Commit 8bd4ad36c5297cfd2c39be807a7f099cda4ec13e in lucene-solr's branch 
refs/heads/branch_6x from [~erickerickson]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8bd4ad3 ]

SOLR-9606: Change hard-coded keysize from 512 to 1024
(cherry picked from commit e402a30)
)

> Remove GeoHashField from Solr
> -
>
> Key: SOLR-9606
> URL: https://issues.apache.org/jira/browse/SOLR-9606
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spatial
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: master (7.0)
>
>
> I'd like to remove GeoHashField from Solr -- 7.0.  I see no use-case for it 
> any more.  And its presence is distracting from spatial fields you *should* 
> be using.






[jira] [Commented] (SOLR-9606) Remove GeoHashField from Solr

2016-11-16 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672018#comment-15672018
 ] 

Erick Erickson commented on SOLR-9606:
--

Yep, just noticed that when I went to close 9609. I'm batting 1,000 on 
committing lately.

> Remove GeoHashField from Solr
> -
>
> Key: SOLR-9606
> URL: https://issues.apache.org/jira/browse/SOLR-9606
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spatial
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: master (7.0)
>
>
> I'd like to remove GeoHashField from Solr -- 7.0.  I see no use-case for it 
> any more.  And its presence is distracting from spatial fields you *should* 
> be using.






[jira] [Commented] (SOLR-9609) Change hard-coded keysize from 512 to 1024

2016-11-16 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672014#comment-15672014
 ] 

Erick Erickson commented on SOLR-9609:
--

Messed up the JIRA number (typed 9606 instead of 9609); here are the commits:

Commit e402a304bf97ead8c2a7f00a745e837fe0c6d449 in lucene-solr's branch 
refs/heads/master from Erick Erickson
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e402a30 ]

SOLR-9606: Change hard-coded keysize from 512 to 1024


Commit 8bd4ad36c5297cfd2c39be807a7f099cda4ec13e in lucene-solr's branch 
refs/heads/branch_6x from Erick Erickson
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8bd4ad3 ]

SOLR-9606: Change hard-coded keysize from 512 to 1024
(cherry picked from commit e402a30)



> Change hard-coded keysize from 512 to 1024
> --
>
> Key: SOLR-9609
> URL: https://issues.apache.org/jira/browse/SOLR-9609
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Jeremy Martini
>Assignee: Erick Erickson
> Attachments: SOLR-9609.patch, SOLR-9609.patch, SOLR-9609.patch, 
> SOLR-9609.patch, solr.log
>
>
> In order to configure our dataSource without requiring a plaintext password 
> in the configuration file, we extended JdbcDataSource to create our own 
> custom implementation. Our dataSource config now looks something like this:
> {code:xml}
> <dataSource url="jdbc:oracle:thin:@db-host-machine:1521:tst1" user="testuser" 
> password="{ENC}{1.1}1ePOfWcbOIU056gKiLTrLw=="/>
> {code}
> We are using the RSA JSAFE Crypto-J libraries for encrypting/decrypting the 
> password. However, this seems to cause an issue when we try to use Solr in a 
> Cloud Configuration (using Zookeeper). The error is "Strong key gen and 
> multiprime gen require at least 1024-bit keysize." Full log attached.
> This seems to be due to the hard-coded value of 512 in the 
> org.apache.solr.util.CryptoKeys$RSAKeyPair class:
> {code:java}
> public RSAKeyPair() {
>   KeyPairGenerator keyGen = null;
>   try {
> keyGen = KeyPairGenerator.getInstance("RSA");
>   } catch (NoSuchAlgorithmException e) {
> throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, e);
>   }
>   keyGen.initialize(512);
> {code}
> I pulled down the Solr code, changed the hard-coded value to 1024, rebuilt 
> it, and now everything seems to work great.






[jira] [Commented] (LUCENE-7564) AnalyzingInfixSuggester should close its IndexWriter by default at the end of build()

2016-11-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672010#comment-15672010
 ] 

ASF subversion and git services commented on LUCENE-7564:
-

Commit f9a0693bf98a4000b6568e7c63f3e303118470bd in lucene-solr's branch 
refs/heads/master from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f9a0693 ]

LUCENE-7564: AnalyzingInfixSuggester should close its IndexWriter by default at 
the end of build()


> AnalyzingInfixSuggester should close its IndexWriter by default at the end of 
> build()
> -
>
> Key: LUCENE-7564
> URL: https://issues.apache.org/jira/browse/LUCENE-7564
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Steve Rowe
> Attachments: LUCENE-7564.patch, LUCENE-7564.patch
>
>
> From SOLR-6246, where AnalyzingInfixSuggester's write lock on its index is 
> causing trouble when reloading a Solr core:
> [~gsingers] wrote:
> bq. One suggestion that might minimize the impact: close the writer after 
> build
> [~varunthacker] wrote:
> {quote}
> This is what I am thinking -
> Create a Lucene issue in which {{AnalyzingInfixSuggester#build}} closes the 
> writer by default at the end.
> The {{add}} and {{update}} methods call {{ensureOpen}} and those who do 
> frequent real time updates directly via lucene won't see any slowdowns.
> [~mikemccand] - Would this approach have any major drawback from Lucene's 
> perspective? Else I can go ahead and tackle this in a Lucene issue
> {quote}
> [~mikemccand] wrote:
> bq. Fixing {{AnalyzingInfixSuggester}} to close the writer at the end of 
> build seems reasonable?
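The lifecycle being proposed can be sketched as a standalone toy class (plain Java, NOT Lucene's actual implementation; `SuggesterSketch` and `Writer` are illustrative names): build() closes its writer when it finishes, releasing the write lock, while add() calls ensureOpen() to reopen it lazily, so callers doing frequent real-time updates pay no extra cost.

```java
import java.util.ArrayList;
import java.util.List;

// Standalone sketch of the proposed lifecycle -- NOT Lucene's actual code.
// A real AnalyzingInfixSuggester persists to a Directory; this toy class
// only models the writer's open/closed state.
class SuggesterSketch {

    /** Stand-in for the suggester's IndexWriter. */
    static class Writer {
        boolean open = true;
        final List<String> docs = new ArrayList<>();
        void add(String doc) { docs.add(doc); }
        void close() { open = false; }
    }

    private Writer writer;

    /** Lazily (re)opens the writer, as add()/update() already do. */
    private synchronized void ensureOpen() {
        if (writer == null || !writer.open) {
            writer = new Writer();
        }
    }

    /** Proposed change: close the writer (release the lock) after build(). */
    public void build(Iterable<String> inputs) {
        ensureOpen();
        for (String s : inputs) {
            writer.add(s);
        }
        writer.close();
    }

    /** Unchanged: a later add() transparently reopens the writer. */
    public void add(String input) {
        ensureOpen();
        writer.add(input);
    }

    public boolean writerIsOpen() {
        return writer != null && writer.open;
    }
}
```

After build() returns, the write lock is no longer held (the situation that was causing trouble when reloading a Solr core in SOLR-6246); the first subsequent add() transparently reopens the writer.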






[jira] [Commented] (LUCENE-7564) AnalyzingInfixSuggester should close its IndexWriter by default at the end of build()

2016-11-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672009#comment-15672009
 ] 

ASF subversion and git services commented on LUCENE-7564:
-

Commit 4fedb640ab66920ea11c26ad520639912c95ff2c in lucene-solr's branch 
refs/heads/branch_6x from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4fedb64 ]

LUCENE-7564: AnalyzingInfixSuggester should close its IndexWriter by default at 
the end of build()








[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_102) - Build # 2197 - Still Unstable!

2016-11-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2197/
Java: 64bit/jdk1.8.0_102 -XX:+UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.codecs.asserting.TestAssertingDocValuesFormat

Error Message:
Suite timeout exceeded (>= 7200000 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 7200000 msec).
at __randomizedtesting.SeedInfo.seed([9CFA98DB3EC0F5F0]:0)


FAILED:  
org.apache.lucene.codecs.asserting.TestAssertingDocValuesFormat.testZeroOrMin

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([9CFA98DB3EC0F5F0]:0)




Build Log:
[...truncated 2053 lines...]
   [junit4] Suite: 
org.apache.lucene.codecs.asserting.TestAssertingDocValuesFormat
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestAssertingDocValuesFormat -Dtests.method=testZeroOrMin 
-Dtests.seed=9CFA98DB3EC0F5F0 -Dtests.multiplier=3 -Dtests.slow=true 
-Dtests.locale=vi-VN -Dtests.timezone=America/Antigua -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII
   [junit4] ERROR   7171s J2 | TestAssertingDocValuesFormat.testZeroOrMin <<<
   [junit4]> Throwable #1: java.lang.Exception: Test abandoned because 
suite timeout was reached.
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([9CFA98DB3EC0F5F0]:0)
   [junit4]   2> NOTE: leaving temporary files on disk at: 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene/build/test-framework/test/J2/temp/lucene.codecs.asserting.TestAssertingDocValuesFormat_9CFA98DB3EC0F5F0-001
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): {}, 
docValues:{}, maxPointsInLeafNode=353, maxMBSortInHeap=7.4011549977882725, 
sim=RandomSimilarity(queryNorm=false,coord=no): {fieldname=LM 
Jelinek-Mercer(0.70), docId=LM Jelinek-Mercer(0.70), id=DFR G3(800.0)}, 
locale=vi-VN, timezone=America/Antigua
   [junit4]   2> NOTE: Linux 4.4.0-47-generic amd64/Oracle Corporation 
1.8.0_102 (64-bit)/cpus=12,threads=1,free=452024808,total=508887040
   [junit4]   2> NOTE: All tests run in this JVM: [TestLookaheadTokenFilter, 
TestExtrasFS, NestedSetupChain, NestedTeardownChain, TestLeakFS, Nested, 
TestAssertingLeafReader, Before3, Before3, TestGraphTokenizers, 
TestMockAnalyzer, Nested, Nested2, TestMockDirectoryWrapper, 
TestMockSynonymFilter, TestAssertingDocValuesFormat]
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestAssertingDocValuesFormat -Dtests.seed=9CFA98DB3EC0F5F0 
-Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=vi-VN 
-Dtests.timezone=America/Antigua -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII
   [junit4] ERROR   0.00s J2 | TestAssertingDocValuesFormat (suite) <<<
   [junit4]> Throwable #1: java.lang.Exception: Suite timeout exceeded (>= 
7200000 msec).
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([9CFA98DB3EC0F5F0]:0)
   [junit4] Completed [42/42 (1!)] on J2 in 7220.06s, 87 tests, 2 errors <<< 
FAILURES!

[...truncated 71547 lines...]



[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1155 - Still Unstable

2016-11-16 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1155/

5 tests failed.
FAILED:  
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testUpdateLogSynchronisation

Error Message:
Timeout waiting for CDCR replication to complete @source_collection:shard1

Stack Trace:
java.lang.RuntimeException: Timeout waiting for CDCR replication to complete 
@source_collection:shard1
at 
__randomizedtesting.SeedInfo.seed([DB1913FFF5D99424:25764B5C37F9B735]:0)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.waitForReplicationToComplete(BaseCdcrDistributedZkTest.java:795)
at 
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testUpdateLogSynchronisation(CdcrReplicationDistributedZkTest.java:376)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Closed] (SOLR-9780) Solr managed schema should allow specifying the uniqueKey.

2016-11-16 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley closed SOLR-9780.
--
Resolution: Duplicate

> Solr managed schema should allow specifying the uniqueKey.
> --
>
> Key: SOLR-9780
> URL: https://issues.apache.org/jira/browse/SOLR-9780
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>
> The uniqueKey appears to not be configurable in the REST API for modifying 
> the schema.  It's kind of an odd omission; even long-time deprecated stuff 
> (e.g. query parser default operator) is configurable.






[jira] [Commented] (SOLR-9606) Remove GeoHashField from Solr

2016-11-16 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671789#comment-15671789
 ] 

David Smiley commented on SOLR-9606:


Wrong issue [~erickerickson]

> Remove GeoHashField from Solr
> -
>
> Key: SOLR-9606
> URL: https://issues.apache.org/jira/browse/SOLR-9606
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spatial
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: master (7.0)
>
>
> I'd like to remove GeoHashField from Solr -- 7.0.  I see no use-case for it 
> any more.  And its presence is distracting from spatial fields you *should* 
> be using.






[jira] [Commented] (LUCENE-7564) AnalyzingInfixSuggester should close its IndexWriter by default at the end of build()

2016-11-16 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671791#comment-15671791
 ] 

Michael McCandless commented on LUCENE-7564:


+1, thanks [~steve_rowe].







[jira] [Commented] (SOLR-9780) Solr managed schema should allow specifying the uniqueKey.

2016-11-16 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671759#comment-15671759
 ] 

Steve Rowe commented on SOLR-9780:
--

This is a subset of SOLR-7242.







[jira] [Commented] (SOLR-9606) Remove GeoHashField from Solr

2016-11-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671754#comment-15671754
 ] 

ASF subversion and git services commented on SOLR-9606:
---

Commit 8bd4ad36c5297cfd2c39be807a7f099cda4ec13e in lucene-solr's branch 
refs/heads/branch_6x from [~erickerickson]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8bd4ad3 ]

SOLR-9606: Change hard-coded keysize from 512 to 1024
(cherry picked from commit e402a30)








[jira] [Created] (SOLR-9780) Solr managed schema should allow specifying the uniqueKey.

2016-11-16 Thread David Smiley (JIRA)
David Smiley created SOLR-9780:
--

 Summary: Solr managed schema should allow specifying the uniqueKey.
 Key: SOLR-9780
 URL: https://issues.apache.org/jira/browse/SOLR-9780
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: David Smiley


The uniqueKey appears to not be configurable in the REST API for modifying the 
schema.  It's kind of an odd omission; even long-time deprecated stuff (e.g. 
query parser default operator) is configurable.






[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 493 - Unstable!

2016-11-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/493/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

4 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithDelegationTokens

Error Message:
java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 
127.0.0.1:50128 within 3 ms

Stack Trace:
org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: 
Could not connect to ZooKeeper 127.0.0.1:50128 within 3 ms
at __randomizedtesting.SeedInfo.seed([6171793A663808C]:0)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:182)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:116)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:111)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:98)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.waitForAllNodes(MiniSolrCloudCluster.java:242)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.(MiniSolrCloudCluster.java:236)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.(MiniSolrCloudCluster.java:121)
at 
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.startup(TestSolrCloudWithDelegationTokens.java:72)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:847)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.TimeoutException: Could not connect to 
ZooKeeper 127.0.0.1:50128 within 3 ms
at 
org.apache.solr.common.cloud.ConnectionManager.waitForConnected(ConnectionManager.java:233)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:174)
... 31 more


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithDelegationTokens

Error Message:


Stack Trace:
java.lang.NullPointerException
at __randomizedtesting.SeedInfo.seed([6171793A663808C]:0)
at 
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.shutdown(TestSolrCloudWithDelegationTokens.java:89)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:870)
at 

[jira] [Commented] (SOLR-9779) Basic auth is not supported in Streaming Expressions

2016-11-16 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671728#comment-15671728
 ] 

Kevin Risden commented on SOLR-9779:


It would be nice to get this into Solr 6.4, but we will have to see. I'll see 
if I can work on it a bit.

> Basic auth is not supported in Streaming Expressions
> 
>
> Key: SOLR-9779
> URL: https://issues.apache.org/jira/browse/SOLR-9779
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java, security
>Affects Versions: 6.0
>Reporter: Sandeep Mukherjee
>  Labels: features, security
> Fix For: 6.4
>
>
> I'm creating a StreamFactory object like the following code:
> {code}
> new StreamFactory().withDefaultZkHost(solrConfig.getConnectString())
> .withFunctionName("gatherNodes", GatherNodesStream.class);
> {code}
> However, once I create the StreamFactory there is no way to set the 
> CloudSolrClient object, which could be used to set Basic Auth headers.
> In the StreamContext object there is a way to set the SolrClientCache object, 
> which keeps references to all the CloudSolrClient instances, where I could 
> set an HttpClient that adds the Basic Auth header. The problem is that inside 
> the SolrClientCache there is no way to supply your own CloudSolrClient with 
> Basic Auth enabled. 
> I think we should expose a method in StreamContext where I can specify a 
> basic-auth-enabled CloudSolrClient to use.
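The hook being requested can be sketched as a small standalone class (hypothetical names such as `ClientCacheSketch` and `getCloudClient`, NOT Solr's actual SolrClientCache API): give the cache a pluggable factory so the caller can inject clients pre-configured with a Basic Auth header.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch of the requested extension point -- NOT Solr's actual API.
// The idea: SolrClientCache-like code accepts a factory, so callers can inject
// a CloudSolrClient-like object built with Basic Auth already configured.
class ClientCacheSketch {

    /** Stand-in for CloudSolrClient; a real one would wrap an HttpClient
     *  that attaches the Authorization header to every request. */
    static class Client {
        final String zkHost;
        final String authHeader; // null when no auth is configured
        Client(String zkHost, String authHeader) {
            this.zkHost = zkHost;
            this.authHeader = authHeader;
        }
    }

    private final Map<String, Client> cache = new HashMap<>();
    private final Function<String, Client> factory;

    /** The proposed hook: the caller decides how clients are built. */
    ClientCacheSketch(Function<String, Client> factory) {
        this.factory = factory;
    }

    /** Mirrors a per-zkHost client cache: one client per ZooKeeper host. */
    Client getCloudClient(String zkHost) {
        return cache.computeIfAbsent(zkHost, factory);
    }
}
```

A caller could then do, for example, `new ClientCacheSketch(zk -> new ClientCacheSketch.Client(zk, "Basic dXNlcjpwYXNz"))` (the header value encodes "user:pass"), and every client the cache hands out carries the Basic Auth configuration.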






[jira] [Updated] (SOLR-9779) Basic auth is not supported in Streaming Expressions

2016-11-16 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-9779:
---
Issue Type: Improvement  (was: Task)







[jira] [Updated] (SOLR-9779) Basic auth is not supported in Streaming Expressions

2016-11-16 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-9779:
---
Priority: Major  (was: Critical)







[jira] [Updated] (SOLR-9779) Basic auth is not supported in Streaming Expressions

2016-11-16 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-9779:
---
Description: 
I'm creating a StreamFactory object like the following code:

{code}
new StreamFactory().withDefaultZkHost(solrConfig.getConnectString())
.withFunctionName("gatherNodes", GatherNodesStream.class);
{code}

However, once I create the StreamFactory there is no way to set the 
CloudSolrClient object, which could be used to set Basic Auth headers.

In the StreamContext object there is a way to set the SolrClientCache object, 
which keeps references to all the CloudSolrClient instances, where I could set 
an HttpClient that adds the Basic Auth header. The problem is that inside the 
SolrClientCache there is no way to supply your own CloudSolrClient with 
Basic Auth enabled.

I think we should expose a method in StreamContext where I can specify a 
basic-auth-enabled CloudSolrClient to use.

  was:
I'm creating a StreamFactory object like the following code:

new StreamFactory().withDefaultZkHost(solrConfig.getConnectString())
.withFunctionName("gatherNodes", GatherNodesStream.class);

However once I create the StreamFactory there is no way provided to set the 
CloudSolrClient object which can be used to set Basic Auth headers.

In StreamContext object there is a way to set the SolrClientCache object which 
keep reference to all the CloudSolrClient where I can set a reference to 
HttpClient which sets the Basic Auth header. However the problem is, inside the 
SolrClientCache there is no way to set your own version of CloudSolrClient with 
BasicAuth enabled. 

I think we should expose method in StreamContext where I can specify basic-auth 
enabled CloudSolrClient to use.








[jira] [Updated] (SOLR-9779) Basic auth is not supported in Streaming Expressions

2016-11-16 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-9779:
---
Fix Version/s: 6.4







[jira] [Commented] (SOLR-8213) SolrJ JDBC support basic authentication

2016-11-16 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671725#comment-15671725
 ] 

Kevin Risden commented on SOLR-8213:


Linking to SOLR-9779 since it is most likely needed for this to work correctly.

> SolrJ JDBC support basic authentication
> ---
>
> Key: SOLR-8213
> URL: https://issues.apache.org/jira/browse/SOLR-8213
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: 6.0
>Reporter: Kevin Risden
> Attachments: SOLR-8213.patch, add_401_httpstatus_code_check.patch, 
> add_basic_authentication_authorization_streaming.patch
>
>
> SolrJ JDBC doesn't support authentication, whereas Solr currently supports 
> Basic and Kerberos authentication. 






[jira] [Updated] (SOLR-9779) Basic auth in not supported in Streaming Expressions

2016-11-16 Thread Sandeep Mukherjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Mukherjee updated SOLR-9779:

Description: 
I'm creating a StreamFactory object like the following code:

new StreamFactory().withDefaultZkHost(solrConfig.getConnectString())
.withFunctionName("gatherNodes", GatherNodesStream.class);

However once I create the StreamFactory there is no way provided to set the 
CloudSolrClient object which can be used to set Basic Auth headers.

In StreamContext object there is a way to set the SolrClientCache object which 
keep reference to all the CloudSolrClient where I can set a reference to 
HttpClient which sets the Basic Auth header. However the problem is, inside the 
SolrClientCache there is no way to set your own version of CloudSolrClient with 
BasicAuth enabled. 

I think we should expose method in StreamContext where I can specify basic-auth 
enabled CloudSolrClient to use.

  was:
I'm creating a StreamFactory object like the following code:

new StreamFactory().withDefaultZkHost(solrConfig.getConnectString())
.withFunctionName("gatherNodes", GatherNodesStream.class);

However once I create the StreamFactory there is way to set the CloudSolrClient 
object which can be used to set Basic Auth headers.

In StreamContext object there is a way to set the SolrClientCache object which 
keep reference to all the CloudSolrClient where I can set a reference to 
HttpClient which sets the Basic Auth header. However the problem is, inside the 
SolrClientCache there is no way to set your own version of CloudSolrClient with 
BasicAuth enabled. 

I think we should expose method in StreamContext where I can specify basic-auth 
enabled CloudSolrClient to use.


> Basic auth in not supported in Streaming Expressions
> 
>
> Key: SOLR-9779
> URL: https://issues.apache.org/jira/browse/SOLR-9779
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java, security
>Affects Versions: 6.0
>Reporter: Sandeep Mukherjee
>Priority: Critical
>  Labels: features, security
>
> I'm creating a StreamFactory object like the following code:
> new StreamFactory().withDefaultZkHost(solrConfig.getConnectString())
> .withFunctionName("gatherNodes", GatherNodesStream.class);
> However, once I create the StreamFactory there is no way provided to set the 
> CloudSolrClient object, which could be used to set Basic Auth headers.
> The StreamContext object offers a way to set the SolrClientCache object, which 
> keeps references to all the CloudSolrClient instances, and where I could set a 
> reference to an HttpClient that sets the Basic Auth header. The problem, 
> however, is that inside the SolrClientCache there is no way to supply your own 
> CloudSolrClient with BasicAuth enabled.
> I think we should expose a method in StreamContext where I can specify a 
> basic-auth-enabled CloudSolrClient to use.






[jira] [Commented] (SOLR-9606) Remove GeoHashField from Solr

2016-11-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671715#comment-15671715
 ] 

ASF subversion and git services commented on SOLR-9606:
---

Commit e402a304bf97ead8c2a7f00a745e837fe0c6d449 in lucene-solr's branch 
refs/heads/master from [~erickerickson]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e402a30 ]

SOLR-9606: Change hard-coded keysize from 512 to 1024


> Remove GeoHashField from Solr
> -
>
> Key: SOLR-9606
> URL: https://issues.apache.org/jira/browse/SOLR-9606
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spatial
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: master (7.0)
>
>
> I'd like to remove GeoHashField from Solr -- 7.0.  I see no use-case for it 
> any more.  And it's presence is distracting from spatial fields you *should* 
> be using.






[jira] [Updated] (SOLR-9609) Change hard-coded keysize from 512 to 1024

2016-11-16 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-9609:
-
Attachment: SOLR-9609.patch

Patch I'm committing. NOTE: the larger patch from a couple of days ago has code 
for extracting this from security.json, should we want to pursue that at some point.

> Change hard-coded keysize from 512 to 1024
> --
>
> Key: SOLR-9609
> URL: https://issues.apache.org/jira/browse/SOLR-9609
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Jeremy Martini
>Assignee: Erick Erickson
> Attachments: SOLR-9609.patch, SOLR-9609.patch, SOLR-9609.patch, 
> SOLR-9609.patch, solr.log
>
>
> In order to configure our dataSource without requiring a plaintext password 
> in the configuration file, we extended JdbcDataSource to create our own 
> custom implementation. Our dataSource config now looks something like this:
> {code:xml}
> <dataSource ... url="jdbc:oracle:thin:@db-host-machine:1521:tst1" user="testuser" 
> password="{ENC}{1.1}1ePOfWcbOIU056gKiLTrLw=="/>
> {code}
> We are using the RSA JSAFE Crypto-J libraries for encrypting/decrypting the 
> password. However, this seems to cause an issue when we try use Solr in a 
> Cloud Configuration (using Zookeeper). The error is "Strong key gen and 
> multiprime gen require at least 1024-bit keysize." Full log attached.
> This seems to be due to the hard-coded value of 512 in the 
> org.apache.solr.util.CryptoKeys$RSAKeyPair class:
> {code:java}
> public RSAKeyPair() {
>   KeyPairGenerator keyGen = null;
>   try {
> keyGen = KeyPairGenerator.getInstance("RSA");
>   } catch (NoSuchAlgorithmException e) {
> throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, e);
>   }
>   keyGen.initialize(512);
> {code}
> I pulled down the Solr code, changed the hard-coded value to 1024, rebuilt 
> it, and now everything seems to work great.
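The committed fix amounts to changing the `keyGen.initialize(512)` call quoted above to `keyGen.initialize(1024)`. A self-contained sketch using only the JDK (the `KeySizeDemo` class name is illustrative, not Solr's; Solr's actual class is the quoted `CryptoKeys$RSAKeyPair`):

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.NoSuchAlgorithmException;
import java.security.interfaces.RSAPublicKey;

public class KeySizeDemo {
    // Mirrors the RSAKeyPair constructor quoted above, with the
    // keysize bumped from 512 to 1024 as in the committed fix.
    static KeyPair generate(int keySize) throws NoSuchAlgorithmException {
        KeyPairGenerator keyGen = KeyPairGenerator.getInstance("RSA");
        keyGen.initialize(keySize);
        return keyGen.generateKeyPair();
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        KeyPair pair = generate(1024);
        RSAPublicKey pub = (RSAPublicKey) pair.getPublic();
        // A 1024-bit modulus satisfies providers (such as RSA JSAFE) that
        // reject strong/multiprime key generation below 1024 bits.
        System.out.println(pub.getModulus().bitLength()); // prints 1024
    }
}
```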






[jira] [Created] (SOLR-9779) Basic auth in not supported in Streaming Expressions

2016-11-16 Thread Sandeep Mukherjee (JIRA)
Sandeep Mukherjee created SOLR-9779:
---

 Summary: Basic auth in not supported in Streaming Expressions
 Key: SOLR-9779
 URL: https://issues.apache.org/jira/browse/SOLR-9779
 Project: Solr
  Issue Type: Task
  Security Level: Public (Default Security Level. Issues are Public)
  Components: clients - java, security
Affects Versions: 6.0
Reporter: Sandeep Mukherjee
Priority: Critical


I'm creating a StreamFactory object like the following code:

new StreamFactory().withDefaultZkHost(solrConfig.getConnectString())
.withFunctionName("gatherNodes", GatherNodesStream.class);

However, once I create the StreamFactory there is no way to set the 
CloudSolrClient object, which could be used to set Basic Auth headers.

The StreamContext object offers a way to set the SolrClientCache object, which 
keeps references to all the CloudSolrClient instances, and where I could set a 
reference to an HttpClient that sets the Basic Auth header. The problem, 
however, is that inside the SolrClientCache there is no way to supply your own 
CloudSolrClient with BasicAuth enabled.

I think we should expose a method in StreamContext where I can specify a 
basic-auth-enabled CloudSolrClient to use.






[jira] [Updated] (SOLR-6246) Core fails to reload when AnalyzingInfixSuggester is used as a Suggester

2016-11-16 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-6246:
-
Attachment: SOLR-6246-test.patch

Modernized version of [~varunthacker]'s test patch.

This test now succeeds when I run it on master patched with LUCENE-7564.

There is no testing of reloading while build is underway though.

> Core fails to reload when AnalyzingInfixSuggester is used as a Suggester
> 
>
> Key: SOLR-6246
> URL: https://issues.apache.org/jira/browse/SOLR-6246
> Project: Solr
>  Issue Type: Sub-task
>  Components: SearchComponents - other
>Affects Versions: 4.8, 4.8.1, 4.9, 5.0, 5.1, 5.2, 5.3, 5.4
>Reporter: Varun Thacker
> Attachments: SOLR-6246-test.patch, SOLR-6246-test.patch, 
> SOLR-6246-test.patch, SOLR-6246.patch
>
>
> LUCENE-5477 - added near-real-time suggest building to 
> AnalyzingInfixSuggester. One of the changes that went in was a writer is 
> persisted now to support real time updates via the add() and update() methods.
> When we call Solr's reload command, a new instance of AnalyzingInfixSuggester 
> is created. When trying to create a new writer on the same Directory a lock 
> cannot be obtained and Solr fails to reload the core.
> Also when AnalyzingInfixLookupFactory throws a RuntimeException we should 
> pass along the original message.
> I am not sure what should be the approach to fix it. Should we have a 
> reloadHook where we close the writer?






[jira] [Comment Edited] (SOLR-6246) Core fails to reload when AnalyzingInfixSuggester is used as a Suggester

2016-11-16 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671575#comment-15671575
 ] 

Steve Rowe edited comment on SOLR-6246 at 11/16/16 8:46 PM:


Modernized version of [~varunthacker]'s test patch.

This test now succeeds when I run it on master patched with LUCENE-7564.

There is no testing of reloading while build is underway though.


was (Author: steve_rowe):
Modernized version of [~varunthacker]'s test patch.

This test now succeeds when I run it on master patched with LUCENE-7546.

There is no testing of reloading while build is underway though.

> Core fails to reload when AnalyzingInfixSuggester is used as a Suggester
> 
>
> Key: SOLR-6246
> URL: https://issues.apache.org/jira/browse/SOLR-6246
> Project: Solr
>  Issue Type: Sub-task
>  Components: SearchComponents - other
>Affects Versions: 4.8, 4.8.1, 4.9, 5.0, 5.1, 5.2, 5.3, 5.4
>Reporter: Varun Thacker
> Attachments: SOLR-6246-test.patch, SOLR-6246-test.patch, 
> SOLR-6246-test.patch, SOLR-6246.patch
>
>
> LUCENE-5477 - added near-real-time suggest building to 
> AnalyzingInfixSuggester. One of the changes that went in was a writer is 
> persisted now to support real time updates via the add() and update() methods.
> When we call Solr's reload command, a new instance of AnalyzingInfixSuggester 
> is created. When trying to create a new writer on the same Directory a lock 
> cannot be obtained and Solr fails to reload the core.
> Also when AnalyzingInfixLookupFactory throws a RuntimeException we should 
> pass along the original message.
> I am not sure what should be the approach to fix it. Should we have a 
> reloadHook where we close the writer?






[jira] [Comment Edited] (LUCENE-7466) add axiomatic similarity

2016-11-16 Thread Tommaso Teofili (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671519#comment-15671519
 ] 

Tommaso Teofili edited comment on LUCENE-7466 at 11/16/16 8:38 PM:
---

when running 'ant clean test' under lucene/core the only error I see is in 
{{TestAxiomaticSimilarity#testIllegalQL}} which fails because the test has a 
wrong String in the _expected.getMessage().contains("...")_ check (note also 
that _testSaneNormValues_ uses {{BM25Similarity}}, I have locally changed it to 
{{AxiomaticF2EXP}}).
Other than that it seems the {{TestAxiomaticSimilarity}} actually tests only 
{{AxiomaticF2EXP}}, shouldn't it also test the other {{Axiomatic}} extensions? 

You can check the different test options on the wiki [Running 
Tests|https://wiki.apache.org/lucene-java/RunningTests]


was (Author: teofili):
when running 'ant clean test' under lucene/core the only error I see is in 
{{TestAxiomaticSimilarity#testIllegalQL}} which fails because the test has a 
wrong String in the _ expected.getMessage().contains("...")_ check (note also 
that _testSaneNormValues_ uses {{BM25Similarity}}, I have locally changed it to 
{{AxiomaticF2EXP}}).
Other than that it seems the {{TestAxiomaticSimilarity}} actually tests only 
{{AxiomaticF2EXP}}, shouldn't it also test the other {{Axiomatic}} extensions? 

You can check the different test options on the wiki [Running 
Tests|https://wiki.apache.org/lucene-java/RunningTests]

> add axiomatic similarity 
> -
>
> Key: LUCENE-7466
> URL: https://issues.apache.org/jira/browse/LUCENE-7466
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: master (7.0)
>Reporter: Peilin Yang
>Assignee: Tommaso Teofili
>  Labels: patch
>
> Add axiomatic similarity approaches to the similarity family.
> More details can be found at http://dl.acm.org/citation.cfm?id=1076116 and 
> https://www.eecis.udel.edu/~hfang/pubs/sigir05-axiom.pdf
> There are in total six similarity models. All of them are based on BM25, 
> Pivoted Document Length Normalization or Language Model with Dirichlet prior. 
> We think it is worthy to add the models as part of Lucene.






[jira] [Comment Edited] (LUCENE-7466) add axiomatic similarity

2016-11-16 Thread Tommaso Teofili (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671519#comment-15671519
 ] 

Tommaso Teofili edited comment on LUCENE-7466 at 11/16/16 8:38 PM:
---

when running 'ant clean test' under lucene/core the only error I see is in 
{{TestAxiomaticSimilarity#testIllegalQL}} which fails because the test has a 
wrong String in the _ expected.getMessage().contains("...")_ check (note also 
that _testSaneNormValues_ uses {{BM25Similarity}}, I have locally changed it to 
{{AxiomaticF2EXP}}).
Other than that it seems the {{TestAxiomaticSimilarity}} actually tests only 
{{AxiomaticF2EXP}}, shouldn't it also test the other {{Axiomatic}} extensions? 

You can check the different test options on the wiki [Running 
Tests|https://wiki.apache.org/lucene-java/RunningTests]


was (Author: teofili):
when running 'ant clean test' under lucene/core the only error I see is in 
{{TestAxiomaticSimilarity#testIllegalQL}} (note that _testSaneNormValues_ uses 
{{BM25Similarity}}, I have locally changed it to {{AxiomaticF2EXP}}).
Other than that it seems the {{TestAxiomaticSimilarity}} actually tests only 
{{AxiomaticF2EXP}}, shouldn't it also test the other {{Axiomatic}} extensions? 

You can check the different test options on the wiki [Running 
Tests|https://wiki.apache.org/lucene-java/RunningTests]

> add axiomatic similarity 
> -
>
> Key: LUCENE-7466
> URL: https://issues.apache.org/jira/browse/LUCENE-7466
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: master (7.0)
>Reporter: Peilin Yang
>Assignee: Tommaso Teofili
>  Labels: patch
>
> Add axiomatic similarity approaches to the similarity family.
> More details can be found at http://dl.acm.org/citation.cfm?id=1076116 and 
> https://www.eecis.udel.edu/~hfang/pubs/sigir05-axiom.pdf
> There are in total six similarity models. All of them are based on BM25, 
> Pivoted Document Length Normalization or Language Model with Dirichlet prior. 
> We think it is worthy to add the models as part of Lucene.






[jira] [Comment Edited] (LUCENE-7466) add axiomatic similarity

2016-11-16 Thread Tommaso Teofili (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671519#comment-15671519
 ] 

Tommaso Teofili edited comment on LUCENE-7466 at 11/16/16 8:29 PM:
---

when running 'ant clean test' under lucene/core the only error I see is in 
{{TestAxiomaticSimilarity#testIllegalQL}} (note that _testSaneNormValues_ uses 
{{BM25Similarity}}, I have locally changed it to {{AxiomaticF2EXP}}).
Other than that it seems the {{TestAxiomaticSimilarity}} actually tests only 
{{AxiomaticF2EXP}}, shouldn't it also test the other {{Axiomatic}} extensions? 

You can check the different test options on the wiki [Running 
Tests|https://wiki.apache.org/lucene-java/RunningTests]


was (Author: teofili):
when running 'ant clean test' under lucene/core the only error I see is in 
{{TestAxiomaticSimilarity#testIllegalQL}} (note that _testSaneNormValues_ uses 
{{BM25Similarity}}, I have locally changed it to {{AxiomaticF2EXP}}).
Other than that it seems the {{TestAxiomaticSimilarity}} actually tests only 
{{AxiomaticF2EXP}}, shouldn't it also test the other {{Axiomatic}} extensions? 

> add axiomatic similarity 
> -
>
> Key: LUCENE-7466
> URL: https://issues.apache.org/jira/browse/LUCENE-7466
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: master (7.0)
>Reporter: Peilin Yang
>Assignee: Tommaso Teofili
>  Labels: patch
>
> Add axiomatic similarity approaches to the similarity family.
> More details can be found at http://dl.acm.org/citation.cfm?id=1076116 and 
> https://www.eecis.udel.edu/~hfang/pubs/sigir05-axiom.pdf
> There are in total six similarity models. All of them are based on BM25, 
> Pivoted Document Length Normalization or Language Model with Dirichlet prior. 
> We think it is worthy to add the models as part of Lucene.






[jira] [Commented] (LUCENE-7466) add axiomatic similarity

2016-11-16 Thread Tommaso Teofili (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671519#comment-15671519
 ] 

Tommaso Teofili commented on LUCENE-7466:
-

when running 'ant clean test' under lucene/core the only error I see is in 
{{TestAxiomaticSimilarity#testIllegalQL}} (note that _testSaneNormValues_ uses 
{{BM25Similarity}}, I have locally changed it to {{AxiomaticF2EXP}}).
Other than that it seems the {{TestAxiomaticSimilarity}} actually tests only 
{{AxiomaticF2EXP}}, shouldn't it also test the other {{Axiomatic}} extensions? 

> add axiomatic similarity 
> -
>
> Key: LUCENE-7466
> URL: https://issues.apache.org/jira/browse/LUCENE-7466
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: master (7.0)
>Reporter: Peilin Yang
>Assignee: Tommaso Teofili
>  Labels: patch
>
> Add axiomatic similarity approaches to the similarity family.
> More details can be found at http://dl.acm.org/citation.cfm?id=1076116 and 
> https://www.eecis.udel.edu/~hfang/pubs/sigir05-axiom.pdf
> There are in total six similarity models. All of them are based on BM25, 
> Pivoted Document Length Normalization or Language Model with Dirichlet prior. 
> We think it is worthy to add the models as part of Lucene.






[jira] [Commented] (SOLR-6246) Core fails to reload when AnalyzingInfixSuggester is used as a Suggester

2016-11-16 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671489#comment-15671489
 ] 

Steve Rowe commented on SOLR-6246:
--

bq. One suggestion that might minimize the impact: close the writer after build.

I've opened LUCENE-7564 to do this.

> Core fails to reload when AnalyzingInfixSuggester is used as a Suggester
> 
>
> Key: SOLR-6246
> URL: https://issues.apache.org/jira/browse/SOLR-6246
> Project: Solr
>  Issue Type: Sub-task
>  Components: SearchComponents - other
>Affects Versions: 4.8, 4.8.1, 4.9, 5.0, 5.1, 5.2, 5.3, 5.4
>Reporter: Varun Thacker
> Attachments: SOLR-6246-test.patch, SOLR-6246-test.patch, 
> SOLR-6246.patch
>
>
> LUCENE-5477 - added near-real-time suggest building to 
> AnalyzingInfixSuggester. One of the changes that went in was a writer is 
> persisted now to support real time updates via the add() and update() methods.
> When we call Solr's reload command, a new instance of AnalyzingInfixSuggester 
> is created. When trying to create a new writer on the same Directory a lock 
> cannot be obtained and Solr fails to reload the core.
> Also when AnalyzingInfixLookupFactory throws a RuntimeException we should 
> pass along the original message.
> I am not sure what should be the approach to fix it. Should we have a 
> reloadHook where we close the writer?






[jira] [Commented] (SOLR-9609) Change hard-coded keysize from 512 to 1024

2016-11-16 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671460#comment-15671460
 ] 

Erick Erickson commented on SOLR-9609:
--

OK, I'm going to just hard-code it to 1024 and be done with it. This is taking 
far too long for something that only one person has found so far. 

I'll leave a comment in the code that if we have to revisit it we should see 
the discussion here.


> Change hard-coded keysize from 512 to 1024
> --
>
> Key: SOLR-9609
> URL: https://issues.apache.org/jira/browse/SOLR-9609
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Jeremy Martini
>Assignee: Erick Erickson
> Attachments: SOLR-9609.patch, SOLR-9609.patch, SOLR-9609.patch, 
> solr.log
>
>
> In order to configure our dataSource without requiring a plaintext password 
> in the configuration file, we extended JdbcDataSource to create our own 
> custom implementation. Our dataSource config now looks something like this:
> {code:xml}
> <dataSource ... url="jdbc:oracle:thin:@db-host-machine:1521:tst1" user="testuser" 
> password="{ENC}{1.1}1ePOfWcbOIU056gKiLTrLw=="/>
> {code}
> We are using the RSA JSAFE Crypto-J libraries for encrypting/decrypting the 
> password. However, this seems to cause an issue when we try use Solr in a 
> Cloud Configuration (using Zookeeper). The error is "Strong key gen and 
> multiprime gen require at least 1024-bit keysize." Full log attached.
> This seems to be due to the hard-coded value of 512 in the 
> org.apache.solr.util.CryptoKeys$RSAKeyPair class:
> {code:java}
> public RSAKeyPair() {
>   KeyPairGenerator keyGen = null;
>   try {
> keyGen = KeyPairGenerator.getInstance("RSA");
>   } catch (NoSuchAlgorithmException e) {
> throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, e);
>   }
>   keyGen.initialize(512);
> {code}
> I pulled down the Solr code, changed the hard-coded value to 1024, rebuilt 
> it, and now everything seems to work great.






[jira] [Updated] (LUCENE-7564) AnalyzingInfixSuggester should close its IndexWriter by default at the end of build()

2016-11-16 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-7564:
---
Attachment: LUCENE-7564.patch

Patch with a test improvement: testing that SearcherManager's IndexWriter 
reference is closed after build when closeIndexWriterOnBuild=true.

I think this is ready.

[~mikemccand], if you have time, I'd appreciate a review.

> AnalyzingInfixSuggester should close its IndexWriter by default at the end of 
> build()
> -
>
> Key: LUCENE-7564
> URL: https://issues.apache.org/jira/browse/LUCENE-7564
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Steve Rowe
> Attachments: LUCENE-7564.patch, LUCENE-7564.patch
>
>
> From SOLR-6246, where AnalyzingInfixSuggester's write lock on its index is 
> causing trouble when reloading a Solr core:
> [~gsingers] wrote:
> bq. One suggestion that might minimize the impact: close the writer after 
> build
> [~varunthacker] wrote:
> {quote}
> This is what I am thinking -
> Create a Lucene issue in which {{AnalyzingInfixSuggester#build}} closes the 
> writer by default at the end.
> The {{add}} and {{update}} methods call {{ensureOpen}} and those who do 
> frequent real time updates directly via lucene won't see any slowdowns.
> [~mikemccand] - Would this approach have any major drawback from Lucene's 
> perspective? Else I can go ahead an tackle this in a Lucene issue
> {quote}
> [~mikemccand] wrote:
> bq. Fixing {{AnalyzingInfixSuggester}} to close the writer at the end of 
> build seems reasonable?






[jira] [Commented] (SOLR-9774) Delta indexing with child documents with help of cacheImpl="SortedMapBackedCache"

2016-11-16 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671391#comment-15671391
 ] 

Erick Erickson commented on SOLR-9774:
--

This _looks_ like a usage question at first glance; it would be best if you 
asked about it on the Solr user's list, which has a much wider circulation, so 
you'd likely get answers much more quickly.

Please see "Mailing Lists" here: http://lucene.apache.org/solr/resources.html

> Delta indexing with child documents with help of 
> cacheImpl="SortedMapBackedCache"
> -
>
> Key: SOLR-9774
> URL: https://issues.apache.org/jira/browse/SOLR-9774
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - DataImportHandler, Data-driven Schema
>Affects Versions: 6.1
>Reporter: Aniket Khare
>  Labels: DIH, solr
>
> Hi,
> I am using the Solr DIH for indexing Parent-Child relation data, with 
> cacheImpl="SortedMapBackedCache".
> For full data indexing I am using the full-import command with clean="true", 
> and for delta indexing I am using full-import with clean="false".
> So the same queries are executed for both full and delta imports, and indexing 
> is working properly.
> The issue we are facing is where, for a particular parent document, there is 
> not a single child document, and we are adding a new child document.
> The following are the steps to reproduce the issue.
> 1. Add a child document to an existing parent document which has no child 
> documents.
> 2. Once the child document is added with delta indexing, try to modify the 
> parent document and run delta indexing again.
> 3. After the delta indexing is completed, I can see the modified child 
> documents showing in the Solr DIH page in debug mode, but they are not getting 
> updated in the Solr collection.
> I am using a data config as below.
> <document>
>   <entity name="Parent" ...>
>     <entity ... cacheKey="id" cacheLookup="Parent.id" 
>         processor="SqlEntityProcessor" cacheImpl="SortedMapBackedCache">
>     </entity>
>     <entity ... cacheKey="PID" cacheLookup="Parent.id" 
>         processor="SqlEntityProcessor" cacheImpl="SortedMapBackedCache" 
>         child="true">
>     </entity>
>   </entity>
> </document>






[jira] [Commented] (LUCENE-7398) Nested Span Queries are buggy

2016-11-16 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671305#comment-15671305
 ] 

Paul Elschot commented on LUCENE-7398:
--

 bq. is this latest patch ready to be committed, or are there still known 
problems?

Both, actually, assuming that master has not had a conflicting update since.
To completely solve this, backtracking is needed, and the patch does not provide 
that.

To allow collecting/payloads easily, I'd rather accept the limitations/bugs of 
the current lazy implementation.
As a minimum a reference to this issue could be added to the javadocs of the 
(un)ordered near spans.

AFAIK:
- a complete solution that can be made with lazy iteration is a span near query 
that has two subqueries
and that only checks the span starting positions,
- for subqueries that are terms or that do not vary in length, completeness for 
two subqueries is already there.

In case there is interest in span near queries that only use starting 
positions, well, that should be easy.




> Nested Span Queries are buggy
> -
>
> Key: LUCENE-7398
> URL: https://issues.apache.org/jira/browse/LUCENE-7398
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 5.5, 6.x
>Reporter: Christoph Goller
>Assignee: Alan Woodward
>Priority: Critical
> Attachments: LUCENE-7398-20160814.patch, LUCENE-7398-20160924.patch, 
> LUCENE-7398-20160925.patch, LUCENE-7398.patch, LUCENE-7398.patch, 
> LUCENE-7398.patch, TestSpanCollection.java
>
>
> Example for a nested SpanQuery that is not working:
> Document: Human Genome Organization , HUGO , is trying to coordinate gene 
> mapping research worldwide.
> Query: spanNear([body:coordinate, spanOr([spanNear([body:gene, body:mapping], 
> 0, true), body:gene]), body:research], 0, true)
> The query should match "coordinate gene mapping research" as well as 
> "coordinate gene research". It does not match "coordinate gene mapping 
> research" with Lucene 5.5 or 6.1; it did, however, match with Lucene 4.10.4. 
> It probably stopped working with the changes to SpanQueries in 5.3. I will 
> attach a unit test that shows the problem.
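The spanOr above has two alternatives of different lengths ("gene" vs. "gene mapping"), which is exactly where a lazy, non-backtracking iterator can fail: committing to the shorter alternative leaves a position gap before "research", and only the longer alternative completes the ordered match. A plain-Java position check (illustrative only, not Lucene's span implementation) makes the arithmetic concrete:

```java
// Illustration only: token positions in the example document, restricted to
// the relevant tokens: 7:coordinate 8:gene 9:mapping 10:research.
public class OrderedSpanCheck {
    // Ordered near with slop 0: each clause must start where the previous ends.
    static boolean matches(int[][] spans) { // each span is {startPos, endPos}
        for (int i = 1; i < spans.length; i++) {
            if (spans[i][0] != spans[i - 1][1]) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // coordinate / "gene mapping" / research -> contiguous, should match
        int[][] longAlt  = { {7, 8}, {8, 10}, {10, 11} };
        // coordinate / "gene" / research -> gap at position 9, no match
        int[][] shortAlt = { {7, 8}, {8, 9}, {10, 11} };
        System.out.println(matches(longAlt));  // true
        System.out.println(matches(shortAlt)); // false
    }
}
```

A matcher that only ever tries the shortAlt alternative and never revisits longAlt reports no match, consistent with the behavior reported for 5.5/6.1.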



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+140) - Build # 2196 - Still Unstable!

2016-11-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2196/
Java: 32bit/jdk-9-ea+140 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.schema.PreAnalyzedFieldManagedSchemaCloudTest.testAdd2Fields

Error Message:
No live SolrServers available to handle this 
request:[https://127.0.0.1:45655/solr/managed-preanalyzed, 
https://127.0.0.1:33013/solr/managed-preanalyzed]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[https://127.0.0.1:45655/solr/managed-preanalyzed, 
https://127.0.0.1:33013/solr/managed-preanalyzed]
at 
__randomizedtesting.SeedInfo.seed([7C2933234486A405:D43C81C8988037F3]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:414)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1292)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1062)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1004)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at 
org.apache.solr.schema.PreAnalyzedFieldManagedSchemaCloudTest.addField(PreAnalyzedFieldManagedSchemaCloudTest.java:61)
at 
org.apache.solr.schema.PreAnalyzedFieldManagedSchemaCloudTest.testAdd2Fields(PreAnalyzedFieldManagedSchemaCloudTest.java:52)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:535)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-9778) Make luceneMatchVersion handling easy/automatic

2016-11-16 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671217#comment-15671217
 ] 

David Smiley commented on SOLR-9778:


bq. Maybe put a new file into conf?

If it were initialized at the time a configSet is first used, then I think that 
might be okay.  It could interfere with configSet sharing and read-only 
configSets though.  My main concern with conf for this purpose is semantics; I 
don't view this as configuration; it's metadata for the index.  You shouldn't 
go in and change this file; you would instead edit the config to override it.  
But few would have a desire to do that, I think.

> Make luceneMatchVersion handling easy/automatic
> ---
>
> Key: SOLR-9778
> URL: https://issues.apache.org/jira/browse/SOLR-9778
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>
> I was thinking about luceneMatchVersion and how it's annoying to explain, 
> get right, and maintain.  I think there's a way in Solr we can do this much 
> better:
> When an index is initialized, record the luceneMatchVersion in effect into a 
> file in the data directory, like luceneMatchVersion.txt.  It's a file that 
> will never be modified.
> The luceneMatchVersion in effect is the first of these that are specified:
> * {{<luceneMatchVersion>}} in solrconfig.xml
> * data/luceneMatchVersion.txt 
> * {{org.apache.lucene.util.Version.LATEST}}
> With this approach, we can eliminate putting {{<luceneMatchVersion>}} into 
> solrconfig.xml by default.  Most users will have no need to bother setting 
> it, even during an upgrade of either an existing index, or when they 
> re-index.  Of course there are cases where the user knows what they are doing 
> and insists on a different luceneMatchVersion, and they can specify that 
> still.
> Perhaps instead of a new file (data/luceneMatchVersion.txt), it might go into 
> core.properties.  I dunno.
> _(disclaimer: as I write this, I have no plans to work on this at the moment)_
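The proposed lookup order can be sketched in a few lines of plain Java. Everything here is hypothetical (the class name and file name come from the description above, not from any actual Solr code), and {{"7.0.0"}} stands in for {{org.apache.lucene.util.Version.LATEST}}:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Optional;

// Hypothetical sketch of the precedence described above; not Solr code.
public class MatchVersionResolver {
    static final String LATEST = "7.0.0"; // stand-in for Version.LATEST

    // First of: explicit solrconfig.xml value, recorded data-dir file, LATEST.
    static String resolve(Optional<String> fromSolrConfig, Path dataDir) {
        if (fromSolrConfig.isPresent()) {
            return fromSolrConfig.get();
        }
        Path recorded = dataDir.resolve("luceneMatchVersion.txt");
        if (Files.exists(recorded)) {
            try {
                return Files.readString(recorded).trim();
            } catch (IOException e) {
                // unreadable file: fall through to the default
            }
        }
        return LATEST;
    }
}
```

The point of the ordering is that an explicit config value always wins, the recorded file preserves the version in effect when the index was first created, and only brand-new indexes pick up LATEST.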






[jira] [Updated] (LUCENE-7564) AnalyzingInfixSuggester should close its IndexWriter by default at the end of build()

2016-11-16 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-7564:
---
Attachment: LUCENE-7564.patch

Patch: adds a new constructor taking a boolean param {{closeIndexWriterOnBuild}} 
(which all other constructors default to {{true}}), and adds tests.
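In plain Java, the overload pattern being described looks roughly like this (a simplified mimic, not the actual AnalyzingInfixSuggester code; see the attached patch for the real signatures):

```java
// Simplified mimic of the constructor-overload pattern from the patch.
public class SuggesterSketch {
    private final boolean closeIndexWriterOnBuild;

    // Pre-existing constructors delegate with the new flag defaulted to true,
    // preserving current behavior for existing callers.
    public SuggesterSketch() {
        this(true);
    }

    // New constructor: callers doing frequent real-time add()/update() calls
    // can pass false to keep the writer open after build().
    public SuggesterSketch(boolean closeIndexWriterOnBuild) {
        this.closeIndexWriterOnBuild = closeIndexWriterOnBuild;
    }

    public boolean closesWriterOnBuild() {
        return closeIndexWriterOnBuild;
    }
}
```

Defaulting the flag to true in the delegating constructors is what releases the write lock at the end of build() without breaking existing code.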


> AnalyzingInfixSuggester should close its IndexWriter by default at the end of 
> build()
> -
>
> Key: LUCENE-7564
> URL: https://issues.apache.org/jira/browse/LUCENE-7564
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Steve Rowe
> Attachments: LUCENE-7564.patch
>
>
> From SOLR-6246, where AnalyzingInfixSuggester's write lock on its index is 
> causing trouble when reloading a Solr core:
> [~gsingers] wrote:
> bq. One suggestion that might minimize the impact: close the writer after 
> build
> [~varunthacker] wrote:
> {quote}
> This is what I am thinking -
> Create a Lucene issue in which {{AnalyzingInfixSuggester#build}} closes the 
> writer by default at the end.
> The {{add}} and {{update}} methods call {{ensureOpen}} and those who do 
> frequent real time updates directly via lucene won't see any slowdowns.
> [~mikemccand] - Would this approach have any major drawback from Lucene's 
> perspective? Else I can go ahead and tackle this in a Lucene issue
> {quote}
> [~mikemccand] wrote:
> bq. Fixing {{AnalyzingInfixSuggester}} to close the writer at the end of 
> build seems reasonable?






[jira] [Created] (LUCENE-7564) AnalyzingInfixSuggester should close its IndexWriter by default at the end of build()

2016-11-16 Thread Steve Rowe (JIRA)
Steve Rowe created LUCENE-7564:
--

 Summary: AnalyzingInfixSuggester should close its IndexWriter by 
default at the end of build()
 Key: LUCENE-7564
 URL: https://issues.apache.org/jira/browse/LUCENE-7564
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Steve Rowe


From SOLR-6246, where AnalyzingInfixSuggester's write lock on its index is 
causing trouble when reloading a Solr core:

[~gsingers] wrote:
bq. One suggestion that might minimize the impact: close the writer after build

[~varunthacker] wrote:
{quote}
This is what I am thinking -

Create a Lucene issue in which {{AnalyzingInfixSuggester#build}} closes the 
writer by default at the end.
The {{add}} and {{update}} methods call {{ensureOpen}} and those who do 
frequent real time updates directly via lucene won't see any slowdowns.

[~mikemccand] - Would this approach have any major drawback from Lucene's 
perspective? Else I can go ahead and tackle this in a Lucene issue
{quote}

[~mikemccand] wrote:

bq. Fixing {{AnalyzingInfixSuggester}} to close the writer at the end of build 
seems reasonable?






[jira] [Commented] (SOLR-9778) Make luceneMatchVersion handling easy/automatic

2016-11-16 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671166#comment-15671166
 ] 

Erick Erickson commented on SOLR-9778:
--

Maybe put a new file into conf? If put in core.properties it would be easy to 
miss one.

No strong feelings here though, just FWIW.

> Make luceneMatchVersion handling easy/automatic
> ---
>
> Key: SOLR-9778
> URL: https://issues.apache.org/jira/browse/SOLR-9778
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>
> I was thinking about luceneMatchVersion and how it's annoying to explain, 
> get right, and maintain.  I think there's a way in Solr we can do this much 
> better:
> When an index is initialized, record the luceneMatchVersion in effect into a 
> file in the data directory, like luceneMatchVersion.txt.  It's a file that 
> will never be modified.
> The luceneMatchVersion in effect is the first of these that are specified:
> * {{<luceneMatchVersion>}} in solrconfig.xml
> * data/luceneMatchVersion.txt 
> * {{org.apache.lucene.util.Version.LATEST}}
> With this approach, we can eliminate putting {{<luceneMatchVersion>}} into 
> solrconfig.xml by default.  Most users will have no need to bother setting 
> it, even during an upgrade of either an existing index, or when they 
> re-index.  Of course there are cases where the user knows what they are doing 
> and insists on a different luceneMatchVersion, and they can specify that 
> still.
> Perhaps instead of a new file (data/luceneMatchVersion.txt), it might go into 
> core.properties.  I dunno.
> _(disclaimer: as I write this, I have no plans to work on this at the moment)_






[jira] [Resolved] (LUCENE-7561) Add back-compat indices for index sorting

2016-11-16 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-7561.

   Resolution: Fixed
Fix Version/s: 6.4
   master (7.0)

I added sorted indices to the BWC test here: 
https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;a=commit;h=774e31b6dd7184fb6d43fca83e32fcb46da32e20

> Add back-compat indices for index sorting
> -
>
> Key: LUCENE-7561
> URL: https://issues.apache.org/jira/browse/LUCENE-7561
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.4
>
>
> Index time sorting is a powerful feature making searches that are "congruent" 
> with the sort much faster, in exchange for slower indexing.  For some use 
> cases this is a great tradeoff.
> We recently promoted the feature to core (LUCENE-6766) and made it very 
> simple to use (you just call {{IndexWriterConfig.setIndexSort}}).  In 
> LUCENE-7537 we are adding support for multi-valued fields.
> I think it's important we properly test backwards compatibility and from a 
> quick look it looks like we have no sorted indices in 
> {{TestBackwardsCompatibility}} ...






[jira] [Updated] (LUCENE-7540) Upgrade ICU to 58.1

2016-11-16 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-7540:
---
Attachment: LUCENE-7540.patch

I attempted to upgrade to ICU 58.1 (see attached patch), and ran {{ant 
regenerate}}, but our evil {{checkRandomData}} test is tripping assertions in 
ICU's {{RuleBasedBreakIterator.java}}:

{noformat}
   [junit4]   2> ??? 16, 2016 6:56:39 ? 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
   [junit4]   2> WARNING: Uncaught exception in thread: 
Thread[Thread-3,5,TGRP-TestICUTokenizer]
   [junit4]   2> java.lang.AssertionError
   [junit4]   2>at 
__randomizedtesting.SeedInfo.seed([34D64859D1A7CD98]:0)
   [junit4]   2>at 
com.ibm.icu.text.RuleBasedBreakIterator.checkDictionary(RuleBasedBreakIterator.java:544)
   [junit4]   2>at 
com.ibm.icu.text.RuleBasedBreakIterator.next(RuleBasedBreakIterator.java:428)
   [junit4]   2>at 
org.apache.lucene.analysis.icu.segmentation.BreakIteratorWrapper$RBBIWrapper.next(BreakIteratorWrapper.java:96)
   [junit4]   2>at 
org.apache.lucene.analysis.icu.segmentation.CompositeBreakIterator.next(CompositeBreakIterator.java:65)
   [junit4]   2>at 
org.apache.lucene.analysis.icu.segmentation.ICUTokenizer.incrementTokenBuffer(ICUTokenizer.java:210)
   [junit4]   2>at 
org.apache.lucene.analysis.icu.segmentation.ICUTokenizer.incrementToken(ICUTokenizer.java:104)
   [junit4]   2>at 
org.apache.lucene.analysis.icu.ICUNormalizer2Filter.incrementToken(ICUNormalizer2Filter.java:80)
   [junit4]   2>at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:183)
   [junit4]   2>at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:301)
   [junit4]   2>at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:305)
   [junit4]   2>at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:829)
   [junit4]   2>at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:628)
   [junit4]   2>at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.access$000(BaseTokenStreamTestCase.java:61)
   [junit4]   2>at 
org.apache.lucene.analysis.BaseTokenStreamTestCase$AnalysisThread.run(BaseTokenStreamTestCase.java:496)
   [junit4]   2> 
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestICUTokenizer 
-Dtests.method=testRandomHugeStrings -Dtests.seed=34D64859D1A7CD98 
-Dtests.locale=ar-QA -Dtests.timezone=Africa/Bujumbura -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1
{noformat}

I had previously built icu4c 58.1 from sources and installed it on my dev box 
so that its generation tools (e.g. {{gennorm2}}) are available ... so maybe I 
messed something up in that process, or maybe this is an ICU bug?

> Upgrade ICU to 58.1
> ---
>
> Key: LUCENE-7540
> URL: https://issues.apache.org/jira/browse/LUCENE-7540
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7540.patch
>
>
> ICU is up to 58.1, but our ICU analysis components currently use 56.1, which 
> is ~1 year old by now.






[jira] [Commented] (LUCENE-7466) add axiomatic similarity

2016-11-16 Thread Peilin Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15670915#comment-15670915
 ] 

Peilin Yang commented on LUCENE-7466:
-

Hi [~teofili], I just added the test cases.
But when I run `ant test` it fails for some other tests.
Do you know an easier way to run just the test cases I added?

> add axiomatic similarity 
> -
>
> Key: LUCENE-7466
> URL: https://issues.apache.org/jira/browse/LUCENE-7466
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: master (7.0)
>Reporter: Peilin Yang
>Assignee: Tommaso Teofili
>  Labels: patch
>
> Add axiomatic similarity approaches to the similarity family.
> More details can be found at http://dl.acm.org/citation.cfm?id=1076116 and 
> https://www.eecis.udel.edu/~hfang/pubs/sigir05-axiom.pdf
> There are in total six similarity models. All of them are based on BM25, 
> Pivoted Document Length Normalization or Language Model with Dirichlet prior. 
> We think it is worthy to add the models as part of Lucene.






[jira] [Commented] (SOLR-8213) SolrJ JDBC support basic authentication

2016-11-16 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15670864#comment-15670864
 ] 

Kevin Risden commented on SOLR-8213:


Potentially related email thread: 
http://search-lucene.com/m/Solr/eHNl3vQZB16s7az1

> SolrJ JDBC support basic authentication
> ---
>
> Key: SOLR-8213
> URL: https://issues.apache.org/jira/browse/SOLR-8213
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: 6.0
>Reporter: Kevin Risden
> Attachments: SOLR-8213.patch, add_401_httpstatus_code_check.patch, 
> add_basic_authentication_authorization_streaming.patch
>
>
> SolrJ JDBC doesn't support authentication, whereas Solr currently supports 
> Basic and Kerberos authentication.






[jira] [Commented] (LUCENE-7543) Make changes-to-html target an offline operation

2016-11-16 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15670851#comment-15670851
 ] 

Steve Rowe commented on LUCENE-7543:


bq. or just dev-tools/doap/lucene.rdf and dev-tools/doap/solr.rdf (with a 
README.txt in the same dir explaining to future devs why that dir is there)

+1

> Make changes-to-html target an offline operation
> 
>
> Key: LUCENE-7543
> URL: https://issues.apache.org/jira/browse/LUCENE-7543
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Steve Rowe
>
> Currently changes-to-html pulls release dates from JIRA, and so fails when 
> JIRA is inaccessible (e.g. from behind a firewall).
> SOLR-9711 advocates adding a build sysprop to ignore JIRA connection 
> failures, but I'd rather make the operation always offline.
> In an offline discussion, [~hossman] advocated moving Lucene's and Solr's 
> {{doap.rdf}} files, which contain all of the release dates that the 
> changes-to-html now pulls from JIRA, from the CMS Subversion repository 
> (downloadable from the website at http://lucene.apache.org/core/doap.rdf and 
> http://lucene.apache.org/solr/doap.rdf) to the Lucene/Solr git repository. If 
> we did that, then the process could be entirely offline if release dates were 
> taken from the local {{doap.rdf}} files instead of downloaded from JIRA.






[jira] [Comment Edited] (SOLR-8994) EmbeddedSolrServer does not provide the httpMethod to the handler

2016-11-16 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-8994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15670848#comment-15670848
 ] 

Cédric Damioli edited comment on SOLR-8994 at 11/16/16 4:10 PM:


This patch is very useful for our unit tests. Could someone review and apply it?
Thanks in advance.


was (Author: cedric):
This patch is very useful for our unit test. Could someone review and apply it ?
Thanks in advance.

> EmbeddedSolrServer does not provide the httpMethod to the handler
> -
>
> Key: SOLR-8994
> URL: https://issues.apache.org/jira/browse/SOLR-8994
> Project: Solr
>  Issue Type: Bug
>Reporter: Nicolas Gavalda
>  Labels: embedded
> Attachments: SOLR-8994-EmbeddedSolrServer-httpMethod.patch
>
>
> The modification URIs of the schema API don't work when using an 
> EmbeddedSolrServer: the SchemaHandler verifies that modification requests are 
> POST, and the EmbeddedSolrServer doesn't transmit this information.






[jira] [Commented] (SOLR-8994) EmbeddedSolrServer does not provide the httpMethod to the handler

2016-11-16 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-8994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15670848#comment-15670848
 ] 

Cédric Damioli commented on SOLR-8994:
--

This patch is very useful for our unit test. Could someone review and apply it?
Thanks in advance.

> EmbeddedSolrServer does not provide the httpMethod to the handler
> -
>
> Key: SOLR-8994
> URL: https://issues.apache.org/jira/browse/SOLR-8994
> Project: Solr
>  Issue Type: Bug
>Reporter: Nicolas Gavalda
>  Labels: embedded
> Attachments: SOLR-8994-EmbeddedSolrServer-httpMethod.patch
>
>
> The modification URIs of the schema API don't work when using an 
> EmbeddedSolrServer: the SchemaHandler verifies that modification requests are 
> POST, and the EmbeddedSolrServer doesn't transmit this information.






[jira] [Closed] (SOLR-8822) Implement DatabaseMetaDataImpl.getPrimaryKeys(String catalog, String schema, String table)

2016-11-16 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden closed SOLR-8822.
--
Resolution: Invalid
  Assignee: Kevin Risden

> Implement DatabaseMetaDataImpl.getPrimaryKeys(String catalog, String schema, 
> String table)
> --
>
> Key: SOLR-8822
> URL: https://issues.apache.org/jira/browse/SOLR-8822
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: 6.0
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>







[jira] [Closed] (SOLR-8817) SolrJ JDBC - DbVisualizer info about data types

2016-11-16 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden closed SOLR-8817.
--
Resolution: Invalid
  Assignee: Kevin Risden

SOLR-8593 should handle this. If still an issue can revisit later.

> SolrJ JDBC - DbVisualizer info about data types
> ---
>
> Key: SOLR-8817
> URL: https://issues.apache.org/jira/browse/SOLR-8817
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: 6.0
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>
> After connecting, clicking the Data Types tab (after double-clicking the 
> connection name) should show info about data types.






[jira] [Closed] (SOLR-8816) SolrJ JDBC - DbVisualizer Database Info Keywords and Functions

2016-11-16 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden closed SOLR-8816.
--
Resolution: Invalid
  Assignee: Kevin Risden

SOLR-8593 should handle this. If still an issue can revisit later.

> SolrJ JDBC - DbVisualizer Database Info Keywords and Functions
> --
>
> Key: SOLR-8816
> URL: https://issues.apache.org/jira/browse/SOLR-8816
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: 6.0
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>
> After connecting, clicking the Database Info tab (after double-clicking the 
> connection name) should show info about supported Keywords and Functions.






[jira] [Closed] (SOLR-8815) SolrJ JDBC - DBVisualizer DB Capabilities

2016-11-16 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden closed SOLR-8815.
--
Resolution: Invalid

SOLR-8593 should handle this. If still an issue can revisit later.

> SolrJ JDBC - DBVisualizer DB Capabilities
> -
>
> Key: SOLR-8815
> URL: https://issues.apache.org/jira/browse/SOLR-8815
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: 6.0
>Reporter: Kevin Risden
>
> With DBVisualizer, after connecting, click the Database Info tab (after 
> double-clicking the connection name) and ensure that the DB capabilities are 
> correct.






[jira] [Resolved] (LUCENE-7537) Add multi valued field support to index sorting

2016-11-16 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-7537.

Resolution: Fixed

Thanks [~jim.ferenczi]!

> Add multi valued field support to index sorting
> ---
>
> Key: LUCENE-7537
> URL: https://issues.apache.org/jira/browse/LUCENE-7537
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index
>Reporter: Ferenczi Jim
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7537.patch, LUCENE-7537.patch, LUCENE-7537.patch, 
> LUCENE-7537.patch, LUCENE-7537.patch
>
>
> Today index sorting can be done on single-valued fields through 
> NumericDocValues (for numerics) and SortedDocValues (for strings).
> I'd like to add the ability to sort on multi-valued fields. Since index 
> sorting does not accept a custom comparator, we could just take the minimum 
> value of each document for an ascending sort and the maximum value for a 
> descending sort.
> This way we could handle all cases instead of throwing an exception during a 
> merge when we encounter multi-valued DVs.
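The MIN/MAX rule in the description reduces each document's multiple values to one comparable key. A plain-Java sketch of that reduction (illustrative only; the committed change exposes this through sort selectors such as {{SortedNumericSelector}}):

```java
// Illustration of reducing a multi-valued field to one sort key per document:
// MIN of the values for an ascending sort, MAX for a descending sort.
public class MultiValuedSortKey {
    static long sortKey(long[] values, boolean descending) {
        long key = values[0];
        for (long v : values) {
            key = descending ? Math.max(key, v) : Math.min(key, v);
        }
        return key;
    }

    public static void main(String[] args) {
        long[] docValues = {5L, 2L, 9L};
        System.out.println(sortKey(docValues, false)); // 2 (ascending uses MIN)
        System.out.println(sortKey(docValues, true));  // 9 (descending uses MAX)
    }
}
```

Picking MIN for ascending (and MAX for descending) preserves the invariant that a document sorts no later than any of its individual values would, so merges never need a per-value comparator.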






[jira] [Commented] (LUCENE-7537) Add multi valued field support to index sorting

2016-11-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15670746#comment-15670746
 ] 

ASF subversion and git services commented on LUCENE-7537:
-

Commit e357f957f3059add5582b9695f838794c386dcad in lucene-solr's branch 
refs/heads/branch_6x from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e357f95 ]

LUCENE-7537: Index time sorting now supports multi-valued sorts using selectors 
(MIN, MAX, etc.)


> Add multi valued field support to index sorting
> ---
>
> Key: LUCENE-7537
> URL: https://issues.apache.org/jira/browse/LUCENE-7537
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index
>Reporter: Ferenczi Jim
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7537.patch, LUCENE-7537.patch, LUCENE-7537.patch, 
> LUCENE-7537.patch, LUCENE-7537.patch
>
>
> Today index sorting can be done on single-valued fields through 
> NumericDocValues (for numerics) and SortedDocValues (for strings).
> I'd like to add the ability to sort on multi-valued fields. Since index 
> sorting does not accept a custom comparator, we could just take the minimum 
> value of each document for an ascending sort and the maximum value for a 
> descending sort.
> This way we could handle all cases instead of throwing an exception during a 
> merge when we encounter multi-valued DVs.






[jira] [Commented] (LUCENE-7537) Add multi valued field support to index sorting

2016-11-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15670747#comment-15670747
 ] 

ASF subversion and git services commented on LUCENE-7537:
-

Commit 64b9eefaa931b4fc8b2345e2307eff4a317e3450 in lucene-solr's branch 
refs/heads/branch_6x from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=64b9eef ]

LUCENE-7537: fix some 6.x backport issues


> Add multi valued field support to index sorting
> ---
>
> Key: LUCENE-7537
> URL: https://issues.apache.org/jira/browse/LUCENE-7537
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index
>Reporter: Ferenczi Jim
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7537.patch, LUCENE-7537.patch, LUCENE-7537.patch, 
> LUCENE-7537.patch, LUCENE-7537.patch
>
>
> Today index sorting can be done on a single-valued field through 
> NumericDocValues (for numerics) and SortedDocValues (for strings).
> I'd like to add the ability to sort on multi-valued fields. Since index 
> sorting does not accept a custom comparator, we could just take the minimum 
> value of each document for an ascending sort and the maximum value for a 
> descending sort.
> This way we could handle all cases instead of throwing an exception during a 
> merge when we encounter multi-valued DVs. 






[JENKINS] Lucene-Solr-6.x-Windows (64bit/jdk1.8.0_102) - Build # 560 - Still Unstable!

2016-11-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/560/
Java: 64bit/jdk1.8.0_102 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDynamicLoading

Error Message:
Could not get expected value  'X val' for path 'x' full output: {   
"responseHeader":{ "status":0, "QTime":0},   "params":{"wt":"json"},   
"context":{ "webapp":"", "path":"/test1", "httpMethod":"GET"},   
"class":"org.apache.solr.core.BlobStoreTestRequestHandler",   "x":null},  from 
server:  null

Stack Trace:
java.lang.AssertionError: Could not get expected value  'X val' for path 'x' 
full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "params":{"wt":"json"},
  "context":{
"webapp":"",
"path":"/test1",
"httpMethod":"GET"},
  "class":"org.apache.solr.core.BlobStoreTestRequestHandler",
  "x":null},  from server:  null
at 
__randomizedtesting.SeedInfo.seed([75FEB5434106CD92:ADB39814B6DB6832]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:535)
at 
org.apache.solr.core.TestDynamicLoading.testDynamicLoading(TestDynamicLoading.java:232)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Updated] (SOLR-9778) Make luceneMatchVersion handling easy/automatic

2016-11-16 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-9778:
---
Summary: Make luceneMatchVersion handling easy/automatic  (was: improve 
luceneMatchVersion handling)

> Make luceneMatchVersion handling easy/automatic
> ---
>
> Key: SOLR-9778
> URL: https://issues.apache.org/jira/browse/SOLR-9778
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>
> I was thinking about luceneMatchVersion and how it's annoying to explain, 
> get right, and maintain.  I think there's a way in Solr we can do this 
> way better:
> When an index is initialized, record the luceneMatchVersion in effect into a 
> file in the data directory, like luceneMatchVersion.txt.  It's a file that 
> will never be modified.
> The luceneMatchVersion in effect is the first of these that are specified:
> * {{<luceneMatchVersion>}} in solrconfig.xml
> * data/luceneMatchVersion.txt 
> * {{org.apache.lucene.util.Version.LATEST}}
> With this approach, we can eliminate putting {{<luceneMatchVersion>}} into 
> solrconfig.xml by default.  Most users will have no need to bother setting 
> it, even during an upgrade of either an existing index, or when they 
> re-index.  Of course there are cases where the user knows what they are doing 
> and insists on a different luceneMatchVersion, and they can specify that 
> still.
> Perhaps instead of a new file (data/luceneMatchVersion.txt), it might go into 
> core.properties.  I dunno.
> _(disclaimer: as I write this, I have no plans to work on this at the moment)_






[jira] [Created] (SOLR-9778) improve luceneMatchVersion handling

2016-11-16 Thread David Smiley (JIRA)
David Smiley created SOLR-9778:
--

 Summary: improve luceneMatchVersion handling
 Key: SOLR-9778
 URL: https://issues.apache.org/jira/browse/SOLR-9778
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: David Smiley


I was thinking about luceneMatchVersion and how it's annoying to explain, 
get right, and maintain.  I think there's a way in Solr we can do this way 
better:

When an index is initialized, record the luceneMatchVersion in effect into a 
file in the data directory, like luceneMatchVersion.txt.  It's a file that will 
never be modified.

The luceneMatchVersion in effect is the first of these that are specified:
* {{<luceneMatchVersion>}} in solrconfig.xml
* data/luceneMatchVersion.txt 
* {{org.apache.lucene.util.Version.LATEST}}

With this approach, we can eliminate putting {{<luceneMatchVersion>}} into 
solrconfig.xml by default.  Most users will have no need to bother setting it, 
even during an upgrade of either an existing index, or when they re-index.  Of 
course there are cases where the user knows what they are doing and insists on 
a different luceneMatchVersion, and they can specify that still.

Perhaps instead of a new file (data/luceneMatchVersion.txt), it might go into 
core.properties.  I dunno.

_(disclaimer: as I write this, I have no plans to work on this at the moment)_
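The precedence list above is a first-match-wins lookup. A minimal sketch, assuming a plain-text data/luceneMatchVersion.txt (all names here are illustrative, not actual Solr API):

```python
import os

LATEST = "7.0.0"  # stand-in for org.apache.lucene.util.Version.LATEST

def resolve_lucene_match_version(configured=None, data_dir="data"):
    """Return the first luceneMatchVersion that is specified:
    the solrconfig.xml value, then the recorded file, then LATEST."""
    if configured:                 # explicit setting in solrconfig.xml wins
        return configured
    recorded = os.path.join(data_dir, "luceneMatchVersion.txt")
    if os.path.exists(recorded):   # written once when the index was initialized
        with open(recorded) as f:
            return f.read().strip()
    return LATEST                  # fresh index, nothing specified
```

Recording the version at index initialization is what lets the second step keep an existing index consistent across upgrades without any user action.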






[jira] [Commented] (SOLR-9284) The HDFS BlockDirectoryCache should not let it's keysToRelease or names maps grow indefinitely.

2016-11-16 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15670664#comment-15670664
 ] 

Mark Miller commented on SOLR-9284:
---

bq. It probably just matters what is running in parallel and also eating away at 
the artificial direct memory governor.

Although I suppose anything in parallel is in its own JVM and should have its 
own limit. Perhaps a lack of releasing direct memory somewhere, then.

> The HDFS BlockDirectoryCache should not let it's keysToRelease or names maps 
> grow indefinitely.
> ---
>
> Key: SOLR-9284
> URL: https://issues.apache.org/jira/browse/SOLR-9284
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: hdfs
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9284.patch, SOLR-9284.patch
>
>
> https://issues.apache.org/jira/browse/SOLR-9284






[jira] [Commented] (SOLR-9284) The HDFS BlockDirectoryCache should not let it's keysToRelease or names maps grow indefinitely.

2016-11-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15670660#comment-15670660
 ] 

ASF subversion and git services commented on SOLR-9284:
---

Commit 6962381180c7c9d26f22fb09b3b673f2a9f8ef7b in lucene-solr's branch 
refs/heads/branch_6x from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6962381 ]

SOLR-9284: Reduce off heap cache size.


> The HDFS BlockDirectoryCache should not let it's keysToRelease or names maps 
> grow indefinitely.
> ---
>
> Key: SOLR-9284
> URL: https://issues.apache.org/jira/browse/SOLR-9284
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: hdfs
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9284.patch, SOLR-9284.patch
>
>
> https://issues.apache.org/jira/browse/SOLR-9284






[jira] [Commented] (SOLR-9284) The HDFS BlockDirectoryCache should not let it's keysToRelease or names maps grow indefinitely.

2016-11-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15670659#comment-15670659
 ] 

ASF subversion and git services commented on SOLR-9284:
---

Commit 53a0748f4345b540da598c25500f4fc402dbbf38 in lucene-solr's branch 
refs/heads/master from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=53a0748 ]

SOLR-9284: Reduce off heap cache size.


> The HDFS BlockDirectoryCache should not let it's keysToRelease or names maps 
> grow indefinitely.
> ---
>
> Key: SOLR-9284
> URL: https://issues.apache.org/jira/browse/SOLR-9284
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: hdfs
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9284.patch, SOLR-9284.patch
>
>
> https://issues.apache.org/jira/browse/SOLR-9284






[jira] [Commented] (SOLR-4533) Synonyms, created in custom filters are ignored after tokenizers.

2016-11-16 Thread Artem Lukanin (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15670656#comment-15670656
 ] 

Artem Lukanin commented on SOLR-4533:
-

Sorry for not providing a test case 3 years ago, but I'm not in the Solr world 
anymore, so I cannot check it. If nobody was interested in the patch, I guess it 
is useless now.

> Synonyms, created in custom filters are ignored after tokenizers.
> -
>
> Key: SOLR-4533
> URL: https://issues.apache.org/jira/browse/SOLR-4533
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.1
>Reporter: Artem Lukanin
> Attachments: synonyms.patch
>
>
> If a synonym token is added in a custom filter (e.g. this one: 
> http://solr.pl/en/2013/02/04/developing-your-own-solr-filter-part-2/) and the 
> default operator is AND, the query becomes broken, because 2 tokens at the 
> same position become required, which is impossible. Solution: place all 
> synonyms in a separate clause and assign these tokens occur=SHOULD.
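The fix described above amounts to restructuring the Boolean query: instead of two MUST tokens at one position, the synonyms become one required clause of SHOULD alternatives. A hedged sketch of that structure (plain dicts, not Solr's actual query builder):

```python
# With default operator AND, two tokens at the same position must not both
# be MUST (no document can satisfy that). Group same-position synonyms
# into a single required clause whose alternatives are SHOULD.

def build_query(positions):
    """positions: one list of tokens per token position."""
    clauses = []
    for tokens in positions:
        if len(tokens) == 1:
            clauses.append({"occur": "MUST", "term": tokens[0]})
        else:  # synonyms: one required clause of SHOULD alternatives
            clauses.append({"occur": "MUST",
                            "should": [{"occur": "SHOULD", "term": t}
                                       for t in tokens]})
    return clauses
```

Any one synonym can now satisfy its position, while every position overall remains required.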






[jira] [Commented] (SOLR-9284) The HDFS BlockDirectoryCache should not let it's keysToRelease or names maps grow indefinitely.

2016-11-16 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15670646#comment-15670646
 ] 

Mark Miller commented on SOLR-9284:
---

It probably just matters what is running in parallel and also eating away at 
the artificial direct memory governor.

> The HDFS BlockDirectoryCache should not let it's keysToRelease or names maps 
> grow indefinitely.
> ---
>
> Key: SOLR-9284
> URL: https://issues.apache.org/jira/browse/SOLR-9284
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: hdfs
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9284.patch, SOLR-9284.patch
>
>
> https://issues.apache.org/jira/browse/SOLR-9284






[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_102) - Build # 2195 - Unstable!

2016-11-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2195/
Java: 64bit/jdk1.8.0_102 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Expected 2 of 3 replicas to be active but only found 1; 
[core_node2:{"core":"c8n_1x3_lf_shard1_replica3","base_url":"http://127.0.0.1:44845","node_name":"127.0.0.1:44845_","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf//clusterstate.json/30)={   
"replicationFactor":"3",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node1":{   "state":"down",   
"base_url":"http://127.0.0.1:35779",   
"core":"c8n_1x3_lf_shard1_replica2",   "node_name":"127.0.0.1:35779_"}, 
"core_node2":{   "core":"c8n_1x3_lf_shard1_replica3",   
"base_url":"http://127.0.0.1:44845",   "node_name":"127.0.0.1:44845_",  
 "state":"active",   "leader":"true"}, "core_node3":{   
"core":"c8n_1x3_lf_shard1_replica1",   
"base_url":"http://127.0.0.1:37579",   "node_name":"127.0.0.1:37579_",  
 "state":"down",   "router":{"name":"compositeId"},   
"maxShardsPerNode":"1",   "autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected 2 of 3 replicas to be active but only found 
1; 
[core_node2:{"core":"c8n_1x3_lf_shard1_replica3","base_url":"http://127.0.0.1:44845","node_name":"127.0.0.1:44845_","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf//clusterstate.json/30)={
  "replicationFactor":"3",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node1":{
  "state":"down",
  "base_url":"http://127.0.0.1:35779",
  "core":"c8n_1x3_lf_shard1_replica2",
  "node_name":"127.0.0.1:35779_"},
"core_node2":{
  "core":"c8n_1x3_lf_shard1_replica3",
  "base_url":"http://127.0.0.1:44845",
  "node_name":"127.0.0.1:44845_",
  "state":"active",
  "leader":"true"},
"core_node3":{
  "core":"c8n_1x3_lf_shard1_replica1",
  "base_url":"http://127.0.0.1:37579",
  "node_name":"127.0.0.1:37579_",
  "state":"down",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
at 
__randomizedtesting.SeedInfo.seed([223ADF509CA06930:AA6EE08A325C04C8]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:168)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:55)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)

[jira] [Commented] (LUCENE-5012) Make graph-based TokenFilters easier

2016-11-16 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15670617#comment-15670617
 ] 

David Smiley commented on LUCENE-5012:
--

Seems very promising. Is LUCENE-2450 a dependency of this issue?  There's no 
dependency JIRA issue link, but the first comment suggests it is.

> Make graph-based TokenFilters easier
> 
>
> Key: LUCENE-5012
> URL: https://issues.apache.org/jira/browse/LUCENE-5012
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Attachments: LUCENE-5012.patch
>
>
> SynonymFilter has two limitations today:
>   * It cannot create positions, so eg dns -> domain name service
> creates blatantly wrong highlights (SOLR-3390, LUCENE-4499 and
> others).
>   * It cannot consume a graph, so e.g. if you try to apply synonyms
> after Kuromoji tokenizer I'm not sure what will happen.
> I've thought about how to fix these issues but it's really quite
> difficult with the current PosInc/PosLen graph representation, so I'd
> like to explore an alternative approach.






[jira] [Commented] (SOLR-4533) Synonyms, created in custom filters are ignored after tokenizers.

2016-11-16 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15670581#comment-15670581
 ] 

David Smiley commented on SOLR-4533:


Is this still an issue?  A test is quite necessary to demonstrate that there's 
a problem this patch fixes.

> Synonyms, created in custom filters are ignored after tokenizers.
> -
>
> Key: SOLR-4533
> URL: https://issues.apache.org/jira/browse/SOLR-4533
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.1
>Reporter: Artem Lukanin
> Attachments: synonyms.patch
>
>
> If a synonym token is added in a custom filter (e.g. this one: 
> http://solr.pl/en/2013/02/04/developing-your-own-solr-filter-part-2/) and the 
> default operator is AND, the query becomes broken, because 2 tokens at the 
> same position become required, which is impossible. Solution: place all 
> synonyms in a separate clause and assign these tokens occur=SHOULD.






[jira] [Commented] (LUCENE-2450) Explore write-once attr bindings in the analysis chain

2016-11-16 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15670481#comment-15670481
 ] 

David Smiley commented on LUCENE-2450:
--

I really like the ideas here!  It would make capture/restore cheaper.  Some 
filters, like WordDelimiterFilter, don't use capture/restore, I think, in the 
name of efficiency, but then they only know about certain built-in attributes, 
not custom ones people add.  The heavy-weight aspect of capture/restore is my 
main beef with the current design.

> Explore write-once attr bindings in the analysis chain
> --
>
> Key: LUCENE-2450
> URL: https://issues.apache.org/jira/browse/LUCENE-2450
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Michael McCandless
>  Labels: gsoc2014
> Attachments: LUCENE-2450.patch, LUCENE-2450.patch, pipeline.py
>
>
> I'd like to propose a new means of tracking attrs through the analysis
> chain, whereby a given stage in the pipeline cannot overwrite attrs
> from stages before it (write once).  It can only write to new attrs
> (possibly w/ the same name) that future stages can see; it can never
> alter the attrs or bindings from the prior stages.
> I coded up a prototype chain in python (I'll attach), showing the
> equivalent of WhitespaceTokenizer -> StopFilter -> SynonymFilter ->
> Indexer.
> Each stage "sees" a frozen namespace of attr bindings as its input;
> these attrs are all read-only from its standpoint.  Then, it writes to
> an "output namespace", which is read/write, eg it can add new attrs,
> remove attrs from its input, change the values of attrs.  If that
> stage doesn't alter a given attr it "passes through", unchanged.
> This would be an enormous change to how attrs are managed... so this
> is very very exploratory at this point.  Once we decouple indexer from
> analysis, creating such an alternate chain should be possible -- it'd
> at least be a good test that we've decoupled enough :)
> I think the idea offers some compelling improvements over the "global
> read/write namespace" (AttrFactory) approach we have today:
>   * Injection filters can be more efficient -- they need not
> capture/restoreState at all
>   * No more need for the initial tokenizer to "clear all attrs" --
> each stage becomes responsible for clearing the attrs it "owns"
>   * You can truly stack stages (vs having to make a custom
> AttrFactory) -- eg you could make a Bocu1 stage which can stack
> onto any other stage.  It'd look up the CharTermAttr, remove it
> from its output namespace, and add a BytesRefTermAttr.
>   * Indexer should be more efficient, in that it doesn't need to
> re-get the attrs on each next() -- it gets them up front, and
> re-uses them.
> Note that in this model, the indexer itself is just another stage in
> the pipeline, so you could do some wild things like use 2 indexer
> stages (writing to different indexes, or maybe the same index but
> somehow with further processing or something).
> Also, in this approach, the analysis chain is more informed about the
> what each stage is allowed to change, up front after the chain is
> created.  EG (say) we will know that only 2 stages write to the term
> attr, and that only 1 writes posIncr/offset attrs, etc.  Not sure
> if/how this helps us... but it's more strongly typed than what we have
> today.
> I think we could use a similar chain for processing a document at the
> field level, ie, different stages could add/remove/change different
> fields in the doc
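The frozen-input / writable-output idea can be sketched as a toy pipeline (illustrative names only; the attached pipeline.py is the issue's own prototype):

```python
from types import MappingProxyType

class Stage:
    """A pipeline stage: reads a frozen attr namespace, writes a new one."""
    def process(self, attrs):
        raise NotImplementedError

class Lowercase(Stage):
    def process(self, attrs):
        out = dict(attrs)                    # unchanged attrs pass through
        out["term"] = attrs["term"].lower()  # new binding; input untouched
        return out

def run_pipeline(stages, attrs):
    for stage in stages:
        frozen = MappingProxyType(attrs)  # read-only view: write-once enforced
        attrs = stage.process(frozen)     # stage writes only its output
    return attrs

result = run_pipeline([Lowercase()], {"term": "HeLLo", "posInc": 1})
```

Because each stage only ever sees a read-only view of its input, an injection filter never needs a captureState/restoreState dance: it just adds bindings to its output namespace.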






[jira] [Commented] (SOLR-9284) The HDFS BlockDirectoryCache should not let it's keysToRelease or names maps grow indefinitely.

2016-11-16 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15670479#comment-15670479
 ] 

Steve Rowe commented on SOLR-9284:
--

Three more seeds, but none reproduce for me - note that all three include an 
NPE as a second Throwable, which I just noticed in the trace in my previous 
comment here:

From [https://builds.apache.org/job/Lucene-Solr-SmokeRelease-6.x/182]:

{noformat}
  [smoker][junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=BlockDirectoryTest -Dtests.method=testEOF 
-Dtests.seed=9A2D36FC5487E440 -Dtests.multiplier=2 -Dtests.locale=hi 
-Dtests.timezone=America/North_Dakota/Beulah -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1
   [smoker][junit4] ERROR   1.66s J1 | BlockDirectoryTest.testEOF <<<
   [smoker][junit4]> Throwable #1: java.lang.OutOfMemoryError: Direct 
buffer memory
   [smoker][junit4]>at java.nio.Bits.reserveMemory(Bits.java:693)
   [smoker][junit4]>at 
java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
   [smoker][junit4]>at 
java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
   [smoker][junit4]>at 
org.apache.solr.store.blockcache.BlockCache.<init>(BlockCache.java:68)
   [smoker][junit4]>at 
org.apache.solr.store.blockcache.BlockDirectoryTest.setUp(BlockDirectoryTest.java:119)
   [smoker][junit4]>at 
java.lang.Thread.run(Thread.java:745)Throwable #2: 
java.lang.NullPointerException
   [smoker][junit4]>at 
org.apache.solr.store.blockcache.BlockDirectoryTest.tearDown(BlockDirectoryTest.java:131)
{noformat}

From my Jenkins on branch_6.x:

{noformat}
  [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=BlockDirectoryTest 
-Dtests.method=testRandomAccessWritesLargeCache -Dtests.seed=79BD96B775734799 
-Dtests.slow=true -Dtests.locale=ar-TN -Dtests.timezone=Europe/Lisbon 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
  [junit4] ERROR   1.95s J7  | 
BlockDirectoryTest.testRandomAccessWritesLargeCache <<<
  [junit4]> Throwable #1: java.lang.OutOfMemoryError: Direct buffer memory
  [junit4]> at java.nio.Bits.reserveMemory(Bits.java:693)
  [junit4]> at 
java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
  [junit4]> at 
java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
  [junit4]> at 
org.apache.solr.store.blockcache.BlockCache.<init>(BlockCache.java:68)
  [junit4]> at 
org.apache.solr.store.blockcache.BlockDirectoryTest.setUp(BlockDirectoryTest.java:119)
  [junit4]> at java.lang.Thread.run(Thread.java:745)Throwable #2: 
java.lang.NullPointerException
  [junit4]> at 
org.apache.solr.store.blockcache.BlockDirectoryTest.tearDown(BlockDirectoryTest.java:131)
{noformat}

And from my Jenkins on master:

{noformat}
  [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=BlockDirectoryTest 
-Dtests.method=testRandomAccessWrites -Dtests.seed=39545A949FB2DD31 
-Dtests.slow=true -Dtests.locale=sr-ME -Dtests.timezone=America/Indiana/Vevay 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8
  [junit4] ERROR   0.86s J7  | BlockDirectoryTest.testRandomAccessWrites <<<
  [junit4]> Throwable #1: java.lang.OutOfMemoryError: Direct buffer memory
  [junit4]> at java.nio.Bits.reserveMemory(Bits.java:693)
  [junit4]> at 
java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
  [junit4]> at 
java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
  [junit4]> at 
org.apache.solr.store.blockcache.BlockCache.<init>(BlockCache.java:68)
  [junit4]> at 
org.apache.solr.store.blockcache.BlockDirectoryTest.setUp(BlockDirectoryTest.java:119)
  [junit4]> at java.lang.Thread.run(Thread.java:745)Throwable #2: 
java.lang.NullPointerException
  [junit4]> at 
org.apache.solr.store.blockcache.BlockDirectoryTest.tearDown(BlockDirectoryTest.java:131)
{noformat}

> The HDFS BlockDirectoryCache should not let it's keysToRelease or names maps 
> grow indefinitely.
> ---
>
> Key: SOLR-9284
> URL: https://issues.apache.org/jira/browse/SOLR-9284
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: hdfs
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9284.patch, SOLR-9284.patch
>
>
> https://issues.apache.org/jira/browse/SOLR-9284






[jira] [Commented] (LUCENE-7398) Nested Span Queries are buggy

2016-11-16 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15670478#comment-15670478
 ] 

Michael McCandless commented on LUCENE-7398:


I can't quite tell from the comments/iterations here: is this latest patch 
ready to be committed, or are there still known problems?

Alternatively, should we maybe revert the lazy iteration change (LUCENE-6537) 
if it is the root cause that broke previous cases?

> Nested Span Queries are buggy
> -
>
> Key: LUCENE-7398
> URL: https://issues.apache.org/jira/browse/LUCENE-7398
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 5.5, 6.x
>Reporter: Christoph Goller
>Assignee: Alan Woodward
>Priority: Critical
> Attachments: LUCENE-7398-20160814.patch, LUCENE-7398-20160924.patch, 
> LUCENE-7398-20160925.patch, LUCENE-7398.patch, LUCENE-7398.patch, 
> LUCENE-7398.patch, TestSpanCollection.java
>
>
> Example for a nested SpanQuery that is not working:
> Document: Human Genome Organization , HUGO , is trying to coordinate gene 
> mapping research worldwide.
> Query: spanNear([body:coordinate, spanOr([spanNear([body:gene, body:mapping], 
> 0, true), body:gene]), body:research], 0, true)
> The query should match "coordinate gene mapping research" as well as 
> "coordinate gene research". It does not match  "coordinate gene mapping 
> research" with Lucene 5.5 or 6.1, it did however match with Lucene 4.10.4. It 
> probably stopped working with the changes on SpanQueries in 5.3. I will 
> attach a unit test that shows the problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6664) Replace SynonymFilter with SynonymGraphFilter

2016-11-16 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15670466#comment-15670466
 ] 

David Smiley commented on LUCENE-6664:
--

+1.  This should be experimental for now as people begin to try this out.

I don't think this patch abuses the semantics of posInc or posLen.  It's a 
shame most (all?) filters don't handle input posLen != 1 properly but that's a 
separate issue.

> Replace SynonymFilter with SynonymGraphFilter
> -
>
> Key: LUCENE-6664
> URL: https://issues.apache.org/jira/browse/LUCENE-6664
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Attachments: LUCENE-6664.patch, LUCENE-6664.patch, LUCENE-6664.patch, 
> LUCENE-6664.patch, usa.png, usa_flat.png
>
>
> Spinoff from LUCENE-6582.
> I created a new SynonymGraphFilter (to replace the current buggy
> SynonymFilter), that produces correct graphs (does no "graph
> flattening" itself).  I think this makes it simpler.
> This means you must add the FlattenGraphFilter yourself, if you are
> applying synonyms during indexing.
> Index-time syn expansion is a necessarily "lossy" graph transformation
> when multi-token (input or output) synonyms are applied, because the
> index does not store {{posLength}}, so there will always be phrase
> queries that should match but do not, and then phrase queries that
> should not match but do.
> http://blog.mikemccandless.com/2012/04/lucenes-tokenstreams-are-actually.html
> goes into detail about this.
> However, with this new SynonymGraphFilter, if instead you do synonym
> expansion at query time (and don't do the flattening), and you use
> TermAutomatonQuery (future: somehow integrated into a query parser),
> or maybe just "enumerate all paths and make union of PhraseQuery", you
> should get 100% correct matches (not sure about "proper" scoring
> though...).
> This new syn filter still cannot consume an arbitrary graph.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9774) Delta indexing with child documents with help of cacheImpl="SortedMapBackedCache"

2016-11-16 Thread Aniket Khare (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aniket Khare updated SOLR-9774:
---
Description: 
Hi,

I am using the Solr DIH for indexing Parent-Child relation data, with 
cacheImpl="SortedMapBackedCache".
For full data indexing I am using the command full-import with clean="true", 
and for delta indexing I am using full-import with clean="false".
So the same queries are executed for both full and delta indexing, and 
indexing works properly.
The issue we are facing is where, for a particular parent document, there is 
not a single child document and we are adding a new child document.
Following are the steps to reproduce the issue.

1. Add a child document to an existing parent document which currently has no 
child documents.
2. Once the child document is added with delta indexing, try to modify the 
parent document and run delta indexing again.
3. After the delta indexing is completed, I can see the modified child 
documents showing in the Solr DIH page in debug mode. But it is not getting 
updated in the Solr collection.

I am using the data config below.

  

  
  
  
  


  
 


  

  


  was:
Hi,

I am using solr DIH for indexing the Parent-Child relation data and using 
cacheImpl="SortedMapBackedCache".
For Full data indexinf I am using command clean="true" and for delta I am using 
command full-import and clean="false".
So the same queries are being executed for fulland delta and indexing working 
properly.
The issue which we are facing is where for a perticuler parent document, there 
not a single child document and we are adding new child document.
Following are the steps to reproduce the issue.

1. Add Child document to an existing parent document which is not having empty 
child document.
2. Once the child document is added with delta indexing, try to modify the 
parent document and run delta indexing again
3. After the delta indexing is completed, I can see the modified child 
documents showing in Solr DIH page in debug mode. But the it is not getting 
updated in Solr collection.

I am using data config as below as below.

  

  
  
  
  


  
 


  

  



> Delta indexing with child documents with help of 
> cacheImpl="SortedMapBackedCache"
> -
>
> Key: SOLR-9774
> URL: https://issues.apache.org/jira/browse/SOLR-9774
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - DataImportHandler, Data-driven Schema
>Affects Versions: 6.1
>Reporter: Aniket Khare
>  Labels: DIH, solr
>
> Hi,
> I am using the Solr DIH for indexing Parent-Child relation data, with 
> cacheImpl="SortedMapBackedCache".
> For full data indexing I am using the command full-import with clean="true", 
> and for delta indexing I am using full-import with clean="false".
> So the same queries are executed for both full and delta indexing, and 
> indexing works properly.
> The issue we are facing is where, for a particular parent document, there is 
> not a single child document and we are adding a new child document.
> Following are the steps to reproduce the issue.
> 1. Add a child document to an existing parent document which currently has 
> no child documents.
> 2. Once the child document is added with delta indexing, try to modify the 
> parent document and run delta indexing again.
> 3. After the delta indexing is completed, I can see the modified child 
> documents showing in the Solr DIH page in debug mode. But it is not getting 
> updated in the Solr collection.
> I am using the data config below.
>   
> 
>   
>   
>   
>  cacheKey="id" cacheLookup="Parent.id" processor="SqlEntityProcessor" 
> cacheImpl="SortedMapBackedCache">
> 
> 
>   
> cacheKey="PID" cacheLookup="Parent.id" processor="SqlEntityProcessor" 
> cacheImpl="SortedMapBackedCache" child="true">
> 
>   
>   
> 
>   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9506) cache IndexFingerprint for each segment

2016-11-16 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15667956#comment-15667956
 ] 

Ishan Chattopadhyaya edited comment on SOLR-9506 at 11/16/16 1:52 PM:
--

Can we resolve this issue, since it seems it was released as part of 6.3.0? (-I 
will open another issue for the issue I wrote about two comments before- Added 
SOLR-9777).


was (Author: ichattopadhyaya):
Can we resolve this issue, since it seems it was released as part of 6.3.0? (I 
will open another issue for the issue I wrote about two comments before).

> cache IndexFingerprint for each segment
> ---
>
> Key: SOLR-9506
> URL: https://issues.apache.org/jira/browse/SOLR-9506
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
> Attachments: SOLR-9506-combined-deletion-key.patch, SOLR-9506.patch, 
> SOLR-9506.patch, SOLR-9506.patch, SOLR-9506.patch, SOLR-9506_POC.patch, 
> SOLR-9506_final.patch
>
>
> The IndexFingerprint is cached per index searcher. It is quite useless during 
> high throughput indexing. If the fingerprint is cached per segment it will 
> make it vastly more efficient to compute the fingerprint.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-5944) Support updates of numeric DocValues

2016-11-16 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15667874#comment-15667874
 ] 

Ishan Chattopadhyaya edited comment on SOLR-5944 at 11/16/16 1:51 PM:
--

Added another patch. The PeerSyncTest was failing, due to fingerprint caching 
issue. This patch now depends on -SOLR-9506's 
"SOLR-9506-combined-deletion-key.patch"- SOLR-9777 patch.


was (Author: ichattopadhyaya):
Added another patch. The PeerSyncTest was failing, due to fingerprint caching 
issue. This patch now depends on SOLR-9506's 
"SOLR-9506-combined-deletion-key.patch" patch.

> Support updates of numeric DocValues
> 
>
> Key: SOLR-5944
> URL: https://issues.apache.org/jira/browse/SOLR-5944
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Shalin Shekhar Mangar
> Attachments: DUP.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, 
> TestStressInPlaceUpdates.eb044ac71.beast-167-failure.stdout.txt, 
> TestStressInPlaceUpdates.eb044ac71.beast-587-failure.stdout.txt, 
> TestStressInPlaceUpdates.eb044ac71.failures.tar.gz, defensive-checks.log.gz, 
> hoss.62D328FA1DEA57FD.fail.txt, hoss.62D328FA1DEA57FD.fail2.txt, 
> hoss.62D328FA1DEA57FD.fail3.txt, hoss.D768DD9443A98DC.fail.txt, 
> hoss.D768DD9443A98DC.pass.txt
>
>
> LUCENE-5189 introduced support for updates to numeric docvalues. It would be 
> really nice to have Solr support this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-5944) Support updates of numeric DocValues

2016-11-16 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547986#comment-15547986
 ] 

Ishan Chattopadhyaya edited comment on SOLR-5944 at 11/16/16 1:50 PM:
--

Updated patch, brought up to master. Here are my replies inline (not all of 
them, but I'll keep editing this comment to provide all replies).


Ok -- it took a while, but here's my notes after reviewing the latest patch

{panel:title=DistributedUpdateProcessor}

* waitForDependentUpdates {color:green}FIXED{color}
** I know you & shalin went back and forth a bit on the wait call (ie: 
wait(100) with max retries vs wait(5000)) but i think the way things settled 
out {{bucket.wait(waitTimeout.timeLeft(TimeUnit.MILLISECONDS));}} would be 
better than a generic {{wait(5000)}}
*** consider the scenario where: the dependent update is never going to come; a 
spurious notify/wake happens during the first "wait" call @ 4950ms; the 
lookupVersion call takes 45ms.  Now we've only got 5ms left on our original 
TimeOut, but we _could_ wind up "wait"ing another full 5s (total of 10s) unless 
we get another spurious notify/wake in the meantime.
** {{log.info("Fetched the update: " + missingUpdate);}} that's a really good 
candidate for templating since the AddUpdateCommand.toString() could be 
expensive if log.info winds up being a no-op (ie: {{log.info("Fetched the 
update: \{\}", missingUpdate);}})

* fetchMissingUpdateFromLeader
** In response to a previous question you said...{quote}
[FIXED. Initially, I wanted to fetch all missing updates, i.e. from what we 
have till what we want. Noble suggested that fetching only one at a time makes 
more sense.]
{quote} ... but from what i can tell skimming RTGC.processGetUpdates() it's 
still possible that multiple updates will be returned, notably in the case 
where: {{// Must return all delete-by-query commands that occur after the first 
add requested}}.  How is that possibility handled in the code paths that use 
fetchMissingUpdateFromLeader?
*** that seems like a scenario that would be really easy to test for -- similar 
to how outOfOrderDeleteUpdatesIndividualReplicaTest works
** {{assert ((List) missingUpdates).size() == 1: "More than 1 update ...}}
*** based on my skimming of the code, an empty list is just as possible, so the 
assertion is misleading (ideally it should say how many updates it got, or 
maybe toString() the whole List ?)

{panel}
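[Editor's note: the deadline-bounded wait discussed above can be sketched with plain JDK monitors. This is a hypothetical standalone helper illustrating the pattern, not Solr's actual VersionBucket code; all names here are invented for illustration.]

```java
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

// Sketch of a spurious-wakeup-safe, deadline-bounded wait: each wait() call is
// capped by the time remaining on the ORIGINAL deadline, so a spurious wakeup
// followed by a retry can never stretch the total wait past the timeout -- the
// failure mode described above for a fixed wait(5000).
public class DeadlineWait {

  /** Waits on lock until condition is true or the deadline passes; returns
   *  true if the condition was met, false on timeout. */
  public static boolean awaitUntil(Object lock, BooleanSupplier condition,
                                   long timeoutMillis) throws InterruptedException {
    final long deadlineNanos = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMillis);
    synchronized (lock) {
      while (!condition.getAsBoolean()) {
        long leftMillis = TimeUnit.NANOSECONDS.toMillis(deadlineNanos - System.nanoTime());
        if (leftMillis <= 0) {
          return false; // deadline passed, even across spurious wakeups
        }
        // Wait at most the remaining time; never a fixed 5000ms. Note the
        // guard above also avoids wait(0), which in Java means "wait forever".
        lock.wait(leftMillis);
      }
      return true;
    }
  }
}
```

With a condition that never becomes true and no notify, the total time spent is bounded by the single original timeout regardless of how often the thread wakes.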


{panel:title=AtomicUpdateDocumentMerger}

* isSupportedFieldForInPlaceUpdate
** javadocs

* getFieldNamesFromIndex
** javadocs
** method name seems VERY misleading considering what it does 
{color:green}Changed it to getSearcherNonStoredDVs{color}

* isInPlaceUpdate
** javadocs should be clear what happens to inPlaceUpdatedFields if result is 
false (even if the answer is "undefined")
** based on usage, wouldn't it be simpler if instead of returning a boolean, 
this method just returned a (new) Set of inplace update fields found, and if 
the set is empty that means it's not an in place update? 
{color:green}FIXED{color}
** isn't getFieldNamesFromIndex kind of an expensive method to call on every 
AddUpdateCommand ?
*** couldn't this list of fields be created by the caller and re-used at least 
for the entire request (ie: when adding multiple docs) ? {color:green}The set 
returned is precomputed upon the opening of a searcher. The only cost I see is 
to create a new unmodifiableSet every time. I'd prefer to take up this 
optimization later, if needed.{color}
** {{if (indexFields.contains(fieldName) == false && 
schema.isDynamicField(fieldName))}}
*** why does it matter one way or the other if it's a dynamicField? 
{color:green}Changed the logic to check in the IW for presence of field. Added 
a comment: "// if dynamic field and this field doesn't exist, DV update can't 
work"{color}
** the special {{DELETED}} sentinel value still isn't being checked against the 
return value of {{getInputDocumentFromTlog}} {color:green}Not using 
getInputDocumentFromTlog call anymore{color}
** this method still seems like it could/should do "cheaper" validation (ie: 
not requiring SchemaField object creation, or tlog lookups) first.  (Ex: the 
set of supported atomic ops are checked after isSupportedFieldForInPlaceUpdate 
& a possible read from the tlog). {color:green}FIXED{color}
*** My suggested rewrite would be something like...{code}
Set<String> candidateResults = new HashSet<>();
// first pass, check the things that are virtually free,
// and bail out early if anything is obviously not a valid in-place update
for (String fieldName : sdoc.getFieldNames()) {
  if (schema.getUniqueKeyField().getName().equals(fieldName)
  || fieldName.equals(DistributedUpdateProcessor.VERSION_FIELD)) {
continue;
  }
  Object fieldValue = sdoc.getField(fieldName).getValue();
  if (! (fieldValue instanceof Map) ) {
// not even an atomic update, definitely not an 

[jira] [Updated] (SOLR-5944) Support updates of numeric DocValues

2016-11-16 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-5944:
---
Attachment: SOLR-5944.patch

Added another test, a specific unit test for AUDM's doInPlaceUpdateMerge().

> Support updates of numeric DocValues
> 
>
> Key: SOLR-5944
> URL: https://issues.apache.org/jira/browse/SOLR-5944
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Shalin Shekhar Mangar
> Attachments: DUP.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, 
> TestStressInPlaceUpdates.eb044ac71.beast-167-failure.stdout.txt, 
> TestStressInPlaceUpdates.eb044ac71.beast-587-failure.stdout.txt, 
> TestStressInPlaceUpdates.eb044ac71.failures.tar.gz, defensive-checks.log.gz, 
> hoss.62D328FA1DEA57FD.fail.txt, hoss.62D328FA1DEA57FD.fail2.txt, 
> hoss.62D328FA1DEA57FD.fail3.txt, hoss.D768DD9443A98DC.fail.txt, 
> hoss.D768DD9443A98DC.pass.txt
>
>
> LUCENE-5189 introduced support for updates to numeric docvalues. It would be 
> really nice to have Solr support this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9777) IndexFingerprinting: use getCombinedCoreAndDeletesKey() instead of getCoreCacheKey() for per-segment caching

2016-11-16 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-9777:
---
Attachment: SOLR-9777.patch

Adding the patch for this.

> IndexFingerprinting: use getCombinedCoreAndDeletesKey() instead of  
> getCoreCacheKey() for per-segment caching
> -
>
> Key: SOLR-9777
> URL: https://issues.apache.org/jira/browse/SOLR-9777
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-9777.patch
>
>
> [Note: Had initially posted to SOLR-9506, but now moved here]
> While working on SOLR-5944, I realized that the current per segment caching 
> logic works fine for deleted documents (due to comparison of numDocs in a 
> segment for the criterion of cache hit/miss). However, if a segment has 
> docValues updates, the same logic is insufficient. It is my understanding 
> that changing the key for caching from reader().getCoreCacheKey() to 
> reader().getCombinedCoreAndDeletesKey() would work here, since the docValues 
> updates are internally handled using deletion queue and hence the "combined" 
> core and deletes key would work here. Attaching a patch for the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_102) - Build # 6224 - Still Unstable!

2016-11-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6224/
Java: 64bit/jdk1.8.0_102 -XX:-UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.lucene.store.TestNIOFSDirectory

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestNIOFSDirectory_22641BB4A284E321-001\testThreadSafety-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestNIOFSDirectory_22641BB4A284E321-001\testThreadSafety-001

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestNIOFSDirectory_22641BB4A284E321-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestNIOFSDirectory_22641BB4A284E321-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestNIOFSDirectory_22641BB4A284E321-001\testThreadSafety-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestNIOFSDirectory_22641BB4A284E321-001\testThreadSafety-001
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestNIOFSDirectory_22641BB4A284E321-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestNIOFSDirectory_22641BB4A284E321-001

at __randomizedtesting.SeedInfo.seed([22641BB4A284E321]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:323)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  
org.apache.solr.handler.admin.CoreAdminHandlerTest.testDeleteInstanceDirAfterCreateFailure

Error Message:
The data directory was not cleaned up on unload after a failed core reload

Stack Trace:
java.lang.AssertionError: The data directory was not cleaned up on unload after 
a failed core reload
at 
__randomizedtesting.SeedInfo.seed([34650460BD82B8D2:4FACA62C9EA06AD3]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.handler.admin.CoreAdminHandlerTest.testDeleteInstanceDirAfterCreateFailure(CoreAdminHandlerTest.java:334)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Created] (SOLR-9777) IndexFingerprinting: use getCombinedCoreAndDeletesKey() instead of getCoreCacheKey() for per-segment caching

2016-11-16 Thread Ishan Chattopadhyaya (JIRA)
Ishan Chattopadhyaya created SOLR-9777:
--

 Summary: IndexFingerprinting: use getCombinedCoreAndDeletesKey() 
instead of  getCoreCacheKey() for per-segment caching
 Key: SOLR-9777
 URL: https://issues.apache.org/jira/browse/SOLR-9777
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Ishan Chattopadhyaya


[Note: Had initially posted to SOLR-9506, but now moved here]

While working on SOLR-5944, I realized that the current per segment caching 
logic works fine for deleted documents (due to comparison of numDocs in a 
segment for the criterion of cache hit/miss). However, if a segment has 
docValues updates, the same logic is insufficient. It is my understanding that 
changing the key for caching from reader().getCoreCacheKey() to 
reader().getCombinedCoreAndDeletesKey() would work here, since the docValues 
updates are internally handled using deletion queue and hence the "combined" 
core and deletes key would work here. Attaching a patch for the same.
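[Editor's note: the key-choice issue described above can be illustrated with a self-contained sketch. Class and field names below are invented for illustration; this is not Solr's actual IndexFingerprint caching code.]

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Why the cache key matters: a segment's "core" key is stable for the life of
// the segment, while a "combined core + deletes" key changes whenever the
// deletes/docValues generation changes. Caching a fingerprint under the core
// key therefore serves a stale value after a docValues update.
public class FingerprintCacheSketch {

  static class SegmentReaderStub {
    final Object coreKey;   // stays the same for the segment's life
    Object combinedKey;     // replaced on each deletes/docValues update
    SegmentReaderStub(Object coreKey) { this.coreKey = coreKey; this.combinedKey = new Object(); }
    void applyDocValuesUpdate() { this.combinedKey = new Object(); }
  }

  final Map<Object, Long> cache = new ConcurrentHashMap<>();

  // Caching on coreKey keeps returning the first value even after an update...
  long fingerprintByCoreKey(SegmentReaderStub r, long freshValue) {
    return cache.computeIfAbsent(r.coreKey, k -> freshValue);
  }

  // ...while caching on combinedKey recomputes once the key has changed.
  long fingerprintByCombinedKey(SegmentReaderStub r, long freshValue) {
    return cache.computeIfAbsent(r.combinedKey, k -> freshValue);
  }
}
```

After a simulated docValues update, the core-key lookup still returns the old fingerprint while the combined-key lookup picks up the fresh one, which is the behavior the patch aims for.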



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9775) NPE in QueryResultKey constructor (when executing a clustering search query?)

2016-11-16 Thread Roman Kagan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15670423#comment-15670423
 ] 

Roman Kagan commented on SOLR-9775:
---

Hello [~cpoerschke]

Thanks a lot for the suggestion.  I will add the test case(s) covering the code 
fix.

Kindly,

Roman

> NPE in QueryResultKey constructor (when executing a clustering search query?)
> -
>
> Key: SOLR-9775
> URL: https://issues.apache.org/jira/browse/SOLR-9775
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Christine Poerschke
>Priority: Minor
>
> https://github.com/apache/lucene-solr/pull/116 from Roman Kagan yesterday 
> (November 2016) has a fix.
> On the solr-user mailing list (in March) Tim Hearn 
> [reported|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201603.mbox/%3CCAFoZJmC87Lbuj%2BaYwcVh%2B5ay_%3Dwi5n8TGs7gBPcF%3Djjo%2BW7vGg%40mail.gmail.com%3E]
>  encountering what sounds like the same NPE when executing a clustering 
> search query.
> This ticket to tentatively link the two together. Open question: do we want 
> to include a reproducing test case along with the fix or just commit the fix 
> alone?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7466) add axiomatic similarity

2016-11-16 Thread Peilin Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15670373#comment-15670373
 ] 

Peilin Yang commented on LUCENE-7466:
-

ok, will add test cases

> add axiomatic similarity 
> -
>
> Key: LUCENE-7466
> URL: https://issues.apache.org/jira/browse/LUCENE-7466
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: master (7.0)
>Reporter: Peilin Yang
>Assignee: Tommaso Teofili
>  Labels: patch
>
> Add axiomatic similarity approaches to the similarity family.
> More details can be found at http://dl.acm.org/citation.cfm?id=1076116 and 
> https://www.eecis.udel.edu/~hfang/pubs/sigir05-axiom.pdf
> There are in total six similarity models. All of them are based on BM25, 
> Pivoted Document Length Normalization or Language Model with Dirichlet prior. 
> We think it is worthy to add the models as part of Lucene.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9552) Upgrade to Tika 1.14 when available

2016-11-16 Thread Tim Allison (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15670318#comment-15670318
 ] 

Tim Allison commented on SOLR-9552:
---

Make sure not to include the grobid parser because of change in ASF's 
determination on [json.org|https://www.apache.org/legal/resolved#category-x].  
See [this on 
user@|https://lists.apache.org/thread.html/fc2504fc1a0ef71ebfaf8b4c0b48620290e1df505bf8f9984fb447e5@%3Cuser.tika.apache.org%3E]
 and TIKA-1804.

> Upgrade to Tika 1.14 when available
> ---
>
> Key: SOLR-9552
> URL: https://issues.apache.org/jira/browse/SOLR-9552
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - DataImportHandler
>Reporter: Tim Allison
>
>  Let's upgrade Solr as soon as 1.14 is available.
> P.S. I _think_ we're soon to wrap up work on 1.14.  Any last requests? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9728) Ability to specify Key Store type in solr.in file for SSL

2016-11-16 Thread Michael Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15670293#comment-15670293
 ] 

Michael Suzuki commented on SOLR-9728:
--

Please note that this patch does not include the fix to issue SOLR-9727, should 
I redo the patch to include both jetty-ssl.xml and the fix for SOLR-9727?

> Ability to specify Key Store type in solr.in file for SSL
> -
>
> Key: SOLR-9728
> URL: https://issues.apache.org/jira/browse/SOLR-9728
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: master (7.0)
>Reporter: Michael Suzuki
> Attachments: SOLR-9728.patch
>
>
> At present when ssl is enabled we can't set the SSL keystore type. It 
> currently defaults to JKS.
> As a user I would like to configure the SSL type via the solr.in file.
> For instance "JCEKS" would be configured as:
> {code}
> SOLR_SSL_KEYSTORE_TYPE=JCEKS
> SOLR_SSL_TRUSTSTORE_TYPE=JCEKS
> {code}
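For context, the type string above is the same one handed to {{java.security.KeyStore}}: "JKS" is the long-standing JVM default, while "JCEKS" must be requested explicitly. A minimal standalone sketch (illustrative only, not part of the patch or of Solr's startup scripts):

```java
import java.security.KeyStore;

public class KeystoreTypeDemo {
    public static void main(String[] args) throws Exception {
        // "JKS" is the historical JVM default; "JCEKS" must be named explicitly.
        KeyStore ks = KeyStore.getInstance("JCEKS");
        // Initialize an empty in-memory store (no file needed for the demo).
        ks.load(null, null);
        System.out.println(ks.getType()); // prints "JCEKS"
    }
}
```

In Solr the type would come from the proposed SOLR_SSL_KEYSTORE_TYPE / SOLR_SSL_TRUSTSTORE_TYPE variables rather than being hard-coded.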






[jira] [Commented] (SOLR-9728) Ability to specify Key Store type in solr.in file for SSL

2016-11-16 Thread Michael Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15670285#comment-15670285
 ] 

Michael Suzuki commented on SOLR-9728:
--

[~manokovacs] yes, you are correct. Apologies for not including the 
jetty-ssl.xml changes.

> Ability to specify Key Store type in solr.in file for SSL
> -
>
> Key: SOLR-9728
> URL: https://issues.apache.org/jira/browse/SOLR-9728
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: master (7.0)
>Reporter: Michael Suzuki
> Attachments: SOLR-9728.patch
>
>
> At present when SSL is enabled we can't set the keystore type; it currently 
> defaults to JKS (the JVM default).
> As a user I would like to configure the keystore type via the solr.in file.
> For instance "JCEKS" would be configured as:
> {code}
> SOLR_SSL_KEYSTORE_TYPE=JCEKS
> SOLR_SSL_TRUSTSTORE_TYPE=JCEKS
> {code}






[jira] [Comment Edited] (LUCENE-7274) Add LogisticRegressionDocumentClassifier

2016-11-16 Thread Tommaso Teofili (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669850#comment-15669850
 ] 

Tommaso Teofili edited comment on LUCENE-7274 at 11/16/16 12:05 PM:


Hi [~caomanhdat], thanks for your patch.
A couple of comments:
- I think it'd be good if we could make it a {{LogisticRegressionClassifier}} 
and then extend it into a {{LogisticRegressionDocumentClassifier}} (like for 
{{KNearestNeighbourClassifier}}).
- IIUTC this implementation assumes each feature is stored in a separate field 
and that the weights are computed externally as a _double[]_; can this work, 
for example, with Solr's capabilities to store AI models?
- regarding the labels, wouldn't it be better to declare the classifier as a 
{{Classifier}} (it's a binary classifier in the end)?
- the changes to NumericDocValues, FloatDocValues and DoubleDocValues break 
some lucene/core tests: it seems your patched NumericDocValues always returns 
a Long, while FloatDV and DoubleDV convert that Long value to an Integer and 
then back to a Float / Double using Float.intBitsToFloat / 
Double.intBitsToDouble. Can you clarify if / why that is needed?


was (Author: teofili):
Hi [~caomanhdat], thanks for your patch.
A couple of comments:
- I think it'd be good if we could make it a {{LogisticRegressionClassifier}} 
and then extend it into a {{LogisticRegressionDocumentClassifier}} (like for 
{{KNearestNeighbourClassifier}}).
- IIUTC this implementation assumes each feature is stored in a separate field 
and that the weights are computed externally as a _double[]_; can this work, 
for example, with Solr's capabilities to store AI models?
- regarding the labels, wouldn't it be better to declare the classifier as a 
{{Classifier}} (it's a binary classifier in the end)?

> Add LogisticRegressionDocumentClassifier
> 
>
> Key: LUCENE-7274
> URL: https://issues.apache.org/jira/browse/LUCENE-7274
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/classification
>Reporter: Cao Manh Dat
>Assignee: Tommaso Teofili
> Attachments: LUCENE-7274.patch
>
>
> Add LogisticRegressionDocumentClassifier for Lucene.





