[jira] [Comment Edited] (SOLR-8297) Allow join query over 2 sharded collections: enhance functionality and exception handling

2016-05-05 Thread Susmit Shukla (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15273650#comment-15273650
 ] 

Susmit Shukla edited comment on SOLR-8297 at 5/6/16 5:44 AM:
-

I had the exact same requirement as mentioned in B) functional enhancements. I 
implemented it by extending the JoinQParserPlugin and registering the parser in 
solrconfig.xml. I don't think the solution is ready for open source yet. Two 
reasons for that, as Eric already mentioned -

- Enabling a sharded join, where both collections have to be equally sharded and 
replicated on the same router.field with the same hash range distribution among 
identically named shards, is a narrow use case 
- The solution is restricted to a SolrCloud layout where corresponding shards of 
the 'from' and 'to' collections run in the same JVM

Initially my implementation was the same as the above patch, but it failed in a 
bigger deployment where multiple shards ran in the same JVM. For example, it 
should support a join for this layout -

coll1:   coll2:
shard1::8983   shard1::8983
shard2::8983   shard2::8983


I needed to match both the shard name and the node name for this case to work, 
so I overrode two methods: findLocalReplicaForFromIndex and createParser.
To get the current shard name: toShardId = 
queryRequest.getCore().getCoreDescriptor().getCloudDescriptor().getShardId();
The queryRequest (SolrQueryRequest) member variable can be set in createParser.

toShardId.equals(slice.getName()) should be an additional condition here - if 
(replica.getNodeName().equals(nodeName) && replica.getState() == 
Replica.State.ACTIVE)
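
A rough sketch of that matching logic (illustrative only, not the actual patch; 
it assumes the SolrCloud classes ClusterState, Slice, Replica and ZkStateReader 
are in scope, and that zkController and fromIndex are available as they are in 
JoinQParserPlugin):

{code}
// Hedged sketch, not the real patch: pick the 'from' replica that lives on the
// same node AND belongs to the slice with the same name as the 'to' core's shard.
String toShardId = queryRequest.getCore().getCoreDescriptor()
    .getCloudDescriptor().getShardId();
String nodeName = zkController.getNodeName();
ClusterState clusterState = zkController.getClusterState();

String fromReplicaCore = null;
for (Slice slice : clusterState.getCollection(fromIndex).getSlices()) {
  if (!toShardId.equals(slice.getName())) {
    continue; // only the identically named slice holds the matching hash range
  }
  for (Replica replica : slice.getReplicas()) {
    if (replica.getNodeName().equals(nodeName)
        && replica.getState() == Replica.State.ACTIVE) {
      fromReplicaCore = replica.getStr(ZkStateReader.CORE_NAME_PROP);
      break;
    }
  }
}
{code}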


was (Author: shukla.sus...@gmail.com):
I had the exact same requirement as mentioned in B) functional enhancements. I 
implemented it by extending the JoinQParserPlugin and registering the parser in 
solrconfig.xml. I don't think the solution is ready for open source yet. Two 
reasons for that, as Eric already mentioned -

- Enabling a sharded join, where both collections have to be equally sharded and 
replicated on the same router.field with the same hash range distribution among 
identically named shards, is a narrow use case 
- The solution is restricted to a SolrCloud layout where corresponding shards of 
the 'from' and 'to' collections run in the same JVM

Initially my implementation was the same as the above patch, but it failed in a 
bigger deployment where multiple shards ran in the same JVM. For example, it 
should support a join for this layout -

coll1: coll2:
shard1::8983   shard1::8983
shard2::8983   shard2::8983


I needed to match both the shard name and the node name for this case to work, 
so I overrode two methods: findLocalReplicaForFromIndex and createParser.
To get the current shard name: toShardId = 
queryRequest.getCore().getCoreDescriptor().getCloudDescriptor().getShardId();
The queryRequest (SolrQueryRequest) member variable can be set in createParser.

toShardId.equals(slice.getName()) should be an additional condition here - if 
(replica.getNodeName().equals(nodeName) && replica.getState() == 
Replica.State.ACTIVE)

> Allow join query over 2 sharded collections: enhance functionality and 
> exception handling
> -
>
> Key: SOLR-8297
> URL: https://issues.apache.org/jira/browse/SOLR-8297
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Paul Blanchaert
>
> Enhancement based on SOLR-4905. New Jira issue raised as suggested by Mikhail 
> Khludnev.
> A) exception handling:
> The exception "SolrCloud join: multiple shards not yet supported" thrown in 
> the function findLocalReplicaForFromIndex of JoinQParserPlugin is not 
> triggered correctly: In my use-case, I've a join on a facet.query and when my 
> results are only found in 1 shard and the facet.query with the join is 
> querying the last replica of the last slice, then the exception is not thrown.
> I believe it's better to verify the nr of slices when we want to verify the  
> "multiple shards not yet supported" exception (so exception is thrown when 
> zkController.getClusterState().getSlices(fromIndex).size()>1).
> B) functional enhancement:
> I would expect that there is no problem to perform a cross-core join over 
> sharded collections when the following conditions are met:
> 1) both collections are sharded with the same replicationFactor and numShards
> 2) router.field of the collections is set to the same "key-field" (collection 
> of "fromindex" has router.field = "from" field and collection joined to has 
> router.field = "to" field)
> The router.field setup ensures that documents with the same "key-field" are 
> routed to the same node. 
> So the combination based on the "key-field" should always be available within 
> the same node.
> From a user perspective, I believe these assumptions seem to be a "normal" 

Re: lucene-solr:master: SOLR-8972: Add GraphHandler and GraphMLResponseWriter to support graph visualizations

2016-05-05 Thread Chris Hostetter

Joel: adding /graph to the list of ImplicitPlugins has broken 
MinimalSchemaTest.testAllConfiguredHandlers (see recent jenkins failures)



: Date: Thu,  5 May 2016 20:29:33 +0000 (UTC)
: From: jbern...@apache.org
: Reply-To: dev@lucene.apache.org
: To: comm...@lucene.apache.org
: Subject: lucene-solr:master: SOLR-8972: Add GraphHandler and
: GraphMLResponseWriter to support graph visualizations
: 
: Repository: lucene-solr
: Updated Branches:
:   refs/heads/master 7d4f38738 -> be1cb9a1c
: 
: 
: SOLR-8972: Add GraphHandler and GraphMLResponseWriter to support graph 
visualizations
: 
: 
: Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
: Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/be1cb9a1
: Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/be1cb9a1
: Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/be1cb9a1
: 
: Branch: refs/heads/master
: Commit: be1cb9a1cde4dd426305f22620734d018f21dd82
: Parents: 7d4f387
: Author: jbernste 
: Authored: Thu May 5 14:27:05 2016 -0400
: Committer: jbernste 
: Committed: Thu May 5 16:36:19 2016 -0400
: 
: --
:  .../src/java/org/apache/solr/core/SolrCore.java |   1 +
:  .../org/apache/solr/handler/GraphHandler.java   | 282 +++
:  .../solr/response/GraphMLResponseWriter.java| 167 +++
:  solr/core/src/resources/ImplicitPlugins.json|   7 +
:  .../response/TestGraphMLResponseWriter.java | 155 ++
:  .../solrj/io/graph/GraphExpressionTest.java | 173 +---
:  .../solrj/io/stream/StreamExpressionTest.java   |   1 -
:  7 files changed, 743 insertions(+), 43 deletions(-)
: --
: 
: 
: 
http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/be1cb9a1/solr/core/src/java/org/apache/solr/core/SolrCore.java
: --
: diff --git a/solr/core/src/java/org/apache/solr/core/SolrCore.java 
b/solr/core/src/java/org/apache/solr/core/SolrCore.java
: index b94b3d8..d5cde16 100644
: --- a/solr/core/src/java/org/apache/solr/core/SolrCore.java
: +++ b/solr/core/src/java/org/apache/solr/core/SolrCore.java
: @@ -2111,6 +2111,7 @@ public final class SolrCore implements SolrInfoMBean, 
Closeable {
:  m.put("standard", m.get("xml"));
:  m.put(CommonParams.JSON, new JSONResponseWriter());
:  m.put("geojson", new GeoJSONResponseWriter());
: +m.put("graphml", new GraphMLResponseWriter());
:  m.put("python", new PythonResponseWriter());
:  m.put("php", new PHPResponseWriter());
:  m.put("phps", new PHPSerializedResponseWriter());
: 
: 
http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/be1cb9a1/solr/core/src/java/org/apache/solr/handler/GraphHandler.java
: --
: diff --git a/solr/core/src/java/org/apache/solr/handler/GraphHandler.java 
b/solr/core/src/java/org/apache/solr/handler/GraphHandler.java
: new file mode 100644
: index 000..a6e2ce1
: --- /dev/null
: +++ b/solr/core/src/java/org/apache/solr/handler/GraphHandler.java
: @@ -0,0 +1,282 @@
: +package org.apache.solr.handler;
: +
: +/*
: + * Licensed to the Apache Software Foundation (ASF) under one or more
: + * contributor license agreements.  See the NOTICE file distributed with
: + * this work for additional information regarding copyright ownership.
: + * The ASF licenses this file to You under the Apache License, Version 2.0
: + * (the "License"); you may not use this file except in compliance with
: + * the License.  You may obtain a copy of the License at
: + *
: + * http://www.apache.org/licenses/LICENSE-2.0
: + *
: + * Unless required by applicable law or agreed to in writing, software
: + * distributed under the License is distributed on an "AS IS" BASIS,
: + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
: + * See the License for the specific language governing permissions and
: + * limitations under the License.
: + */
: +
: +import java.io.IOException;
: +import java.lang.invoke.MethodHandles;
: +import java.util.HashMap;
: +import java.util.List;
: +import java.util.Map;
: +import java.util.Map.Entry;
: +
: +import org.apache.solr.client.solrj.io.SolrClientCache;
: +import org.apache.solr.client.solrj.io.Tuple;
: +import org.apache.solr.client.solrj.io.comp.StreamComparator;
: +import org.apache.solr.client.solrj.io.graph.GatherNodesStream;
: +import org.apache.solr.client.solrj.io.graph.ShortestPathStream;
: +import org.apache.solr.client.solrj.io.graph.Traversal;
: +import org.apache.solr.client.solrj.io.ops.ConcatOperation;
: +import org.apache.solr.client.solrj.io.ops.DistinctOperation;
: +import org.apache.solr.client.solrj.io.ops.GroupOperation;
: +import org.apache.solr.client.solrj.io.ops.ReplaceOperation;
: +import 

[jira] [Commented] (SOLR-8297) Allow join query over 2 sharded collections: enhance functionality and exception handling

2016-05-05 Thread Susmit Shukla (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15273650#comment-15273650
 ] 

Susmit Shukla commented on SOLR-8297:
-

I had the exact same requirement as mentioned in B) functional enhancements. I 
implemented it by extending the JoinQParserPlugin and registering the parser in 
solrconfig.xml. I don't think the solution is ready for open source yet. Two 
reasons for that, as Eric already mentioned -

- Enabling a sharded join, where both collections have to be equally sharded and 
replicated on the same router.field with the same hash range distribution among 
identically named shards, is a narrow use case 
- The solution is restricted to a SolrCloud layout where corresponding shards of 
the 'from' and 'to' collections run in the same JVM

Initially my implementation was the same as the above patch, but it failed in a 
bigger deployment where multiple shards ran in the same JVM. For example, it 
should support a join for this layout -

coll1: coll2:
shard1::8983   shard1::8983
shard2::8983   shard2::8983

I needed to match both the shard name and the node name for this case to work, 
so I overrode two methods: findLocalReplicaForFromIndex and createParser.
To get the current shard name: toShardId = 
queryRequest.getCore().getCoreDescriptor().getCloudDescriptor().getShardId();
The queryRequest (SolrQueryRequest) member variable can be set in createParser.

toShardId.equals(slice.getName()) should be an additional condition here - if 
(replica.getNodeName().equals(nodeName) && replica.getState() == 
Replica.State.ACTIVE)

> Allow join query over 2 sharded collections: enhance functionality and 
> exception handling
> -
>
> Key: SOLR-8297
> URL: https://issues.apache.org/jira/browse/SOLR-8297
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Paul Blanchaert
>
> Enhancement based on SOLR-4905. New Jira issue raised as suggested by Mikhail 
> Khludnev.
> A) exception handling:
> The exception "SolrCloud join: multiple shards not yet supported" thrown in 
> the function findLocalReplicaForFromIndex of JoinQParserPlugin is not 
> triggered correctly: In my use-case, I've a join on a facet.query and when my 
> results are only found in 1 shard and the facet.query with the join is 
> querying the last replica of the last slice, then the exception is not thrown.
> I believe it's better to verify the nr of slices when we want to verify the  
> "multiple shards not yet supported" exception (so exception is thrown when 
> zkController.getClusterState().getSlices(fromIndex).size()>1).
> B) functional enhancement:
> I would expect that there is no problem to perform a cross-core join over 
> sharded collections when the following conditions are met:
> 1) both collections are sharded with the same replicationFactor and numShards
> 2) router.field of the collections is set to the same "key-field" (collection 
> of "fromindex" has router.field = "from" field and collection joined to has 
> router.field = "to" field)
> The router.field setup ensures that documents with the same "key-field" are 
> routed to the same node. 
> So the combination based on the "key-field" should always be available within 
> the same node.
> From a user perspective, I believe these assumptions seem to be a "normal" 
> use-case in the cross-core join in SolrCloud.
> Hope this helps
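
For concreteness (an illustration added here, not part of the issue), two 
collections meeting conditions 1) and 2) could be created along these lines with 
the Collections API; the collection names are placeholders:

{code}
/admin/collections?action=CREATE&name=coll_from&numShards=2&replicationFactor=2&router.field=from
/admin/collections?action=CREATE&name=coll_to&numShards=2&replicationFactor=2&router.field=to
{code}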



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 560 - Still Failing!

2016-05-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/560/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.MinimalSchemaTest.testAllConfiguredHandlers

Error Message:
exception w/handler: '/graph'

Stack Trace:
java.lang.RuntimeException: exception w/handler: '/graph'
at 
__randomizedtesting.SeedInfo.seed([1C8FEEA6438A18BA:F191559CC887193E]:0)
at 
org.apache.solr.MinimalSchemaTest.testAllConfiguredHandlers(MinimalSchemaTest.java:130)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: Exception during query
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:785)
at 
org.apache.solr.MinimalSchemaTest.testAllConfiguredHandlers(MinimalSchemaTest.java:121)
... 39 more
Caused by: org.xml.sax.SAXParseException; lineNumber: 1; columnNumber: 1; 
Content is not allowed in prolog.
at 

[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_92) - Build # 577 - Still Failing!

2016-05-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/577/
Java: 64bit/jdk1.8.0_92 -XX:+UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.MinimalSchemaTest.testAllConfiguredHandlers

Error Message:
exception w/handler: '/graph'

Stack Trace:
java.lang.RuntimeException: exception w/handler: '/graph'
at 
__randomizedtesting.SeedInfo.seed([FC5C7046029253A8:1142CB7C899F522C]:0)
at 
org.apache.solr.MinimalSchemaTest.testAllConfiguredHandlers(MinimalSchemaTest.java:130)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: Exception during query
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:786)
at 
org.apache.solr.MinimalSchemaTest.testAllConfiguredHandlers(MinimalSchemaTest.java:121)
... 39 more
Caused by: org.xml.sax.SAXParseException; lineNumber: 1; columnNumber: 1; 
Content is not allowed in prolog.
at 

[jira] [Created] (SOLR-9077) Streaming expression in solr does not support collection alias

2016-05-05 Thread Suds (JIRA)
Suds created SOLR-9077:
--

 Summary: Streaming expression in solr does not support collection 
alias
 Key: SOLR-9077
 URL: https://issues.apache.org/jira/browse/SOLR-9077
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 5.5.1
Reporter: Suds
Priority: Minor


Streaming expression in solr does not support collection alias

When I tried to access a collection alias, I got a NullPointerException. 

The issue seems to be related to the following code; clusterState.getActiveSlices 
returns null:

 Collection<Slice> slices = clusterState.getActiveSlices(this.collection);

 for (Slice slice : slices) {
 }


The fix seems fairly simple: clusterState.getActiveSlices can be made aware of 
collection aliases. I am not sure what will happen when we have a large alias 
which has hundreds of slices.
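
For illustration only (this is not the reporter's fix), the alias could be 
resolved before asking ClusterState for slices, assuming a ZkStateReader is in 
scope; the error handling below is hypothetical:

{code}
// Hedged sketch: resolve an alias to its target collection first.
String resolved = this.collection;
Map<String, String> aliasMap = zkStateReader.getAliases().getCollectionAliasMap();
if (aliasMap != null && aliasMap.containsKey(resolved)) {
  // An alias may point at a comma-separated list of collections; the single
  // target is the simple case handled here.
  resolved = aliasMap.get(resolved);
}

Collection<Slice> slices = clusterState.getActiveSlices(resolved);
if (slices == null) {
  throw new IOException("Collection not found: " + resolved);
}
for (Slice slice : slices) {
  // build the per-slice streams as before
}
{code}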




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.5-Linux (32bit/jdk1.8.0_92) - Build # 268 - Still Failing!

2016-05-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Linux/268/
Java: 32bit/jdk1.8.0_92 -client -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Doc with id=2 not found in 
http://127.0.0.1:34298/eu/c8n_1x2_leader_session_loss due to: Path not found: 
/id; rsp={doc=null}

Stack Trace:
java.lang.AssertionError: Doc with id=2 not found in 
http://127.0.0.1:34298/eu/c8n_1x2_leader_session_loss due to: Path not found: 
/id; rsp={doc=null}
at 
__randomizedtesting.SeedInfo.seed([776BED03E52F8C15:FF3FD2D94BD3E1ED]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.HttpPartitionTest.assertDocExists(HttpPartitionTest.java:604)
at 
org.apache.solr.cloud.HttpPartitionTest.assertDocsExistInAllReplicas(HttpPartitionTest.java:559)
at 
org.apache.solr.cloud.HttpPartitionTest.testLeaderZkSessionLoss(HttpPartitionTest.java:507)
at 
org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:120)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:996)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:971)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 484 - Still Failing

2016-05-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/484/

No tests ran.

Build Log:
[...truncated 40519 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.02 sec (6.6 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-7.0.0-src.tgz...
   [smoker] 28.6 MB in 0.02 sec (1190.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.tgz...
   [smoker] 63.0 MB in 0.05 sec (1188.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.zip...
   [smoker] 73.5 MB in 0.07 sec (1118.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-7.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6003 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6003 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 218 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] Releases that don't seem to be tested:
   [smoker]   5.5.1
   [smoker] Traceback (most recent call last):
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/dev-tools/scripts/smokeTestRelease.py",
 line 1414, in <module>
   [smoker] main()
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/dev-tools/scripts/smokeTestRelease.py",
 line 1358, in main
   [smoker] smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, 
c.is_signed, ' '.join(c.test_args))
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/dev-tools/scripts/smokeTestRelease.py",
 line 1396, in smokeTest
   [smoker] unpackAndVerify(java, 'lucene', tmpDir, 'lucene-%s-src.tgz' % 
version, gitRevision, version, testArgs, baseURL)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/dev-tools/scripts/smokeTestRelease.py",
 line 590, in unpackAndVerify
   [smoker] verifyUnpacked(java, project, artifact, unpackPath, 
gitRevision, version, testArgs, tmpDir, baseURL)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/dev-tools/scripts/smokeTestRelease.py",
 line 736, in verifyUnpacked
   [smoker] confirmAllReleasesAreTestedForBackCompat(version, unpackPath)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/dev-tools/scripts/smokeTestRelease.py",
 line 1351, in confirmAllReleasesAreTestedForBackCompat
   [smoker] raise RuntimeError('some releases are not tested by 
TestBackwardsCompatibility?')
   [smoker] RuntimeError: some releases are not tested by 
TestBackwardsCompatibility?

BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/build.xml:536:
 exec returned: 1

Total time: 29 minutes 56 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-Tests-6.x - Build # 185 - Still Failing

2016-05-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/185/

2 tests failed.
FAILED:  org.apache.solr.MinimalSchemaTest.testAllConfiguredHandlers

Error Message:
exception w/handler: '/graph'

Stack Trace:
java.lang.RuntimeException: exception w/handler: '/graph'
at 
__randomizedtesting.SeedInfo.seed([D139ED78D39F07E7:3C27564258920663]:0)
at 
org.apache.solr.MinimalSchemaTest.testAllConfiguredHandlers(MinimalSchemaTest.java:130)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: Exception during query
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:786)
at 
org.apache.solr.MinimalSchemaTest.testAllConfiguredHandlers(MinimalSchemaTest.java:121)
... 39 more
Caused by: org.xml.sax.SAXParseException; lineNumber: 1; columnNumber: 1; 
Content is not allowed in prolog.
at 
com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:257)
at 

[jira] [Commented] (SOLR-8996) Add Random Streaming Expression

2016-05-05 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15273600#comment-15273600
 ] 

Erick Erickson commented on SOLR-8996:
--

Works for me. Let me see, 1000! is 
4.0238726E+2567

so...er...relatively rare ;)

Even 100! is really, really rare...
9.332621544E+157

I _love_ the net. There's a factorial calculator that it took me about 30 
seconds to find.

> Add Random Streaming Expression
> ---
>
> Key: SOLR-8996
> URL: https://issues.apache.org/jira/browse/SOLR-8996
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.1
>
> Attachments: RandomStream.java, 
> SOLR-8996-decrease-failure-probability.patch, SOLR-8996.patch
>
>
> The random Streaming Expression will return a *limited* random stream of 
> Tuples that match a query. This will be useful in many different scenarios 
> where random data sets are needed.
> Proposed syntax:
> {code}
> random(baskets, q="productID:productX", rows="100", fl="basketID") 
> {code}
> The sample code above will query the *baskets* collection and return 100 
> random *basketID's* where the productID is productX.
> The underlying implementation will rely on Solr's random field type.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+116) - Build # 16666 - Still Failing!

2016-05-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16666/
Java: 64bit/jdk-9-ea+116 -XX:+UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  org.apache.solr.cloud.TestCloudBackupRestore.test

Error Message:
Error from server at http://127.0.0.1:43323/solr: Collection 
'backuprestore_restored' exists, no action taken.

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:43323/solr: Collection 'backuprestore_restored' 
exists, no action taken.
at 
__randomizedtesting.SeedInfo.seed([F8C4277A3416B48D:709018A09AEAD975]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:606)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:259)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:366)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1192)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:962)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:898)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at 
org.apache.solr.cloud.TestCloudBackupRestore.testBackupAndRestore(TestCloudBackupRestore.java:178)
at 
org.apache.solr.cloud.TestCloudBackupRestore.test(TestCloudBackupRestore.java:110)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:531)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Updated] (LUCENE-7274) Add LogisticRegressionDocumentClassifier

2016-05-05 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated LUCENE-7274:
-
Attachment: LUCENE-7274.patch

Initial patch that supports classifying a doc based on input weights and fields.
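
For reference (a general formula, not taken from the patch), a logistic 
regression classifier scores a document feature vector x with learned weights w 
and bias b as

\[
P(\text{class} \mid x) = \frac{1}{1 + e^{-(w \cdot x + b)}}
\]

and assigns the class when this probability crosses a threshold such as 0.5.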

> Add LogisticRegressionDocumentClassifier
> 
>
> Key: LUCENE-7274
> URL: https://issues.apache.org/jira/browse/LUCENE-7274
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/classification
>Reporter: Cao Manh Dat
> Attachments: LUCENE-7274.patch
>
>
> Add LogisticRegressionDocumentClassifier for Lucene.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7274) Add LogisticRegressionDocumentClassifier

2016-05-05 Thread Cao Manh Dat (JIRA)
Cao Manh Dat created LUCENE-7274:


 Summary: Add LogisticRegressionDocumentClassifier
 Key: LUCENE-7274
 URL: https://issues.apache.org/jira/browse/LUCENE-7274
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/classification
Reporter: Cao Manh Dat


Add LogisticRegressionDocumentClassifier for Lucene.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8996) Add Random Streaming Expression

2016-05-05 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15273569#comment-15273569
 ] 

Dennis Gove commented on SOLR-8996:
---

Valid question and I did give that some thought. 

Because this is testing randomness, I can't think of a way to make it pass 100% 
of the time. There is still a probability (much, much, much smaller now) that two 
RandomStreams will return the documents in the same order. By increasing the 
number of documents, that probability has become an effective 'never gonna 
happen' (until it does, of course).

Assuming the random field type is truly random, this works out to 1000! possible 
distinct lists of tuples (because tuple order matters in the test), and the 
probability of the two streams in the test resulting in the same order is 
infinitesimally small. That said, technically it's not impossible, so an updated 
message might be worthwhile.
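
For concreteness (my arithmetic, not part of the comment), the chance that two 
independent uniformly random orderings of 1000 distinct tuples coincide is

\[
P = \frac{1}{1000!} \approx \frac{1}{4.02 \times 10^{2567}} \approx 2.5 \times 10^{-2568}.
\]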

> Add Random Streaming Expression
> ---
>
> Key: SOLR-8996
> URL: https://issues.apache.org/jira/browse/SOLR-8996
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.1
>
> Attachments: RandomStream.java, 
> SOLR-8996-decrease-failure-probability.patch, SOLR-8996.patch
>
>
> The random Streaming Expression will return a *limited* random stream of 
> Tuples that match a query. This will be useful in many different scenarios 
> where random data sets are needed.
> Proposed syntax:
> {code}
> random(baskets, q="productID:productX", rows="100", fl="basketID") 
> {code}
> The sample code above will query the *baskets* collection and return 100 
> random *basketID's* where the productID is productX.
> The underlying implementation will rely on Solr's random field type.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8996) Add Random Streaming Expression

2016-05-05 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273555#comment-15273555
 ] 

Erick Erickson commented on SOLR-8996:
--

I've gotta ask (without even looking at the code) why decreasing the 
probability of failure is good enough?

Feel free to say "that's the best we can do" and blow the question off, just 
askin'.

And if it does fail, can we include as part of the message that "very 
occasional failures are acceptable"?

> Add Random Streaming Expression
> ---
>
> Key: SOLR-8996
> URL: https://issues.apache.org/jira/browse/SOLR-8996
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.1
>
> Attachments: RandomStream.java, 
> SOLR-8996-decrease-failure-probability.patch, SOLR-8996.patch
>
>
> The random Streaming Expression will return a *limited* random stream of 
> Tuples that match a query. This will be useful in many different scenarios 
> where random data sets are needed.
> Proposed syntax:
> {code}
> random(baskets, q="productID:productX", rows="100", fl="basketID") 
> {code}
> The sample code above will query the *baskets* collection and return 100 
> random *basketID's* where the productID is productX.
> The underlying implementation will rely on Solr's random field type.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-5.5 - Build # 12 - Still Failing

2016-05-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.5/12/

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandler

Error Message:
ObjectTracker found 2 object(s) that were not released!!! [NRTCachingDirectory, 
NRTCachingDirectory]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 2 object(s) that were not 
released!!! [NRTCachingDirectory, NRTCachingDirectory]
at __randomizedtesting.SeedInfo.seed([78DC986344E9E28]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:228)
at sun.reflect.GeneratedMethodAccessor27.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.search.TestIndexSearcher.testReopen

Error Message:
expected:<1> but was:<6>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<6>
at 
__randomizedtesting.SeedInfo.seed([78DC986344E9E28:2BC518904772110B]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.search.TestIndexSearcher.testReopen(TestIndexSearcher.java:161)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Comment Edited] (SOLR-8208) DocTransformer executes sub-queries

2016-05-05 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273532#comment-15273532
 ] 

Cao Manh Dat edited comment on SOLR-8208 at 5/6/16 2:15 AM:


Great patch! I think {{doInSuspension}} is good (better than the swap-in/swap-out 
try/catch), and we should add {{doInSuspension}} to SolrRequestInfo (to prevent 
anyone from doing the swap in/swap out themselves in the future).
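
To make the idea concrete, a rough sketch of what such a helper could look like, 
imagined as a static method on org.apache.solr.request.SolrRequestInfo and built 
on its existing getRequestInfo/setRequestInfo/clearRequestInfo statics. This is 
a hypothetical signature, not the code in the patch:

{code}
// Hypothetical sketch only; the real patch may differ.
public static <T> T doInSuspension(java.util.concurrent.Callable<T> action) throws Exception {
  SolrRequestInfo current = getRequestInfo();
  clearRequestInfo();              // suspend the thread-local request info
  try {
    return action.call();          // run the nested work (e.g. a subquery) without it
  } finally {
    if (current != null) {
      setRequestInfo(current);     // always restore, even if the nested work throws
    }
  }
}
{code}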


was (Author: caomanhdat):
Great patch! I think {code}doInSuspension{code} is good (better than swap 
in/swap out try catch) and we should add {code}doInSuspension{code} to 
SolrRequestInfo (to prevent anyone who try to do swap in/swap out in the 
future).

> DocTransformer executes sub-queries
> ---
>
> Key: SOLR-8208
> URL: https://issues.apache.org/jira/browse/SOLR-8208
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>  Labels: features, newbie
> Attachments: SOLR-8208.diff, SOLR-8208.patch, SOLR-8208.patch, 
> SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, 
> SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, 
> SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, 
> SOLR-8208.patch, SOLR-8208.patch
>
>
> The initial idea was to return "from" side of query time join via 
> doctransformer. I suppose it isn't  query-time join specific, thus let to 
> specify any query and parameters for them, let's call it sub-query. But it 
> might be problematic to escape subquery parameters, including local ones, 
> e.g. what if subquery needs to specify own doctransformer in =\[..\] ?
> I suppose we can specify subquery parameter prefix:
> {code}
> ..=name_s:john=*,depts:[subquery fromIndex=departments]&
> depts.q={!term f=dept_id_s 
> v=$row.dept_ss_dv}=text_t,dept_id_s_dv=12=id 
> desc
> {code}   
> response is like
> {code}   
> 
> ...
> 
> 
> 1
> john
> ..
> 
> 
> Engineering
> These guys develop stuff
> 
> 
> Support
> These guys help users
> 
> 
> 
> 
> 
> {code}   
> * {{fl=depts:\[subquery]}} executes a separate request for every query result 
> row, and adds it into a document as a separate result list. The given field 
> name (here it's 'depts') is used as a prefix to shift subquery parameters 
> from main query parameter, eg {{depts.q}} turns to {{q}} for subquery, 
> {{depts.rows}} to {{rows}}.
> * document fields are available as implicit parameters with prefix {{row.}} 
> eg. if result document has a field {{dept_id}} it can be referred as 
> {{v=$row.dept_id}} this combines well with \{!terms} query parser   
> * {{separator=','}} is used when multiple field values are combined in 
> parameter. eg. a document has multivalue field {code}dept_ids={2,3}{code}, 
> thus referring to it via {code}..={!terms f=id 
> v=$row.dept_ids}&..{code} executes a subquery {code}{!terms f=id}2,3{code}. 
> When omitted  it's a comma. 
> * {{fromIndex=othercore}} optional param allows to run subquery on other 
> core, like it works on query time join
> However, it doesn't work on cloud setup (and will let you know), but it's 
> proposed to use regular params ({{collection}}, {{shards}} - whatever, with 
> subquery prefix as below ) to issue subquery to a collection
> {code}
> q=name_s:dave=true=*,depts:[subquery]=20&
> depts.q={!terms f=dept_id_s v=$row.dept_ss_dv}=text_t&
> depts.indent=true&
> depts.collection=departments&
> depts.rows=10=q,fl,rows,row.dept_ss_dv
> {code}
> Caveat: it should be a way slow; it handles only search result page, not 
> entire result set. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8208) DocTransformer executes sub-queries

2016-05-05 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273532#comment-15273532
 ] 

Cao Manh Dat commented on SOLR-8208:


Great patch! I think {code}doInSuspension{code} is good (better than the swap 
in/swap out try/catch) and we should add {code}doInSuspension{code} to 
SolrRequestInfo (to prevent anyone from doing the swap in/swap out themselves in 
the future).

> DocTransformer executes sub-queries
> ---
>
> Key: SOLR-8208
> URL: https://issues.apache.org/jira/browse/SOLR-8208
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>  Labels: features, newbie
> Attachments: SOLR-8208.diff, SOLR-8208.patch, SOLR-8208.patch, 
> SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, 
> SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, 
> SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, 
> SOLR-8208.patch, SOLR-8208.patch
>
>
> The initial idea was to return "from" side of query time join via 
> doctransformer. I suppose it isn't  query-time join specific, thus let to 
> specify any query and parameters for them, let's call it sub-query. But it 
> might be problematic to escape subquery parameters, including local ones, 
> e.g. what if subquery needs to specify own doctransformer in =\[..\] ?
> I suppose we can specify subquery parameter prefix:
> {code}
> ..=name_s:john=*,depts:[subquery fromIndex=departments]&
> depts.q={!term f=dept_id_s 
> v=$row.dept_ss_dv}=text_t,dept_id_s_dv=12=id 
> desc
> {code}   
> response is like
> {code}   
> 
> ...
> 
> 
> 1
> john
> ..
> 
> 
> Engineering
> These guys develop stuff
> 
> 
> Support
> These guys help users
> 
> 
> 
> 
> 
> {code}   
> * {{fl=depts:\[subquery]}} executes a separate request for every query result 
> row, and adds it into a document as a separate result list. The given field 
> name (here it's 'depts') is used as a prefix to shift subquery parameters 
> from main query parameter, eg {{depts.q}} turns to {{q}} for subquery, 
> {{depts.rows}} to {{rows}}.
> * document fields are available as implicit parameters with prefix {{row.}} 
> eg. if result document has a field {{dept_id}} it can be referred as 
> {{v=$row.dept_id}} this combines well with \{!terms} query parser   
> * {{separator=','}} is used when multiple field values are combined in 
> parameter. eg. a document has multivalue field {code}dept_ids={2,3}{code}, 
> thus referring to it via {code}..={!terms f=id 
> v=$row.dept_ids}&..{code} executes a subquery {code}{!terms f=id}2,3{code}. 
> When omitted  it's a comma. 
> * {{fromIndex=othercore}} optional param allows to run subquery on other 
> core, like it works on query time join
> However, it doesn't work on cloud setup (and will let you know), but it's 
> proposed to use regular params ({{collection}}, {{shards}} - whatever, with 
> subquery prefix as below ) to issue subquery to a collection
> {code}
> q=name_s:dave=true=*,depts:[subquery]=20&
> depts.q={!terms f=dept_id_s v=$row.dept_ss_dv}=text_t&
> depts.indent=true&
> depts.collection=departments&
> depts.rows=10=q,fl,rows,row.dept_ss_dv
> {code}
> Caveat: it should be a way slow; it handles only search result page, not 
> entire result set. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 118 - Still Failing!

2016-05-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/118/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

3 tests failed.
FAILED:  org.apache.solr.MinimalSchemaTest.testAllConfiguredHandlers

Error Message:
exception w/handler: '/graph'

Stack Trace:
java.lang.RuntimeException: exception w/handler: '/graph'
at 
__randomizedtesting.SeedInfo.seed([9DA0C6DBE7203F62:70BE7DE16C2D3EE6]:0)
at 
org.apache.solr.MinimalSchemaTest.testAllConfiguredHandlers(MinimalSchemaTest.java:130)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: Exception during query
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:786)
at 
org.apache.solr.MinimalSchemaTest.testAllConfiguredHandlers(MinimalSchemaTest.java:121)
... 39 more
Caused by: org.xml.sax.SAXParseException; lineNumber: 1; columnNumber: 1; 
Content is not allowed in prolog.
at 

[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+116) - Build # 576 - Still Failing!

2016-05-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/576/
Java: 64bit/jdk-9-ea+116 -XX:-UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.MinimalSchemaTest.testAllConfiguredHandlers

Error Message:
exception w/handler: '/graph'

Stack Trace:
java.lang.RuntimeException: exception w/handler: '/graph'
at 
__randomizedtesting.SeedInfo.seed([1042F6400624135D:FD5C4D7A8D2912D9]:0)
at 
org.apache.solr.MinimalSchemaTest.testAllConfiguredHandlers(MinimalSchemaTest.java:130)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:531)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(java.base@9-ea/Thread.java:804)
Caused by: java.lang.RuntimeException: Exception during query
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:786)
at 
org.apache.solr.MinimalSchemaTest.testAllConfiguredHandlers(MinimalSchemaTest.java:121)
... 39 more
Caused by: org.xml.sax.SAXParseException; 

[jira] [Updated] (SOLR-9058) hashJoin does not work when "on" maps fields with "="

2016-05-05 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove updated SOLR-9058:
--
Affects Version/s: 6.0

> hashJoin does not work when "on" maps fields with "="
> -
>
> Key: SOLR-9058
> URL: https://issues.apache.org/jira/browse/SOLR-9058
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.0
>Reporter: Stephan Osthold
>Assignee: Dennis Gove
>Priority: Minor
> Fix For: 6.1, master
>
> Attachments: SOLR-9058.patch
>
>
> hashJoin does not work when "on" maps fields with "="
> eg.
> hashJoin(
>  ...
>  on="field1=field2"
> )
> See link for fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9058) hashJoin does not work when "on" maps fields with "="

2016-05-05 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove updated SOLR-9058:
--
Fix Version/s: master
   6.1

> hashJoin does not work when "on" maps fields with "="
> -
>
> Key: SOLR-9058
> URL: https://issues.apache.org/jira/browse/SOLR-9058
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.0
>Reporter: Stephan Osthold
>Assignee: Dennis Gove
>Priority: Minor
> Fix For: 6.1, master
>
> Attachments: SOLR-9058.patch
>
>
> hashJoin does not work when "on" maps fields with "="
> eg.
> hashJoin(
>  ...
>  on="field1=field2"
> )
> See link for fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.5-Linux (64bit/jdk1.7.0_80) - Build # 267 - Still Failing!

2016-05-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Linux/267/
Java: 64bit/jdk1.7.0_80 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.search.TestIndexSearcher.testReopen

Error Message:
expected:<1> but was:<6>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<6>
at 
__randomizedtesting.SeedInfo.seed([AB2AC4410E264C0F:876215577D1AC32C]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.search.TestIndexSearcher.testReopen(TestIndexSearcher.java:161)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.schema.TestManagedSchemaAPI.test

Error Message:
Error from server at 

[jira] [Commented] (SOLR-9058) hashJoin does not work when "on" maps fields with "="

2016-05-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273388#comment-15273388
 ] 

ASF subversion and git services commented on SOLR-9058:
---

Commit d95a91a9cca341d7633d339bf56b08ecd59d1c2a in lucene-solr's branch 
refs/heads/branch_6x from [~dpgove]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d95a91a ]

SOLR-9058: Makes HashJoinStream and OuterHashJoinStream support different field 
names in the incoming streams, eg. fieldA=fieldB


> hashJoin does not work when "on" maps fields with "="
> -
>
> Key: SOLR-9058
> URL: https://issues.apache.org/jira/browse/SOLR-9058
> Project: Solr
>  Issue Type: Bug
>Reporter: Stephan Osthold
>Assignee: Dennis Gove
>Priority: Minor
> Attachments: SOLR-9058.patch
>
>
> hashJoin does not work when "on" maps fields with "="
> eg.
> hashJoin(
>  ...
>  on="field1=field2"
> )
> See link for fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9058) hashJoin does not work when "on" maps fields with "="

2016-05-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273386#comment-15273386
 ] 

ASF subversion and git services commented on SOLR-9058:
---

Commit c7929f8b851dd12d3ae1b9834058428394821790 in lucene-solr's branch 
refs/heads/master from [~dpgove]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c7929f8 ]

SOLR-9058: Makes HashJoinStream and OuterHashJoinStream support different field 
names in the incoming streams, eg. fieldA=fieldB


> hashJoin does not work when "on" maps fields with "="
> -
>
> Key: SOLR-9058
> URL: https://issues.apache.org/jira/browse/SOLR-9058
> Project: Solr
>  Issue Type: Bug
>Reporter: Stephan Osthold
>Assignee: Dennis Gove
>Priority: Minor
> Attachments: SOLR-9058.patch
>
>
> hashJoin does not work when "on" maps fields with "="
> eg.
> hashJoin(
>  ...
>  on="field1=field2"
> )
> See link for fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3250 - Still Failing!

2016-05-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3250/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

No tests ran.

Build Log:
[...truncated 11461 lines...]
ERROR: Connection was broken: java.io.IOException: Unexpected termination of 
the channel
at 
hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:50)
Caused by: java.io.EOFException
at 
java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2325)
at 
java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:2794)
at 
java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:801)
at java.io.ObjectInputStream.<init>(ObjectInputStream.java:299)
at 
hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:48)
at 
hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:34)
at 
hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:48)

Build step 'Invoke Ant' marked build as failure
ERROR: Step ‘Archive the artifacts’ failed: no workspace for 
Lucene-Solr-master-MacOSX #3250
ERROR: Step ‘Scan for compiler warnings’ failed: no workspace for 
Lucene-Solr-master-MacOSX #3250
ERROR: Step ‘Publish JUnit test result report’ failed: no workspace for 
Lucene-Solr-master-MacOSX #3250
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Closed] (SOLR-8996) Add Random Streaming Expression

2016-05-05 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove closed SOLR-8996.
-
Resolution: Fixed
  Assignee: Joel Bernstein  (was: Dennis Gove)

New test applied in master and 6.1

> Add Random Streaming Expression
> ---
>
> Key: SOLR-8996
> URL: https://issues.apache.org/jira/browse/SOLR-8996
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.1
>
> Attachments: RandomStream.java, 
> SOLR-8996-decrease-failure-probability.patch, SOLR-8996.patch
>
>
> The random Streaming Expression will return a *limited* random stream of 
> Tuples that match a query. This will be useful in many different scenarios 
> where random data sets are needed.
> Proposed syntax:
> {code}
> random(baskets, q="productID:productX", rows="100", fl="basketID") 
> {code}
> The sample code above will query the *baskets* collection and return 100 
> random *basketID's* where the productID is productX.
> The underlying implementation will rely on Solr's random field type.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8996) Add Random Streaming Expression

2016-05-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273343#comment-15273343
 ] 

ASF subversion and git services commented on SOLR-8996:
---

Commit 06a675ce2c57e6fc1adf18d88528d578e79a3463 in lucene-solr's branch 
refs/heads/branch_6x from [~dpgove]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=06a675c ]

SOLR-8996: Greatly decreases the probability of a RandomStream test failure 
from 1 in 5! to 1 in 1000!


> Add Random Streaming Expression
> ---
>
> Key: SOLR-8996
> URL: https://issues.apache.org/jira/browse/SOLR-8996
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Dennis Gove
> Fix For: 6.1
>
> Attachments: RandomStream.java, 
> SOLR-8996-decrease-failure-probability.patch, SOLR-8996.patch
>
>
> The random Streaming Expression will return a *limited* random stream of 
> Tuples that match a query. This will be useful in many different scenarios 
> where random data sets are needed.
> Proposed syntax:
> {code}
> random(baskets, q="productID:productX", rows="100", fl="basketID") 
> {code}
> The sample code above will query the *baskets* collection and return 100 
> random *basketID's* where the productID is productX.
> The underlying implementation will rely on Solr's random field type.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8467) CloudSolrStream and FacetStream should take a SolrParams object rather than a Map<String, String> to allow more complex Solr queries to be specified

2016-05-05 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273334#comment-15273334
 ] 

Erick Erickson commented on SOLR-8467:
--

Every day or two I pull the latest master and re-apply this set of changes. 
Today was the first day that was at all difficult.

Please ping me before you start working on it; it may be that I just haven't 
put up my most recent patch.

> CloudSolrStream and FacetStream should take a SolrParams object rather than a 
> Map<String, String> to allow more complex Solr queries to be specified
> 
>
> Key: SOLR-8467
> URL: https://issues.apache.org/jira/browse/SOLR-8467
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-8467.patch, SOLR-8467.patch, SOLR-8467.patch, 
> SOLR-8467.patch, SOLR-8647.patch, SOLR-8647.patch
>
>
> Currently, it's impossible to, say, specify multiple "fq" clauses when using 
> Streaming Aggregation due to the fact that the c'tors take a Map of params.
> Opening to discuss whether we should
> 1> deprecate the current c'tor
> and/or
> 2> add a c'tor that takes a SolrParams object instead.
> and/or
> 3> ???
> I don't see a clean way to go from a Map<String, String> to a 
> (Modifiable)SolrParams, so existing code would need a significant change. I 
> hacked together a PoC, just to see if I could make CloudSolrStream take a 
> ModifiableSolrParams object instead and it passes tests, but it's so bad that 
> I'm not going to even post it. There's _got_ to be a better way to do this, 
> but at least it's possible
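
To make the limitation in the description concrete, a small illustrative sketch 
(example parameter values are made up; this is not from any patch on this issue). 
A Map<String, String> can only hold one value per key, while SolrParams can carry 
repeated parameters such as multiple fq clauses:

{code}
import java.util.HashMap;
import java.util.Map;
import org.apache.solr.common.params.ModifiableSolrParams;

public class MultipleFqExample {
  public static void main(String[] args) {
    // A Map<String,String> keeps one value per key, so the second fq clobbers the first.
    Map<String, String> map = new HashMap<>();
    map.put("q", "*:*");
    map.put("fq", "year:2015");
    map.put("fq", "inStock:true");       // overwrites year:2015

    // ModifiableSolrParams allows repeated parameters, matching what Solr accepts on the URL.
    ModifiableSolrParams params = new ModifiableSolrParams();
    params.set("q", "*:*");
    params.add("fq", "year:2015");
    params.add("fq", "inStock:true");    // both fq clauses are retained
    System.out.println(params);          // prints q plus both fq values
  }
}
{code}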



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8996) Add Random Streaming Expression

2016-05-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273331#comment-15273331
 ] 

ASF subversion and git services commented on SOLR-8996:
---

Commit ff565317621287c174ad42f2af9fdcc7b221eff3 in lucene-solr's branch 
refs/heads/master from [~dpgove]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ff56531 ]

SOLR-8996: Greatly decreases the probability of a RandomStream test failure 
from 1 in 5! to 1 in 1000!


> Add Random Streaming Expression
> ---
>
> Key: SOLR-8996
> URL: https://issues.apache.org/jira/browse/SOLR-8996
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Dennis Gove
> Fix For: 6.1
>
> Attachments: RandomStream.java, 
> SOLR-8996-decrease-failure-probability.patch, SOLR-8996.patch
>
>
> The random Streaming Expression will return a *limited* random stream of 
> Tuples that match a query. This will be useful in many different scenarios 
> where random data sets are needed.
> Proposed syntax:
> {code}
> random(baskets, q="productID:productX", rows="100", fl="basketID") 
> {code}
> The sample code above will query the *baskets* collection and return 100 
> random *basketID's* where the productID is productX.
> The underlying implementation will rely on Solr's random field type.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8996) Add Random Streaming Expression

2016-05-05 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove updated SOLR-8996:
--
Attachment: SOLR-8996-decrease-failure-probability.patch

Increases the # of records in the test collection to reduce the probability of 
a failure from 1 in 5! (1 in 120) to 1 in 1000! (1 in basically never).

This still doesn't guarantee a passing test, but it makes a spurious failure far 
less likely.

> Add Random Streaming Expression
> ---
>
> Key: SOLR-8996
> URL: https://issues.apache.org/jira/browse/SOLR-8996
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Dennis Gove
> Fix For: 6.1
>
> Attachments: RandomStream.java, 
> SOLR-8996-decrease-failure-probability.patch, SOLR-8996.patch
>
>
> The random Streaming Expression will return a *limited* random stream of 
> Tuples that match a query. This will be useful in many different scenarios 
> where random data sets are needed.
> Proposed syntax:
> {code}
> random(baskets, q="productID:productX", rows="100", fl="basketID") 
> {code}
> The sample code above will query the *baskets* collection and return 100 
> random *basketID's* where the productID is productX.
> The underlying implementation will rely on Solr's random field type.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (SOLR-8996) Add Random Streaming Expression

2016-05-05 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove reopened SOLR-8996:
---
  Assignee: Dennis Gove  (was: Joel Bernstein)

Re-opening to apply updated test.

> Add Random Streaming Expression
> ---
>
> Key: SOLR-8996
> URL: https://issues.apache.org/jira/browse/SOLR-8996
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Dennis Gove
> Fix For: 6.1
>
> Attachments: RandomStream.java, SOLR-8996.patch
>
>
> The random Streaming Expression will return a *limited* random stream of 
> Tuples that match a query. This will be useful in many different scenarios 
> where random data sets are needed.
> Proposed syntax:
> {code}
> random(baskets, q="productID:productX", rows="100", fl="basketID") 
> {code}
> The sample code above will query the *baskets* collection and return 100 
> random *basketID's* where the productID is productX.
> The underlying implementation will rely on Solr's random field type.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+116) - Build # 16665 - Still Failing!

2016-05-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16665/
Java: 32bit/jdk-9-ea+116 -client -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  org.apache.solr.MinimalSchemaTest.testAllConfiguredHandlers

Error Message:
exception w/handler: '/graph'

Stack Trace:
java.lang.RuntimeException: exception w/handler: '/graph'
at 
__randomizedtesting.SeedInfo.seed([9C2D63DA9E029EEA:7133D8E0150F9F6E]:0)
at 
org.apache.solr.MinimalSchemaTest.testAllConfiguredHandlers(MinimalSchemaTest.java:130)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:531)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(java.base@9-ea/Thread.java:804)
Caused by: java.lang.RuntimeException: Exception during query
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:785)
at 
org.apache.solr.MinimalSchemaTest.testAllConfiguredHandlers(MinimalSchemaTest.java:121)
... 39 more
Caused by: org.xml.sax.SAXParseException; 

[jira] [Commented] (SOLR-8996) Add Random Streaming Expression

2016-05-05 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273315#comment-15273315
 ] 

Joel Bernstein commented on SOLR-8996:
--

Sure, sounds good.

> Add Random Streaming Expression
> ---
>
> Key: SOLR-8996
> URL: https://issues.apache.org/jira/browse/SOLR-8996
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.1
>
> Attachments: RandomStream.java, SOLR-8996.patch
>
>
> The random Streaming Expression will return a *limited* random stream of 
> Tuples that match a query. This will be useful in many different scenarios 
> where random data sets are needed.
> Proposed syntax:
> {code}
> random(baskets, q="productID:productX", rows="100", fl="basketID") 
> {code}
> The sample code above will query the *baskets* collection and return 100 
> random *basketID's* where the productID is productX.
> The underlying implementation will rely on Solr's random field type.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9065) Migrate solrj tests from AbstractDistribZkTestBase to SolrCloudTestCase

2016-05-05 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273314#comment-15273314
 ] 

Joel Bernstein commented on SOLR-9065:
--

No problem, and thanks for your work on the tests. It's a big improvement!

> Migrate solrj tests from AbstractDistribZkTestBase to SolrCloudTestCase
> ---
>
> Key: SOLR-9065
> URL: https://issues.apache.org/jira/browse/SOLR-9065
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.1, master
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-9065.patch
>
>
> AbstractDistribZkTestBase sets up collections using the legacy core-based 
> system, and does a lot of comparing things against control collections that 
> the SolrJ tests really don't require.  We should migrate these tests to using 
> SolrCloudTestCase instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8996) Add Random Streaming Expression

2016-05-05 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273301#comment-15273301
 ] 

Dennis Gove commented on SOLR-8996:
---

[~joel.bernstein], I saw a failure of the test for this stream. Because there 
are only 5 records in the collection during the test, I believe there is a 
1 in 5! probability (1 in 120) that the test will fail because the two 
streams return the records in the same order. Below is a small patch that 
increases the # of records to 1000, decreasing the probability of a failure 
to 1 in 1000! (1 in basically never). Do you think it's worth re-opening this 
and applying the patch?

{code}
diff --git 
a/solr/solrj/src/test/org/apache/solr/client/solrj/io/stream/StreamExpressionTest.java
 
b/solr/solrj/src/test/org/apache/solr/client/solrj/io/stream/StreamExpressionTest.java
index d273477..267eeca 100644
--- 
a/solr/solrj/src/test/org/apache/solr/client/solrj/io/stream/StreamExpressionTest.java
+++ 
b/solr/solrj/src/test/org/apache/solr/client/solrj/io/stream/StreamExpressionTest.java
@@ -483,13 +483,12 @@ public class StreamExpressionTest extends 
SolrCloudTestCase {
   @Test
   public void testRandomStream() throws Exception {

-new UpdateRequest()
-.add(id, "0", "a_s", "hello0", "a_i", "0", "a_f", "0")
-.add(id, "2", "a_s", "hello2", "a_i", "2", "a_f", "0")
-.add(id, "3", "a_s", "hello3", "a_i", "3", "a_f", "3")
-.add(id, "4", "a_s", "hello4", "a_i", "4", "a_f", "4")
-.add(id, "1", "a_s", "hello1", "a_i", "1", "a_f", "1")
-.commit(cluster.getSolrClient(), COLLECTION);
+UpdateRequest update = new UpdateRequest();
+for(int idx = 0; idx < 1000; ++idx){
+  String idxString = new Integer(idx).toString();
+  update.add(id,idxString, "a_s", "hello" + idxString, "a_i", idxString, 
"a_f", idxString);
+}
+update.commit(cluster.getSolrClient(), COLLECTION);

 StreamExpression expression;
 TupleStream stream;
@@ -504,17 +503,17 @@ public class StreamExpressionTest extends 
SolrCloudTestCase {
 try {
   context.setSolrClientCache(cache);

-  expression = StreamExpressionParser.parse("random(" + COLLECTION + ", 
q=\"*:*\", rows=\"10\", fl=\"id, a_i\")");
+  expression = StreamExpressionParser.parse("random(" + COLLECTION + ", 
q=\"*:*\", rows=\"1000\", fl=\"id, a_i\")");
   stream = factory.constructStream(expression);
   stream.setStreamContext(context);
   List<Tuple> tuples1 = getTuples(stream);
-  assert (tuples1.size() == 5);
+  assert (tuples1.size() == 1000);

-  expression = StreamExpressionParser.parse("random(" + COLLECTION + ", 
q=\"*:*\", rows=\"10\", fl=\"id, a_i\")");
+  expression = StreamExpressionParser.parse("random(" + COLLECTION + ", 
q=\"*:*\", rows=\"1000\", fl=\"id, a_i\")");
   stream = factory.constructStream(expression);
   stream.setStreamContext(context);
   List<Tuple> tuples2 = getTuples(stream);
-  assert (tuples2.size() == 5);
+  assert (tuples2.size() == 1000);

   boolean different = false;
   for (int i = 0; i < tuples1.size(); i++) {
{code}

> Add Random Streaming Expression
> ---
>
> Key: SOLR-8996
> URL: https://issues.apache.org/jira/browse/SOLR-8996
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.1
>
> Attachments: RandomStream.java, SOLR-8996.patch
>
>
> The random Streaming Expression will return a *limited* random stream of 
> Tuples that match a query. This will be useful in many different scenarios 
> where random data sets are needed.
> Proposed syntax:
> {code}
> random(baskets, q="productID:productX", rows="100", fl="basketID") 
> {code}
> The sample code above will query the *baskets* collection and return 100 
> random *basketID's* where the productID is productX.
> The underlying implementation will rely on Solr's random field type.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9058) hashJoin does not work when "on" maps fields with "="

2016-05-05 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273283#comment-15273283
 ] 

Dennis Gove edited comment on SOLR-9058 at 5/5/16 11:09 PM:


This patch fixes up both hashJoin and outerHashJoin and adds tests for the 
scenario to both. 

The approach taken is to keep a left and a right list of fields to hash on. 
During construction it checks each field in the on clause for an '=' and, if 
found, splits it into a left-side and a right-side field name. If not found, it 
uses that single field name for both sides. It then uses those lists when 
reading tuples from the stream.
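
For illustration, a rough sketch of that splitting (hypothetical class and field 
names, not the code in the patch):

{code}
// Hypothetical sketch of the approach described above; names do not match the patch.
import java.util.ArrayList;
import java.util.List;

public class OnClauseParser {
  public final List<String> leftFields = new ArrayList<>();
  public final List<String> rightFields = new ArrayList<>();

  // on="fieldA=fieldB,fieldC" -> left=[fieldA, fieldC], right=[fieldB, fieldC]
  public OnClauseParser(String onClause) {
    for (String part : onClause.split(",")) {
      String field = part.trim();
      if (field.contains("=")) {
        String[] sides = field.split("=", 2);
        leftFields.add(sides[0].trim());
        rightFields.add(sides[1].trim());
      } else {
        // a single name hashes the same field on both sides
        leftFields.add(field);
        rightFields.add(field);
      }
    }
  }
}
{code}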

[~Osthold], good catch on this one and thank you for the test showing the 
failure (I swiped that and included it in the test cast).


was (Author: dpgove):
This patch fixes up both hashJoin and outerHashJoin and adds tests for the 
scenario to both. 

The approach taken is to have a left and right list of fields to hash on. 
During construction it checks for an = in the field and if found will split 
into a left and right side field name. If not found then it uses that single 
field name for both the left and right side. It then uses those lists when 
reading tuples from the stream.

Stephan, good catch on this one and thank you for the test showing the failure 
(I swiped that and included it in the test case).

> hashJoin does not work when "on" maps fields with "="
> -
>
> Key: SOLR-9058
> URL: https://issues.apache.org/jira/browse/SOLR-9058
> Project: Solr
>  Issue Type: Bug
>Reporter: Stephan Osthold
>Assignee: Dennis Gove
>Priority: Minor
> Attachments: SOLR-9058.patch
>
>
> hashJoin does not work when "on" maps fields with "="
> eg.
> hashJoin(
>  ...
>  on="field1=field2"
> )
> See link for fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9058) hashJoin does not work when "on" maps fields with "="

2016-05-05 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273283#comment-15273283
 ] 

Dennis Gove edited comment on SOLR-9058 at 5/5/16 11:09 PM:


This patch fixes up both hashJoin and outerHashJoin and adds tests for the 
scenario to both. 

The approach taken is to have a left and right list of fields to hash on. 
During construction it checks for an = in the field and if found will split 
into a left and right side field name. If not found then it uses that single 
field name for both the left and right side. It then uses those lists when 
reading tuples from the stream.

[~Osthold], good catch on this one and thank you for the test showing the 
failure (I swiped that and included it in the test case).


was (Author: dpgove):
This patch fixes up both hashJoin and outerHashJoin and adds tests for the 
scenario to both. 

The approach taken is to have a left and right list of fields to hash on. 
During construction it checks for an = in the field and if found will split 
into a left and right side field name. If not found then it uses that single 
field name for both the left and right side. It then uses those lists when 
reading tuples from the stream.

[~Osthold], good catch on this one and thank you for the test showing the 
failure (I swiped that and included it in the test case).

> hashJoin does not work when "on" maps fields with "="
> -
>
> Key: SOLR-9058
> URL: https://issues.apache.org/jira/browse/SOLR-9058
> Project: Solr
>  Issue Type: Bug
>Reporter: Stephan Osthold
>Assignee: Dennis Gove
>Priority: Minor
> Attachments: SOLR-9058.patch
>
>
> hashJoin does not work when "on" maps fields with "="
> eg.
> hashJoin(
>  ...
>  on="field1=field2"
> )
> See link for fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9058) hashJoin does not work when "on" maps fields with "="

2016-05-05 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273283#comment-15273283
 ] 

Dennis Gove edited comment on SOLR-9058 at 5/5/16 11:08 PM:


This patch fixes up both hashJoin and outerHashJoin and adds tests for the 
scenario to both. 

The approach taken is to have a left and right list of fields to hash on. 
During construction it checks for an = in the field and if found will split 
into a left and right side field name. If not found then it uses that single 
field name for both the left and right side. It then uses those lists when 
reading tuples from the stream.

Stephan, good catch on this one and thank you for the test showing the failure 
(I swiped that and included it in the test case).


was (Author: dpgove):
This patch fixes up both hashJoin and outerHashJoin and adds tests for the 
scenario to both. 

The approach taken is to have a left and right list of fields to hash on. It 
checks for an = in the field and if found will split into a left and right side 
field name. If not found then it uses that single field name for both the left 
and right side.

> hashJoin does not work when "on" maps fields with "="
> -
>
> Key: SOLR-9058
> URL: https://issues.apache.org/jira/browse/SOLR-9058
> Project: Solr
>  Issue Type: Bug
>Reporter: Stephan Osthold
>Assignee: Dennis Gove
>Priority: Minor
> Attachments: SOLR-9058.patch
>
>
> hashJoin does not work when "on" maps fields with "="
> eg.
> hashJoin(
>  ...
>  on="field1=field2"
> )
> See link for fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9058) hashJoin does not work when "on" maps fields with "="

2016-05-05 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove updated SOLR-9058:
--
Attachment: SOLR-9058.patch

This patch fixes up both hashJoin and outerHashJoin and adds tests for the 
scenario to both. 

The approach taken is to have a left and right list of fields to hash on. It 
checks for an = in the field and if found will split into a left and right side 
field name. If not found then it uses that single field name for both the left 
and right side.

> hashJoin does not work when "on" maps fields with "="
> -
>
> Key: SOLR-9058
> URL: https://issues.apache.org/jira/browse/SOLR-9058
> Project: Solr
>  Issue Type: Bug
>Reporter: Stephan Osthold
>Assignee: Dennis Gove
>Priority: Minor
> Attachments: SOLR-9058.patch
>
>
> hashJoin does not work when "on" maps fields with "="
> eg.
> hashJoin(
>  ...
>  on="field1=field2"
> )
> See link for fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8297) Allow join query over 2 sharded collections: enhance functionality and exception handling

2016-05-05 Thread Shikha Somani (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273269#comment-15273269
 ] 

Shikha Somani commented on SOLR-8297:
-

No.
HttpSolrClient was the only way join queries could be run in Solr 4.x. 
Join queries were not supported in cloud mode; they threw the exception: 
"Cross-core join: no such core".

> Allow join query over 2 sharded collections: enhance functionality and 
> exception handling
> -
>
> Key: SOLR-8297
> URL: https://issues.apache.org/jira/browse/SOLR-8297
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Paul Blanchaert
>
> Enhancement based on SOLR-4905. New Jira issue raised as suggested by Mikhail 
> Khludnev.
> A) exception handling:
> The exception "SolrCloud join: multiple shards not yet supported" thrown in 
> the function findLocalReplicaForFromIndex of JoinQParserPlugin is not 
> triggered correctly: in my use case, I have a join on a facet.query, and when my 
> results are only found in one shard and the facet.query with the join is 
> querying the last replica of the last slice, the exception is not thrown.
> I believe it's better to verify the number of slices when we want to enforce the 
> "multiple shards not yet supported" restriction (so the exception is thrown when 
> zkController.getClusterState().getSlices(fromIndex).size() > 1).
> B) functional enhancement:
> I would expect no problem performing a cross-core join over 
> sharded collections when the following conditions are met:
> 1) both collections are sharded with the same replicationFactor and numShards
> 2) router.field of the collections is set to the same "key-field" (collection 
> of "fromindex" has router.field = "from" field and collection joined to has 
> router.field = "to" field)
> The router.field setup ensures that documents with the same "key-field" are 
> routed to the same node. 
> So the combination based on the "key-field" should always be available within 
> the same node.
> From a user perspective, I believe these assumptions represent a "normal" 
> use case for a cross-core join in SolrCloud.
> Hope this helps
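
To make condition B) concrete, creating two collections that satisfy it could look like the hedged SolrJ sketch below; the collection names, config name and shared routing field are illustrative, not taken from the issue.

{code}
// Sketch: both collections share numShards, replicationFactor and a router.field
// carrying the join key, so matching documents land on corresponding shards.
CloudSolrClient client = new CloudSolrClient("localhost:9983");

new CollectionAdminRequest.Create()
    .setCollectionName("departments")   // "fromindex" side
    .setConfigName("conf1")
    .setNumShards(2)
    .setReplicationFactor(2)
    .setRouterField("dept_id")
    .process(client);

new CollectionAdminRequest.Create()
    .setCollectionName("people")        // "to" side
    .setConfigName("conf1")
    .setNumShards(2)
    .setReplicationFactor(2)
    .setRouterField("dept_id")
    .process(client);
{code}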



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9065) Migrate solrj tests from AbstractDistribZkTestBase to SolrCloudTestCase

2016-05-05 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273247#comment-15273247
 ] 

Alan Woodward commented on SOLR-9065:
-

bq. can you please send a message to the dev list to announce your refactoring 
plans/schedule so help us coordinate our efforts? 

Will do so in the next couple of days.  Sorry for stepping on your toes, guys.

> Migrate solrj tests from AbstractDistribZkTestBase to SolrCloudTestCase
> ---
>
> Key: SOLR-9065
> URL: https://issues.apache.org/jira/browse/SOLR-9065
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.1, master
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-9065.patch
>
>
> AbstractDistribZkTestBase sets up collections using the legacy core-based 
> system, and does a lot of comparing things against control collections that 
> the SolrJ tests really don't require.  We should migrate these tests to using 
> SolrCloudTestCase instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_92) - Build # 575 - Still Failing!

2016-05-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/575/
Java: 32bit/jdk1.8.0_92 -client -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.MinimalSchemaTest.testAllConfiguredHandlers

Error Message:
exception w/handler: '/graph'

Stack Trace:
java.lang.RuntimeException: exception w/handler: '/graph'
at 
__randomizedtesting.SeedInfo.seed([474E911E2550F381:AA502A24AE5DF205]:0)
at 
org.apache.solr.MinimalSchemaTest.testAllConfiguredHandlers(MinimalSchemaTest.java:130)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: Exception during query
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:786)
at 
org.apache.solr.MinimalSchemaTest.testAllConfiguredHandlers(MinimalSchemaTest.java:121)
... 39 more
Caused by: org.xml.sax.SAXParseException; lineNumber: 1; columnNumber: 1; 
Content is not allowed in prolog.
at 

[jira] [Commented] (SOLR-8323) Add CollectionWatcher API to ZkStateReader

2016-05-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273176#comment-15273176
 ] 

ASF GitHub Bot commented on SOLR-8323:
--

Github user markrmiller commented on the pull request:

https://github.com/apache/lucene-solr/pull/32#issuecomment-217291207
  
> Although, thinking more about that, we already have a separate executor 
for watchers, don't we?

Yes, every watch-firing event should run from a dedicated executor rather 
than using ZK's event thread. I have not dug in enough here to know whether it covers 
what you guys are talking about, but holding up a Watcher thread should no 
longer interfere with the ZK client's internal event thread.
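
As a rough sketch of that pattern (not the actual ZkStateReader code; the executor and the handleEvent callback are stand-ins):

{code}
// Hand potentially slow notification work to a dedicated executor so the
// ZooKeeper client's event thread returns immediately.
ExecutorService notifications = Executors.newSingleThreadExecutor();

Watcher watcher = event -> notifications.submit(() -> {
  // user callbacks, cluster-state recomputation, etc. run here,
  // off the ZooKeeper event thread
  handleEvent(event);
});
{code}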


> Add CollectionWatcher API to ZkStateReader
> --
>
> Key: SOLR-8323
> URL: https://issues.apache.org/jira/browse/SOLR-8323
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: master
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-8323.patch, SOLR-8323.patch, SOLR-8323.patch, 
> SOLR-8323.patch
>
>
> An API to watch for changes to collection state would be a generally useful 
> thing, both internally and for client use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request: SOLR-8323

2016-05-05 Thread markrmiller
Github user markrmiller commented on the pull request:

https://github.com/apache/lucene-solr/pull/32#issuecomment-217291207
  
> Although, thinking more about that, we already have a separate executor 
for watchers, don't we?

Yes, every watch-firing event should run from a dedicated executor rather 
than using ZK's event thread. I have not dug in enough here to know whether it covers 
what you guys are talking about, but holding up a Watcher thread should no 
longer interfere with the ZK client's internal event thread.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8208) DocTransformer executes sub-queries

2016-05-05 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-8208:
---
Attachment: SOLR-8208.patch

* Good news! [~caomanhdat], your approach worked out quite well! For what it's worth, we 
have a backdoor to suspend SolrRequestInfo; I wonder whether it is legal enough. 
* threads were removed (I'll comment separately about the pain that those who want 
threads back would take on with them). 
* added a few tests proving that \[subquery] is on par with \[child]
* moved the tests into a subpackage
* one question about code style: the core class (300 LOC) is compiled into more 
than five classes; doesn't it deserve a separate 
{{o.a.s.response.transform.subquery}} package?

> DocTransformer executes sub-queries
> ---
>
> Key: SOLR-8208
> URL: https://issues.apache.org/jira/browse/SOLR-8208
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>  Labels: features, newbie
> Attachments: SOLR-8208.diff, SOLR-8208.patch, SOLR-8208.patch, 
> SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, 
> SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, 
> SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, 
> SOLR-8208.patch, SOLR-8208.patch
>
>
> The initial idea was to return "from" side of query time join via 
> doctransformer. I suppose it isn't  query-time join specific, thus let to 
> specify any query and parameters for them, let's call it sub-query. But it 
> might be problematic to escape subquery parameters, including local ones, 
> e.g. what if the subquery needs to specify its own doctransformer in fl=\[..\] ?
> I suppose we can specify subquery parameter prefix:
> {code}
> ..&q=name_s:john&fl=*,depts:[subquery fromIndex=departments]&
> depts.q={!term f=dept_id_s v=$row.dept_ss_dv}&depts.fl=text_t,dept_id_s_dv&depts.rows=12&depts.sort=id desc
> {code}   
> response is like
> {code}   
> <response>
> ...
>   <result name="response" numFound="1" start="0">
>     <doc>
>       <str name="id">1</str>
>       <str name="name_s">john</str>
>       ..
>       <result name="depts" numFound="2" start="0">
>         <doc>
>           <str name="dept_id_s_dv">Engineering</str>
>           <str name="text_t">These guys develop stuff</str>
>         </doc>
>         <doc>
>           <str name="dept_id_s_dv">Support</str>
>           <str name="text_t">These guys help users</str>
>         </doc>
>       </result>
>     </doc>
>   </result>
> </response>
> {code}   
> * {{fl=depts:\[subquery]}} executes a separate request for every query result 
> row, and adds it into a document as a separate result list. The given field 
> name (here it's 'depts') is used as a prefix to shift subquery parameters 
> from main query parameter, eg {{depts.q}} turns to {{q}} for subquery, 
> {{depts.rows}} to {{rows}}.
> * document fields are available as implicit parameters with prefix {{row.}} 
> eg. if result document has a field {{dept_id}} it can be referred as 
> {{v=$row.dept_id}} this combines well with \{!terms} query parser   
> * {{separator=','}} is used when multiple field values are combined in 
> parameter. eg. a document has a multivalue field {code}dept_ids={2,3}{code}, 
> thus referring to it via {code}..&q={!terms f=id v=$row.dept_ids}&..{code} 
> executes a subquery {code}{!terms f=id}2,3{code}. 
> When omitted it's a comma. 
> * {{fromIndex=othercore}} optional param allows to run subquery on other 
> core, like it works on query time join
> However, it doesn't work on a cloud setup (and will let you know); instead it's 
> proposed to use regular params ({{collection}}, {{shards}} - whatever, with the 
> subquery prefix as below) to issue the subquery to a collection
> {code}
> q=name_s:dave&indent=true&fl=*,depts:[subquery]&rows=20&
> depts.q={!terms f=dept_id_s v=$row.dept_ss_dv}&depts.fl=text_t&
> depts.indent=true&
> depts.collection=departments&
> depts.rows=10&depts.logParamsList=q,fl,rows,row.dept_ss_dv
> {code}
> Caveat: it is likely to be quite slow; it handles only the search result page, not the 
> entire result set. 
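
For completeness, the cloud example above could be issued through SolrJ roughly as follows; the "people" collection name and the pre-existing CloudSolrClient instance are assumptions, the rest mirrors the parameters shown in the description.

{code}
// Sketch: main query plus prefixed subquery parameters.
SolrQuery query = new SolrQuery("name_s:dave");
query.setFields("*", "depts:[subquery]");
query.setRows(20);
query.set("depts.q", "{!terms f=dept_id_s v=$row.dept_ss_dv}");
query.set("depts.fl", "text_t");
query.set("depts.collection", "departments");
query.set("depts.rows", "10");
QueryResponse rsp = cloudSolrClient.query("people", query);
{code}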



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Windows (64bit/jdk1.8.0_92) - Build # 157 - Still Failing!

2016-05-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/157/
Java: 64bit/jdk1.8.0_92 -XX:-UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.schema.TestManagedSchemaAPI

Error Message:
ObjectTracker found 8 object(s) that were not released!!! 
[MockDirectoryWrapper, MockDirectoryWrapper, TransactionLog, TransactionLog, 
MDCAwareThreadPoolExecutor, MockDirectoryWrapper, MockDirectoryWrapper, 
MDCAwareThreadPoolExecutor]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 8 object(s) that were not 
released!!! [MockDirectoryWrapper, MockDirectoryWrapper, TransactionLog, 
TransactionLog, MDCAwareThreadPoolExecutor, MockDirectoryWrapper, 
MockDirectoryWrapper, MDCAwareThreadPoolExecutor]
at __randomizedtesting.SeedInfo.seed([18545C88E3F8E230]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:256)
at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.schema.TestManagedSchemaAPI

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.schema.TestManagedSchemaAPI_18545C88E3F8E230-001\tempDir-001\node1\testschemaapi_shard1_replica2\data\tlog\tlog.000:
 java.nio.file.FileSystemException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.schema.TestManagedSchemaAPI_18545C88E3F8E230-001\tempDir-001\node1\testschemaapi_shard1_replica2\data\tlog\tlog.000:
 The process cannot access the file because it is being used by another 
process. 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.schema.TestManagedSchemaAPI_18545C88E3F8E230-001\tempDir-001\node1\testschemaapi_shard1_replica2\data\tlog:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.schema.TestManagedSchemaAPI_18545C88E3F8E230-001\tempDir-001\node1\testschemaapi_shard1_replica2\data\tlog

C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.schema.TestManagedSchemaAPI_18545C88E3F8E230-001\tempDir-001\node1\testschemaapi_shard1_replica2\data:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.schema.TestManagedSchemaAPI_18545C88E3F8E230-001\tempDir-001\node1\testschemaapi_shard1_replica2\data


[jira] [Commented] (SOLR-9076) Update to Hadoop 2.7.2

2016-05-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273162#comment-15273162
 ] 

Mark Miller commented on SOLR-9076:
---

I hit SOLR-7115 for some reason trying to get this to work. A workaround is 
currently included in this patch.

> Update to Hadoop 2.7.2
> --
>
> Key: SOLR-9076
> URL: https://issues.apache.org/jira/browse/SOLR-9076
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-9076.patch, SOLR-9076.patch, SOLR-9076.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9045) configurable RecoveryStrategy support

2016-05-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273158#comment-15273158
 ] 

Mark Miller commented on SOLR-9045:
---

Could be DataSynchronizationRunner or something too, but given it's the prime 
driver of the recovery status, I kind of like recovery in the name. Of course, 
tlog replay should probably be under recovery too, and that is very distinct 
from this class, so perhaps it's better it doesn't try and synergize with the 
recovery key word, as it really just may be a piece of the recovery puzzle 
longer term.

> configurable RecoveryStrategy support 
> --
>
> Key: SOLR-9045
> URL: https://issues.apache.org/jira/browse/SOLR-9045
> Project: Solr
>  Issue Type: New Feature
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
>
> objectives:
> * To allow users to change RecoveryStrategy settings such as maxRetries and 
> startingRecoveryDelay.
> * To support configuration of a custom recovery strategy.
> illustrative solrconfig.xml snippet:
> {code}
> <recoveryStrategy class="solr.RecoveryStrategy">
>   <int name="maxRetries">250</int>
>   ...
> </recoveryStrategy>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7271) Cleanup jira's concept of 'master' and '6.0'

2016-05-05 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated LUCENE-7271:
-
Description: 
Jira's concept of "Fix Version: master" is currently screwed up, as noted & 
discussed in this mailing list thread...

http://mail-archives.apache.org/mod_mbox/lucene-dev/201604.mbox/%3Calpine.DEB.2.11.1604131529140.15570@tray%3E

The current best plan of attack (summary) is:
* use Jira's "Merge Versions" feature to merge {{master}} into {{6.0}}
* add a new {{master (7.0)}} to use moving forward
* manually audit/fix the fixVersion of some clean up issues as needed.

I'm using this issue to track this work.



Detailed Check list of planned steps:

* S1: Generate a CSV report listing all resolved/closed Jira's with 
'fixVersion=master AND fixVersion=6.1'
** 
https://issues.apache.org/jira/issues/?jql=project%20in%20%28LUCENE%2C%20SOLR%29%20AND%20status%20in%20%28Resolved%2C%20Closed%29%20AND%20fixVersion%20%3D%20master%20AND%20fixVersion%20%3D%206.1%20ORDER%20BY%20resolved%20DESC%2C%20key%20DESC
*** currently about ~100 issues
** The operating assumption is that any non-resolved issues should have the 
fixVersion set correctly if/when they are resolved in the future
* S2: Use Jira's "Bulk Edit" feature to add comments *W/O SENDING EMAIL* to 
every issue currently associated with fixVersion=6.0 or fixVersion=master
** The comments will include unique strings based on the specific query 
done, and will map the list back to this issue (ex 
{{LUCENE-7271_20160503_master}} and {{LUCENE-7271_20160503_60}})
** These comments will serve as a backup plan making it possible to find all 
issues affected (by merging jira's concepts of 'master' and '6.0') after the 
fact if need be.
** Queries to use for bulk edits:
*** master: 
https://issues.apache.org/jira/issues/?jql=project%20in%20%28LUCENE%2C%20SOLR%29%20AND%20fixVersion%20%3D%20master%20ORDER%20BY%20key%20DESC
*** 6.0: 
https://issues.apache.org/jira/issues/?jql=project%20in%20%28LUCENE%2C%20SOLR%29%20AND%20fixVersion%20%3D%206.0%20ORDER%20BY%20key%20DESC
* S3: Use Jira's "Merge Versions" feature to merge "master" into "6.0"
** This needs to be done distinctly for both LUCENE and SOLR
* S4: Add a new "master (7.0)" version to Jira
** This needs to be done distinctly for both LUCENE and SOLR
* S5: audit every issue in the CSV file from S1 above to determine if/when the 
issue should get fixVersion="master (7.0)" *added* to it or fixVersion="6.0" 
*removed* from it:
** if it only ever had commits to master (ie: before branch_6x was made on 
March 2nd) then no action needed.
** if it has distinct commits to both master after branch_6x was made on March 
2nd, then fixVersion="master (7.0)" should definitely be added.
** if it has no commits to branch_6_0, and the only commits to branch_6x are 
after branch_6_0 was created on March 3rd, then fixVersion="6.0" should be 
removed.
** if there are no obvious commits in the issue comments, then sanity check why 
it has any fixVersion at all (can't reproduce? dup? etc...)
* S6: Audit CHANGES.txt & git commits and *replace* fixVersion=6.0 with 
fixVersion="master (7.0)" on the handful of issues where appropriate in case 
they fell through the cracks in S5:
** Look at the 7.0 section of lucene/CHANGES.txt & solr/CHANGES.txt for new 
features
*** currently only 1 jira mentioned in either files in 7.0 section
** Use {{git co releases/lucene-solr/6.0.0 && git cherry -v master | egrep 
'^\+'}} to find changesets on master that were not included in 6.0
*** currently ~40 commits
** before removing fixVersion=6.0 from any of these issues, sanity check if 
this is a deprecation type situation (involving diff commits in both 6.0 and 
master) in which case fixVersion="master (7.0)" should be _added_ in addition 
to fixVersion=6.0




  was:
Jira's concept of "Fix Version: master" is currently screwed up, as noted & 
discussed in this mailing list thread...

http://mail-archives.apache.org/mod_mbox/lucene-dev/201604.mbox/%3Calpine.DEB.2.11.1604131529140.15570@tray%3E

The current best plan of attack (summary) is:
* use Jira's "Merge Versions" feature to merge {{master}} into {{6.0}}
* add a new {{master (7.0}}} to use moving forward
* manually audit/fix the fixVersion of some clean up issues as needed.

I'm using this issue to track this work.



Detailed Check list of planned steps:

* S1: Generate a CSV report listing all resolved/closed Jira's with 
'fixVersion=master AND fixVersion=6.1'
** 
https://issues.apache.org/jira/issues/?jql=project%20in%20%28LUCENE%2C%20SOLR%29%20AND%20status%20in%20%28Resolved%2C%20Closed%29%20AND%20fixVersion%20%3D%20master%20AND%20fixVersion%20%3D%206.1%20ORDER%20BY%20resolved%20DESC%2C%20key%20DESC
*** currently about ~100 issues
** The operating assumption is that any non-resolved issues should have the 
fixVersion set correctly if/when they are resolved in the future
* S2: Use Jira's 

[jira] [Commented] (SOLR-9045) configurable RecoveryStrategy support

2016-05-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273145#comment-15273145
 ] 

Mark Miller commented on SOLR-9045:
---

bq. How about renaming RecoveryStrategy to RecoveryImplementation (or something 
like it)?

Trying to think of what this actually is; currently I'm leaning toward 
DataRecoveryRunner or something.

> configurable RecoveryStrategy support 
> --
>
> Key: SOLR-9045
> URL: https://issues.apache.org/jira/browse/SOLR-9045
> Project: Solr
>  Issue Type: New Feature
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
>
> objectives:
> * To allow users to change RecoveryStrategy settings such as maxRetries and 
> startingRecoveryDelay.
> * To support configuration of a custom recovery strategy.
> illustrative solrconfig.xml snippet:
> {code}
> <recoveryStrategy class="solr.RecoveryStrategy">
>   <int name="maxRetries">250</int>
>   ...
> </recoveryStrategy>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9076) Update to Hadoop 2.7.2

2016-05-05 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-9076:
--
Attachment: SOLR-9076.patch

> Update to Hadoop 2.7.2
> --
>
> Key: SOLR-9076
> URL: https://issues.apache.org/jira/browse/SOLR-9076
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-9076.patch, SOLR-9076.patch, SOLR-9076.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7115) UpdateLog can miss closing transaction log objects.

2016-05-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273103#comment-15273103
 ] 

Mark Miller commented on SOLR-7115:
---

I see this consistently trying to update to hadoop 2.7.2.

I traced in and it seems like the problem is that on close, the following call 
in commit can hang and keep the post commit updatelog code from being called. 
Oddly, this hang doesn't seem to last because the test does not complain about 
the thread remaining.

{code}
  RefCounted<SolrIndexSearcher> searchHolder = core.openNewSearcher(true, true);
  searchHolder.decref();
{code}

Checking if the core is closed before calling that seems to help.
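
The guard being described amounts to something like the sketch below (the placement is illustrative, not the committed fix):

{code}
// Skip opening the post-commit searcher when the core is already shutting down,
// so commit-on-close cannot hang here.
if (!core.isClosed()) {
  RefCounted<SolrIndexSearcher> searchHolder = core.openNewSearcher(true, true);
  searchHolder.decref();
}
{code}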

> UpdateLog can miss closing transaction log objects.
> ---
>
> Key: SOLR-7115
> URL: https://issues.apache.org/jira/browse/SOLR-7115
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>
> I've seen this happen on YourKit and in various tests - especially since 
> adding resource release tracking to the log objects. Now I've got a test that 
> catches it in SOLR-7113.
> It seems that in precommit, if prevTlog is not null, we need to close it 
> because we are going to overwrite prevTlog with a new log.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.5-Linux (64bit/jdk1.7.0_80) - Build # 266 - Still Failing!

2016-05-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Linux/266/
Java: 64bit/jdk1.7.0_80 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.schema.TestManagedSchemaAPI.test

Error Message:
Error from server at http://127.0.0.1:36313/solr/testschemaapi_shard1_replica1: 
ERROR: [doc=2] unknown field 'myNewField1'

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:36313/solr/testschemaapi_shard1_replica1: ERROR: 
[doc=2] unknown field 'myNewField1'
at 
__randomizedtesting.SeedInfo.seed([91AE8EA7EC777680:19FAB17D428B1B78]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:632)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:981)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at 
org.apache.solr.schema.TestManagedSchemaAPI.testAddFieldAndDocument(TestManagedSchemaAPI.java:101)
at 
org.apache.solr.schema.TestManagedSchemaAPI.test(TestManagedSchemaAPI.java:69)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Commented] (SOLR-8972) Add GraphHandler and GraphMLResponseWriter to support graph visualizations

2016-05-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273083#comment-15273083
 ] 

ASF subversion and git services commented on SOLR-8972:
---

Commit 928a3cf268b00e9589238adf08848d8eee7c83c0 in lucene-solr's branch 
refs/heads/branch_6x from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=928a3cf ]

SOLR-8972: Add GraphHandler and GraphMLResponseWriter to support graph 
visualizations


> Add GraphHandler and GraphMLResponseWriter to support graph visualizations
> --
>
> Key: SOLR-8972
> URL: https://issues.apache.org/jira/browse/SOLR-8972
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.1
>
> Attachments: GraphHandler.java, GraphMLResponseWriter.java, 
> SOLR-8972.patch, SOLR-8972.patch, SOLR-8972.patch
>
>
> SOLR-8925 is shaping up nicely. It would be great if Solr could support 
> outputting graphs in GraphML. This will allow users to visualize their graphs 
> in a number of graph visualization tools (NodeXL, Gephi, Tulip etc...). This 
> ticket will create a new Graph handler which will take a Streaming Expression 
> graph traversal and output GraphML. A new GraphMLResponseWriter will handle 
> the GraphML formatting. In future releases we can consider supporting other 
> graph formats.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8972) Add GraphHandler and GraphMLResponseWriter to support graph visualizations

2016-05-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273084#comment-15273084
 ] 

ASF subversion and git services commented on SOLR-8972:
---

Commit 8b56f67adb4192793795b6351ef021cb5a4149ac in lucene-solr's branch 
refs/heads/branch_6x from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8b56f67 ]

SOLR-8972: Update CHANGES.txt


> Add GraphHandler and GraphMLResponseWriter to support graph visualizations
> --
>
> Key: SOLR-8972
> URL: https://issues.apache.org/jira/browse/SOLR-8972
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.1
>
> Attachments: GraphHandler.java, GraphMLResponseWriter.java, 
> SOLR-8972.patch, SOLR-8972.patch, SOLR-8972.patch
>
>
> SOLR-8925 is shaping up nicely. It would be great if Solr could support 
> outputting graphs in GraphML. This will allow users to visualize their graphs 
> in a number of graph visualization tools (NodeXL, Gephi, Tulip etc...). This 
> ticket will create a new Graph handler which will take a Streaming Expression 
> graph traversal and output GraphML. A new GraphMLResponseWriter will handle 
> the GraphML formatting. In future releases we can consider supporting other 
> graph formats.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9036) Solr slave is doing full replication (entire index) of index after master restart

2016-05-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273079#comment-15273079
 ] 

ASF subversion and git services commented on SOLR-9036:
---

Commit a6f9c8e171b8f48d5ced9c74b41f875aef567634 in lucene-solr's branch 
refs/heads/branch_6x from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a6f9c8e ]

SOLR-9036: Disable doTestIndexFetchOnMasterRestart
(cherry picked from commit 1dd8775)


> Solr slave is doing full replication (entire index) of index after master 
> restart
> -
>
> Key: SOLR-9036
> URL: https://issues.apache.org/jira/browse/SOLR-9036
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java)
>Affects Versions: 5.3.1, 6.0
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Critical
>  Labels: impact-high
> Fix For: 6.1, master
>
> Attachments: SOLR-9036.patch, SOLR-9036.patch, SOLR-9036.patch
>
>
> This was first described in the following email:
> https://mail-archives.apache.org/mod_mbox/lucene-solr-user/201604.mbox/%3ccafgnfoyn+xmpxwzwbjuzddeuz7tjqhqktek6q7u8xgstqy3...@mail.gmail.com%3E
> I tried Solr 5.3.1 and Solr 6 and I can reproduce the problem. If the master 
> comes back online before the next polling interval then the slave finds 
> itself in sync with the master but if the master is down for at least one 
> polling interval then the slave pulls the entire full index from the master 
> even if the index has not changed on the master.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9036) Solr slave is doing full replication (entire index) of index after master restart

2016-05-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273077#comment-15273077
 ] 

ASF subversion and git services commented on SOLR-9036:
---

Commit 1dd877545fad0eae7be43fec109bceb4617fb6a4 in lucene-solr's branch 
refs/heads/master from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1dd8775 ]

SOLR-9036: Disable doTestIndexFetchOnMasterRestart


> Solr slave is doing full replication (entire index) of index after master 
> restart
> -
>
> Key: SOLR-9036
> URL: https://issues.apache.org/jira/browse/SOLR-9036
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java)
>Affects Versions: 5.3.1, 6.0
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Critical
>  Labels: impact-high
> Fix For: 6.1, master
>
> Attachments: SOLR-9036.patch, SOLR-9036.patch, SOLR-9036.patch
>
>
> This was first described in the following email:
> https://mail-archives.apache.org/mod_mbox/lucene-solr-user/201604.mbox/%3ccafgnfoyn+xmpxwzwbjuzddeuz7tjqhqktek6q7u8xgstqy3...@mail.gmail.com%3E
> I tried Solr 5.3.1 and Solr 6 and I can reproduce the problem. If the master 
> comes back online before the next polling interval then the slave finds 
> itself in sync with the master but if the master is down for at least one 
> polling interval then the slave pulls the entire full index from the master 
> even if the index has not changed on the master.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 1123 - Still Failing

2016-05-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1123/

1 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart

Error Message:
IOException occured when talking to server at: 
http://127.0.0.1:41297/solr/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:41297/solr/collection1
at 
__randomizedtesting.SeedInfo.seed([DFBBB77B86D1951E:74C739F2D0A5742]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:620)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:259)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:173)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:138)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:152)
at 
org.apache.solr.handler.TestReplicationHandler.index(TestReplicationHandler.java:176)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart(TestReplicationHandler.java:609)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[jira] [Reopened] (SOLR-9036) Solr slave is doing full replication (entire index) of index after master restart

2016-05-05 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar reopened SOLR-9036:
-

Re-opening because the new test added here fails often on slow machines.

> Solr slave is doing full replication (entire index) of index after master 
> restart
> -
>
> Key: SOLR-9036
> URL: https://issues.apache.org/jira/browse/SOLR-9036
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java)
>Affects Versions: 5.3.1, 6.0
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Critical
>  Labels: impact-high
> Fix For: 6.1, master
>
> Attachments: SOLR-9036.patch, SOLR-9036.patch, SOLR-9036.patch
>
>
> This was first described in the following email:
> https://mail-archives.apache.org/mod_mbox/lucene-solr-user/201604.mbox/%3ccafgnfoyn+xmpxwzwbjuzddeuz7tjqhqktek6q7u8xgstqy3...@mail.gmail.com%3E
> I tried Solr 5.3.1 and Solr 6 and I can reproduce the problem. If the master 
> comes back online before the next polling interval then the slave finds 
> itself in sync with the master but if the master is down for at least one 
> polling interval then the slave pulls the entire full index from the master 
> even if the index has not changed on the master.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-9063) CloudSolrClient with _route_ shouldn't require collection param to disambig cores

2016-05-05 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley closed SOLR-9063.
--
Resolution: Not A Problem

> CloudSolrClient with _route_ shouldn't require collection param to disambig 
> cores
> -
>
> Key: SOLR-9063
> URL: https://issues.apache.org/jira/browse/SOLR-9063
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud, SolrJ
>Affects Versions: 4.10.4
>Reporter: David Smiley
>Assignee: David Smiley
> Attachments: SOLR_9063.patch
>
>
> CloudSolrClient uses {{\_route\_}} to know where to send a request.  It sorta 
> works -- it'll go to an appropriate _node_.  But it will only go to the 
> correct core on that node if the {{collection}} parameter is explicitly 
> added.  In other words, it ignores the default collection configured on 
> CloudSolrClient.  It also seems to ignore the "collection" parameter to the 
> protected method sendRequest for this purpose too.  As I write this, see line 
> 1139 on master.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9063) CloudSolrClient with _route_ shouldn't require collection param to disambig cores

2016-05-05 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-9063:
---
Attachment: SOLR_9063.patch

Testing revealed at least one issue: It's insufficient for the condition to be 
simply {{collectionNames.size() > 1}} because the collection String might 
actually be a comma delimited list.  So that brings me to: {{if 
(collectionNames.size() > 1 && reqParams.get(UpdateParams.COLLECTION) == 
null)}}.  ...

Then StressHdfsTest failed reproducibly with seed A8BBAE62E21BB966.  There was 
some other failure but it didn't reproduce/happen again.  The failure is Jetty 
returning an HTML page of 404 status trying to reach a specific core URL.  
Here's the trace:

{noformat}
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:574)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:259)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:366)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1204)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:965)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:901)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:484)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:463)
at 
org.apache.solr.cloud.hdfs.StressHdfsTest.createAndDeleteCollection(StressHdfsTest.java:208)
at 
org.apache.solr.cloud.hdfs.StressHdfsTest.test(StressHdfsTest.java:104)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
{noformat}
I have suspicions it's an issue with the test but I'm not sure.  I don't have 
time to debug this one further, and as this isn't pressing I think I'll move on 
from this issue for now.

Stepping back a bit, might it make more sense to always go to the collection 
level URL at an appropriate node?  Kinda the opposite of what I've been trying 
to do.  That would be consistent, which is nice.  But then ideally, to retain 
some of the direct routing going on here, HttpSolrCall would have to gain the 
ability to dispatch based on {{\_route\_}}.  That sounds like a better path, 
actually, although the thought of it sheds more light on duplicated routing 
logic for different contexts: CloudSolrClient, HttpSolrCall, HttpShardHandler.  
Maybe elsewhere too.  :-/
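
In the meantime, a hedged SolrJ sketch of the obvious client-side workaround -- set the 
collection parameter explicitly whenever {{\_route\_}} is used. Collection and route 
values below are placeholders, and the constructor is the SolrJ 6.0-era one:

{noformat}
// Hedged sketch, not from the issue: when routing with _route_, also set the
// collection parameter explicitly so the request lands on the right core.
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.common.SolrInputDocument;

public class RouteWithCollection {
  public static void main(String[] args) throws Exception {
    try (CloudSolrClient client = new CloudSolrClient("zkhost:2181")) {
      client.setDefaultCollection("mycoll");

      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "parent!child1");

      UpdateRequest req = new UpdateRequest();
      req.add(doc);
      req.setParam("_route_", "parent!");
      req.setParam("collection", "mycoll");  // explicit, since _route_ alone may hit the wrong core
      req.process(client);
    }
  }
}
{noformat}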

> CloudSolrClient with _route_ shouldn't require collection param to disambig 
> cores
> -
>
> Key: SOLR-9063
> URL: https://issues.apache.org/jira/browse/SOLR-9063
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud, SolrJ
>Affects Versions: 4.10.4
>Reporter: David Smiley
>Assignee: David Smiley
> Attachments: SOLR_9063.patch
>
>
> CloudSolrClient uses {{\_route\_}} to know where to send a request.  It sorta 
> works -- it'll go to an appropriate _node_.  But it will only go to the 
> correct core on that node if the {{collection}} parameter is explicitly 
> added.  In other words, it ignores the default collection configured on 
> CloudSolrClient.  It also seems to ignore the "collection" parameter to the 
> protected method sendRequest for this purpose.  As I write this, see line 
> 1139 on master.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8972) Add GraphHandler and GraphMLResponseWriter to support graph visualizations

2016-05-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273031#comment-15273031
 ] 

ASF subversion and git services commented on SOLR-8972:
---

Commit 02eef8dffac0d9f3cda86bb3834a6cc769962ffb in lucene-solr's branch 
refs/heads/master from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=02eef8d ]

SOLR-8972: Update CHANGES.txt


> Add GraphHandler and GraphMLResponseWriter to support graph visualizations
> --
>
> Key: SOLR-8972
> URL: https://issues.apache.org/jira/browse/SOLR-8972
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.1
>
> Attachments: GraphHandler.java, GraphMLResponseWriter.java, 
> SOLR-8972.patch, SOLR-8972.patch, SOLR-8972.patch
>
>
> SOLR-8925 is shaping up nicely. It would be great if Solr could support 
> outputting graphs in GraphML. This will allow users to visualize their graphs 
> in a number of graph visualization tools (NodeXL, Gephi, Tulip etc...). This 
> ticket will create a new Graph handler which will take a Streaming Expression 
> graph traversal and output GraphML. A new GraphMLResponseWriter will handle 
> the GraphML formatting. In future releases we can consider supporting other 
> graph formats.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8297) Allow join query over 2 sharded collections: enhance functionality and exception handling

2016-05-05 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273030#comment-15273030
 ] 

Mikhail Khludnev commented on SOLR-8297:


Did you use SolrCloud just to replicate a single-shard collection across a few 
boxes, but then hit one of these nodes with HttpSolrClient? 

> Allow join query over 2 sharded collections: enhance functionality and 
> exception handling
> -
>
> Key: SOLR-8297
> URL: https://issues.apache.org/jira/browse/SOLR-8297
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Paul Blanchaert
>
> Enhancement based on SOLR-4905. New Jira issue raised as suggested by Mikhail 
> Khludnev.
> A) exception handling:
> The exception "SolrCloud join: multiple shards not yet supported" thrown in 
> the function findLocalReplicaForFromIndex of JoinQParserPlugin is not 
> triggered correctly: In my use-case, I've a join on a facet.query and when my 
> results are only found in 1 shard and the facet.query with the join is 
> querying the last replica of the last slice, then the exception is not thrown.
> I believe it's better to verify the nr of slices when we want to verify the  
> "multiple shards not yet supported" exception (so exception is thrown when 
> zkController.getClusterState().getSlices(fromIndex).size()>1).
> B) functional enhancement:
> I would expect that there is no problem to perform a cross-core join over 
> sharded collections when the following conditions are met:
> 1) both collections are sharded with the same replicationFactor and numShards
> 2) router.field of the collections is set to the same "key-field" (collection 
> of "fromindex" has router.field = "from" field and collection joined to has 
> router.field = "to" field)
> The router.field setup ensures that documents with the same "key-field" are 
> routed to the same node. 
> So the combination based on the "key-field" should always be available within 
> the same node.
> From a user perspective, I believe these assumptions represent a "normal" 
> use-case for the cross-core join in SolrCloud.
> Hope this helps
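
To make the intended usage concrete, a hedged SolrJ sketch of the kind of 
cross-collection join the enhancement is about; the collection, field, and filter 
names are invented, and the collections are assumed co-sharded on the same 
router.field as described above:

{noformat}
// Hedged sketch: a join from "fromColl" onto "toColl", both placeholders.
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class CrossCollectionJoin {
  public static void main(String[] args) throws Exception {
    try (CloudSolrClient client = new CloudSolrClient("zkhost:2181")) {
      SolrQuery q = new SolrQuery("*:*");
      // Join from fromColl.from_key onto toColl.to_key, filtering the "from" side.
      q.addFilterQuery("{!join from=from_key to=to_key fromIndex=fromColl}color:red");
      QueryResponse rsp = client.query("toColl", q);
      System.out.println("hits: " + rsp.getResults().getNumFound());
    }
  }
}
{noformat}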



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7241) Improve performance of geo3d for polygons with very large numbers of points

2016-05-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273023#comment-15273023
 ] 

ASF subversion and git services commented on LUCENE-7241:
-

Commit 3a6587708fbfb73529d7b72c491302f1616f4880 in lucene-solr's branch 
refs/heads/branch_6x from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3a65877 ]

LUCENE-7241: Get rid of allocation for vector that we don't need.


> Improve performance of geo3d for polygons with very large numbers of points
> ---
>
> Key: LUCENE-7241
> URL: https://issues.apache.org/jira/browse/LUCENE-7241
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial3d
>Affects Versions: master
>Reporter: Karl Wright
>Assignee: Karl Wright
> Fix For: master, 6.x
>
> Attachments: LUCENE-7241.patch, LUCENE-7241.patch, LUCENE-7241.patch, 
> LUCENE-7241.patch, LUCENE-7241.patch
>
>
> This ticket corresponds to LUCENE-7239, except it's for geo3d polygons.
> The trick here is to organize edges by some criteria, e.g. z value range, and 
> use that to avoid needing to go through all edges and/or tile large irregular 
> polygons.  Then we use the ability to quickly determine intersections to 
> figure out whether a point is within the polygon, or not.
> The current way geo3d polygons are constructed involves finding a single 
> point, or "pole", which all polygon points circle.  This point is known to be 
> either "in" or "out" based on the direction of the points.  So we have one 
> place of "truth" on the globe that is known at polygon setup time.
> If edges are organized by z value, where the z values for an edge are 
> computed by the standard way of computing bounds for a plane, then we can 
> readily organize edges into a tree structure such that it is easy to find all 
> edges we need to check for a given z value.  Then, we merely need to compute 
> how many intersections to consider as we navigate from the "truth" point to 
> the point being tested.  In practice, this means both having a tree that is 
> organized by z, and a tree organized by (x,y), since we need to navigate in 
> both directions.  But then we can cheaply count the number of intersections, 
> and once we do that, we know whether our point is "in" or "out".
> The other performance improvement we need is whether a given plane intersects 
> the polygon within provided bounds.  This can be done using the same two 
> trees (z and (x,y)), by virtue of picking which tree to use based on the 
> plane's minimum bounds in z or (x,y).  And, in practice, we might well use 
> three trees: one in x, one in y, and one in z, which would mean we didn't 
> have to compute longitudes ever.
> An implementation like this trades off the cost of finding point membership 
> in near O\(log\(n)) time vs. the extra expense per step of finding that 
> membership.  Setup of the query is O\(n) in this scheme, rather than O\(n^2) 
> in the current implementation, but once again each individual step is more 
> expensive.  Therefore I would expect we'd want to use the current 
> implementation for simpler polygons and this sort of implementation for 
> tougher polygons.  Choosing which to use is a topic for another ticket.
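
A minimal sketch of the crossing-count idea described above -- an illustration only, 
not geo3d's actual classes; the Edge, ZTree, and PathIntersector types are invented 
stand-ins for the z-organized edge tree and the plane-intersection machinery:

{noformat}
// Illustrative sketch with invented types; the real geo3d code differs.
import java.util.List;

class Edge {
  double minZ, maxZ;   // z bounds of the edge's plane, per the description above
}

interface ZTree {
  // Edges whose z range overlaps [zLow, zHigh]; a tree keyed by z keeps this cheap.
  List<Edge> overlapping(double zLow, double zHigh);
}

interface PathIntersector {
  // Number of times the pole-to-test-point path crosses this edge.
  int crossings(Edge e);
}

class CrossingCounter {
  static boolean isInside(boolean poleIsInside, double poleZ, double testZ,
                          ZTree edgesByZ, PathIntersector intersector) {
    int crossings = 0;
    // Only edges whose z range overlaps the travel span can be crossed, which is
    // what avoids scanning every edge for each membership check.
    for (Edge e : edgesByZ.overlapping(Math.min(poleZ, testZ), Math.max(poleZ, testZ))) {
      crossings += intersector.crossings(e);
    }
    // An even crossing count keeps the pole's known in/out state; an odd count flips it.
    return (crossings % 2 == 0) == poleIsInside;
  }
}
{noformat}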



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7241) Improve performance of geo3d for polygons with very large numbers of points

2016-05-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273021#comment-15273021
 ] 

ASF subversion and git services commented on LUCENE-7241:
-

Commit 6c6667e60e87ca2bec85df859975912009672476 in lucene-solr's branch 
refs/heads/master from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6c6667e ]

LUCENE-7241: Get rid of allocation for vector that we don't need.


> Improve performance of geo3d for polygons with very large numbers of points
> ---
>
> Key: LUCENE-7241
> URL: https://issues.apache.org/jira/browse/LUCENE-7241
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial3d
>Affects Versions: master
>Reporter: Karl Wright
>Assignee: Karl Wright
> Fix For: master, 6.x
>
> Attachments: LUCENE-7241.patch, LUCENE-7241.patch, LUCENE-7241.patch, 
> LUCENE-7241.patch, LUCENE-7241.patch
>
>
> This ticket corresponds to LUCENE-7239, except it's for geo3d polygons.
> The trick here is to organize edges by some criteria, e.g. z value range, and 
> use that to avoid needing to go through all edges and/or tile large irregular 
> polygons.  Then we use the ability to quickly determine intersections to 
> figure out whether a point is within the polygon, or not.
> The current way geo3d polygons are constructed involves finding a single 
> point, or "pole", which all polygon points circle.  This point is known to be 
> either "in" or "out" based on the direction of the points.  So we have one 
> place of "truth" on the globe that is known at polygon setup time.
> If edges are organized by z value, where the z values for an edge are 
> computed by the standard way of computing bounds for a plane, then we can 
> readily organize edges into a tree structure such that it is easy to find all 
> edges we need to check for a given z value.  Then, we merely need to compute 
> how many intersections to consider as we navigate from the "truth" point to 
> the point being tested.  In practice, this means both having a tree that is 
> organized by z, and a tree organized by (x,y), since we need to navigate in 
> both directions.  But then we can cheaply count the number of intersections, 
> and once we do that, we know whether our point is "in" or "out".
> The other performance improvement we need is whether a given plane intersects 
> the polygon within provided bounds.  This can be done using the same two 
> trees (z and (x,y)), by virtue of picking which tree to use based on the 
> plane's minimum bounds in z or (x,y).  And, in practice, we might well use 
> three trees: one in x, one in y, and one in z, which would mean we didn't 
> have to compute longitudes ever.
> An implementation like this trades off the cost of finding point membership 
> in near O\(log\(n)) time vs. the extra expense per step of finding that 
> membership.  Setup of the query is O\(n) in this scheme, rather than O\(n^2) 
> in the current implementation, but once again each individual step is more 
> expensive.  Therefore I would expect we'd want to use the current 
> implementation for simpler polygons and this sort of implementation for 
> tougher polygons.  Choosing which to use is a topic for another ticket.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8972) Add GraphHandler and GraphMLResponseWriter to support graph visualizations

2016-05-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273011#comment-15273011
 ] 

ASF subversion and git services commented on SOLR-8972:
---

Commit be1cb9a1cde4dd426305f22620734d018f21dd82 in lucene-solr's branch 
refs/heads/master from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=be1cb9a ]

SOLR-8972: Add GraphHandler and GraphMLResponseWriter to support graph 
visualizations


> Add GraphHandler and GraphMLResponseWriter to support graph visualizations
> --
>
> Key: SOLR-8972
> URL: https://issues.apache.org/jira/browse/SOLR-8972
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.1
>
> Attachments: GraphHandler.java, GraphMLResponseWriter.java, 
> SOLR-8972.patch, SOLR-8972.patch, SOLR-8972.patch
>
>
> SOLR-8925 is shaping up nicely. It would be great if Solr could support 
> outputting graphs in GraphML. This will allow users to visualize their graphs 
> in a number of graph visualization tools (NodeXL, Gephi, Tulip etc...). This 
> ticket will create a new Graph handler which will take a Streaming Expression 
> graph traversal and output GraphML. A new GraphMLResponseWriter will handle 
> the GraphML formatting. In future releases we can consider supporting other 
> graph formats.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_92) - Build # 5820 - Failure!

2016-05-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/5820/
Java: 32bit/jdk1.8.0_92 -client -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart

Error Message:
IOException occured when talking to server at: 
http://127.0.0.1:51195/solr/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:51195/solr/collection1
at 
__randomizedtesting.SeedInfo.seed([B5F54CB6F816554A:6D02885253CD9716]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:620)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:259)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:173)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:138)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:152)
at 
org.apache.solr.handler.TestReplicationHandler.index(TestReplicationHandler.java:176)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart(TestReplicationHandler.java:609)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[jira] [Commented] (LUCENE-7271) Cleanup jira's concept of 'master' and '6.0'

2016-05-05 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273007#comment-15273007
 ] 

Michael McCandless commented on LUCENE-7271:


Thank you [~hossman]!

> Cleanup jira's concept of 'master' and '6.0'
> 
>
> Key: LUCENE-7271
> URL: https://issues.apache.org/jira/browse/LUCENE-7271
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
>
> Jira's concept of "Fix Version: master" is currently screwed up, as noted & 
> discussed in this mailing list thread...
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201604.mbox/%3Calpine.DEB.2.11.1604131529140.15570@tray%3E
> The current best plan of attack (summary) is:
> * use Jira's "Merge Versions" feature to merge {{master}} into {{6.0}}
> * add a new {{master (7.0)}} to use moving forward
> * manually audit/fix the fixVersion of some clean up issues as needed.
> I'm using this issue to track this work.
> 
> Detailed Check list of planned steps:
> * S1: Generate a CSV report listing all resolved/closed Jira's with 
> 'fixVersion=master AND fixVersion=6.1'
> ** 
> https://issues.apache.org/jira/issues/?jql=project%20in%20%28LUCENE%2C%20SOLR%29%20AND%20status%20in%20%28Resolved%2C%20Closed%29%20AND%20fixVersion%20%3D%20master%20AND%20fixVersion%20%3D%206.1%20ORDER%20BY%20resolved%20DESC%2C%20key%20DESC
> *** currently about ~100 issues
> ** The operating assumption is that any non-resolved issues should have the 
> fixVersion set correctly if/when they are resolved in the future
> * S2: Use Jira's "Bulk Edit" feature to add comments *W/O SENDING EMAIL* to 
> every issue currently assocaited with fixVersion=6.0 or fixVersio=master
> ** The comments will include unique strings based on the the specific query 
> done, and will map the list back to this issue (ex 
> {{LUCENE-7271_20160503_master}} and {{LUCENE-7271_20160503_60}})
> ** These comments will serve as a backup plan making it possible to find all 
> issues affected (by merging jira's concepts of 'master' and '6.0') after the 
> fact if need be.
> ** Queries to use for bulk edits:
> *** master: 
> https://issues.apache.org/jira/issues/?jql=project%20in%20%28LUCENE%2C%20SOLR%29%20AND%20fixVersion%20%3D%20master%20ORDER%20BY%20key%20DESC
> *** 6.0: 
> https://issues.apache.org/jira/issues/?jql=project%20in%20%28LUCENE%2C%20SOLR%29%20AND%20fixVersion%20%3D%206.0%20ORDER%20BY%20key%20DESC
> * S3: Use Jira's "Merge Versions" feature to merge "master" into "6.0"
> ** This needs to be done distinctly for both LUCENE and SOLR
> * S4: Add a new "master (7.0)" version to Jira
> ** This needs to be done distinctly for both LUCENE and SOLR
> * S5: audit every issue in the CSV file from S1 above to determine if the 
> issue should get fixVersion="master (7.0)" *added* to it
> ** if it has distinct commits to both master & branch_6x then 
> fixVersion="master (7.0)" should be added
> ** if it only ever had commits to master (ie: before branch_6x was made) then 
> no action needed
> ** if there are no obvious commits in the issue comments, then sanity check 
> why it has any fixVersion at all (can't reproduce? dup? etc...)
> * S6: Audit CHANGES.txt & git commits and *replace* fixVersion=6.0 with 
> fixVersion="master (7.0)" on the handful of issues where appropraite:
> ** Look at the 7.0 section of lucene/CHANGES.txt & solr/CHANGES.txt for new 
> features
> *** currently only 1 jira mentioned in either files in 7.0 section
> ** Use {{git co releases/lucene-solr/6.0.0 && git cherry -v master | egrep 
> '^\+'}} to find changesets on master that were not included in 6.0
> *** currently ~40 commits
> ** before removing fixVersion=6.0 from any of these issues, sanity check if 
> this is a deprecation type situation (involving diff commits in both 6.0 and 
> master) in which case fixVersion="master (7.0)" should be _added_ in addition 
> to fixVersion=6.0



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9065) Migrate solrj tests from AbstractDistribZkTestBase to SolrCloudTestCase

2016-05-05 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273001#comment-15273001
 ] 

David Smiley commented on SOLR-9065:


I think there may be a misunderstanding of what Joel and I mean by a "shout 
out".  We don't mean a review of a patch (yes, that would then be RTC).  We 
mean a helpful and courteous notice to the dev list (not a comment on an 
issue).  This is very much like what Hoss is doing with the whole JIRA 
master/6.0 versioning issue.  Both this issue and that one have the potential 
for a wide impact, which is why we suggest a "shout out" first.  I'm relaxed; I 
hope you, Alan, and Joel feel relaxed too :-)

> Migrate solrj tests from AbstractDistribZkTestBase to SolrCloudTestCase
> ---
>
> Key: SOLR-9065
> URL: https://issues.apache.org/jira/browse/SOLR-9065
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.1, master
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-9065.patch
>
>
> AbstractDistribZkTestBase sets up collections using the legacy core-based 
> system, and does a lot of comparing things against control collections that 
> the SolrJ tests really don't require.  We should migrate these tests to using 
> SolrCloudTestCase instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7241) Improve performance of geo3d for polygons with very large numbers of points

2016-05-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272997#comment-15272997
 ] 

ASF subversion and git services commented on LUCENE-7241:
-

Commit 88f70ac2146f0c113dae3d375fc19da75f136ec3 in lucene-solr's branch 
refs/heads/branch_6x from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=88f70ac ]

LUCENE-7241: More performance improvements


> Improve performance of geo3d for polygons with very large numbers of points
> ---
>
> Key: LUCENE-7241
> URL: https://issues.apache.org/jira/browse/LUCENE-7241
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial3d
>Affects Versions: master
>Reporter: Karl Wright
>Assignee: Karl Wright
> Fix For: master, 6.x
>
> Attachments: LUCENE-7241.patch, LUCENE-7241.patch, LUCENE-7241.patch, 
> LUCENE-7241.patch, LUCENE-7241.patch
>
>
> This ticket corresponds to LUCENE-7239, except it's for geo3d polygons.
> The trick here is to organize edges by some criteria, e.g. z value range, and 
> use that to avoid needing to go through all edges and/or tile large irregular 
> polygons.  Then we use the ability to quickly determine intersections to 
> figure out whether a point is within the polygon, or not.
> The current way geo3d polygons are constructed involves finding a single 
> point, or "pole", which all polygon points circle.  This point is known to be 
> either "in" or "out" based on the direction of the points.  So we have one 
> place of "truth" on the globe that is known at polygon setup time.
> If edges are organized by z value, where the z values for an edge are 
> computed by the standard way of computing bounds for a plane, then we can 
> readily organize edges into a tree structure such that it is easy to find all 
> edges we need to check for a given z value.  Then, we merely need to compute 
> how many intersections to consider as we navigate from the "truth" point to 
> the point being tested.  In practice, this means both having a tree that is 
> organized by z, and a tree organized by (x,y), since we need to navigate in 
> both directions.  But then we can cheaply count the number of intersections, 
> and once we do that, we know whether our point is "in" or "out".
> The other performance improvement we need is whether a given plane intersects 
> the polygon within provided bounds.  This can be done using the same two 
> trees (z and (x,y)), by virtue of picking which tree to use based on the 
> plane's minimum bounds in z or (x,y).  And, in practice, we might well use 
> three trees: one in x, one in y, and one in z, which would mean we didn't 
> have to compute longitudes ever.
> An implementation like this trades off the cost of finding point membership 
> in near O\(log\(n)) time vs. the extra expense per step of finding that 
> membership.  Setup of the query is O\(n) in this scheme, rather than O\(n^2) 
> in the current implementation, but once again each individual step is more 
> expensive.  Therefore I would expect we'd want to use the current 
> implementation for simpler polygons and this sort of implementation for 
> tougher polygons.  Choosing which to use is a topic for another ticket.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7271) Cleanup jira's concept of 'master' and '6.0'

2016-05-05 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272996#comment-15272996
 ] 

Anshum Gupta commented on LUCENE-7271:
--

LGTM Hoss. There's one thing that I don't have clarity on though. 

bq. S5: if it only ever had commits to master (ie: before branch_6x was made) 
then no action needed
How would this ever be true, considering that the list you'd generate 
would have fixVersion as master AND 6.1?

I would like to help but I'm not sure if I'd be available as I'm traveling for 
Apache Big Data tomorrow. I'll try and sync up on IRC if I can help.

> Cleanup jira's concept of 'master' and '6.0'
> 
>
> Key: LUCENE-7271
> URL: https://issues.apache.org/jira/browse/LUCENE-7271
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
>
> Jira's concept of "Fix Version: master" is currently screwed up, as noted & 
> discussed in this mailing list thread...
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201604.mbox/%3Calpine.DEB.2.11.1604131529140.15570@tray%3E
> The current best plan of attack (summary) is:
> * use Jira's "Merge Versions" feature to merge {{master}} into {{6.0}}
> * add a new {{master (7.0)}} to use moving forward
> * manually audit/fix the fixVersion of some clean up issues as needed.
> I'm using this issue to track this work.
> 
> Detailed Check list of planned steps:
> * S1: Generate a CSV report listing all resolved/closed Jira's with 
> 'fixVersion=master AND fixVersion=6.1'
> ** 
> https://issues.apache.org/jira/issues/?jql=project%20in%20%28LUCENE%2C%20SOLR%29%20AND%20status%20in%20%28Resolved%2C%20Closed%29%20AND%20fixVersion%20%3D%20master%20AND%20fixVersion%20%3D%206.1%20ORDER%20BY%20resolved%20DESC%2C%20key%20DESC
> *** currently about ~100 issues
> ** The operating assumption is that any non-resolved issues should have the 
> fixVersion set correctly if/when they are resolved in the future
> * S2: Use Jira's "Bulk Edit" feature to add comments *W/O SENDING EMAIL* to 
> every issue currently assocaited with fixVersion=6.0 or fixVersio=master
> ** The comments will include unique strings based on the the specific query 
> done, and will map the list back to this issue (ex 
> {{LUCENE-7271_20160503_master}} and {{LUCENE-7271_20160503_60}})
> ** These comments will serve as a backup plan making it possible to find all 
> issues affected (by merging jira's concepts of 'master' and '6.0') after the 
> fact if need be.
> ** Queries to use for bulk edits:
> *** master: 
> https://issues.apache.org/jira/issues/?jql=project%20in%20%28LUCENE%2C%20SOLR%29%20AND%20fixVersion%20%3D%20master%20ORDER%20BY%20key%20DESC
> *** 6.0: 
> https://issues.apache.org/jira/issues/?jql=project%20in%20%28LUCENE%2C%20SOLR%29%20AND%20fixVersion%20%3D%206.0%20ORDER%20BY%20key%20DESC
> * S3: Use Jira's "Merge Versions" feature to merge "master" into "6.0"
> ** This needs to be done distinctly for both LUCENE and SOLR
> * S4: Add a new "master (7.0)" version to Jira
> ** This needs to be done distinctly for both LUCENE and SOLR
> * S5: audit every issue in the CSV file from S1 above to determine if the 
> issue should get fixVersion="master (7.0)" *added* to it
> ** if it has distinct commits to both master & branch_6x then 
> fixVersion="master (7.0)" should be added
> ** if it only ever had commits to master (ie: before branch_6x was made) then 
> no action needed
> ** if there are no obvious commits in the issue comments, then sanity check 
> why it has any fixVersion at all (can't reproduce? dup? etc...)
> * S6: Audit CHANGES.txt & git commits and *replace* fixVersion=6.0 with 
> fixVersion="master (7.0)" on the handful of issues where appropraite:
> ** Look at the 7.0 section of lucene/CHANGES.txt & solr/CHANGES.txt for new 
> features
> *** currently only 1 jira mentioned in either files in 7.0 section
> ** Use {{git co releases/lucene-solr/6.0.0 && git cherry -v master | egrep 
> '^\+'}} to find changesets on master that were not included in 6.0
> *** currently ~40 commits
> ** before removing fixVersion=6.0 from any of these issues, sanity check if 
> this is a deprecation type situation (involving diff commits in both 6.0 and 
> master) in which case fixVersion="master (7.0)" should be _added_ in addition 
> to fixVersion=6.0



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7241) Improve performance of geo3d for polygons with very large numbers of points

2016-05-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272994#comment-15272994
 ] 

ASF subversion and git services commented on LUCENE-7241:
-

Commit 7d4f387384686fd292e2d0da7bbb78791f4731bd in lucene-solr's branch 
refs/heads/master from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7d4f387 ]

LUCENE-7241: More performance improvements


> Improve performance of geo3d for polygons with very large numbers of points
> ---
>
> Key: LUCENE-7241
> URL: https://issues.apache.org/jira/browse/LUCENE-7241
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial3d
>Affects Versions: master
>Reporter: Karl Wright
>Assignee: Karl Wright
> Fix For: master, 6.x
>
> Attachments: LUCENE-7241.patch, LUCENE-7241.patch, LUCENE-7241.patch, 
> LUCENE-7241.patch, LUCENE-7241.patch
>
>
> This ticket corresponds to LUCENE-7239, except it's for geo3d polygons.
> The trick here is to organize edges by some criteria, e.g. z value range, and 
> use that to avoid needing to go through all edges and/or tile large irregular 
> polygons.  Then we use the ability to quickly determine intersections to 
> figure out whether a point is within the polygon, or not.
> The current way geo3d polygons are constructed involves finding a single 
> point, or "pole", which all polygon points circle.  This point is known to be 
> either "in" or "out" based on the direction of the points.  So we have one 
> place of "truth" on the globe that is known at polygon setup time.
> If edges are organized by z value, where the z values for an edge are 
> computed by the standard way of computing bounds for a plane, then we can 
> readily organize edges into a tree structure such that it is easy to find all 
> edges we need to check for a given z value.  Then, we merely need to compute 
> how many intersections to consider as we navigate from the "truth" point to 
> the point being tested.  In practice, this means both having a tree that is 
> organized by z, and a tree organized by (x,y), since we need to navigate in 
> both directions.  But then we can cheaply count the number of intersections, 
> and once we do that, we know whether our point is "in" or "out".
> The other performance improvement we need is whether a given plane intersects 
> the polygon within provided bounds.  This can be done using the same two 
> trees (z and (x,y)), by virtue of picking which tree to use based on the 
> plane's minimum bounds in z or (x,y).  And, in practice, we might well use 
> three trees: one in x, one in y, and one in z, which would mean we didn't 
> have to compute longitudes ever.
> An implementation like this trades off the cost of finding point membership 
> in near O\(log\(n)) time vs. the extra expense per step of finding that 
> membership.  Setup of the query is O\(n) in this scheme, rather than O\(n^2) 
> in the current implementation, but once again each individual step is more 
> expensive.  Therefore I would expect we'd want to use the current 
> implementation for simpler polygons and this sort of implementation for 
> tougher polygons.  Choosing which to use is a topic for another ticket.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+116) - Build # 16664 - Still Failing!

2016-05-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16664/
Java: 64bit/jdk-9-ea+116 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart

Error Message:
IOException occured when talking to server at: 
http://127.0.0.1:37882/solr/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:37882/solr/collection1
at 
__randomizedtesting.SeedInfo.seed([9CEC594A55BE3A47:441B9DAEFE65F81B]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:620)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:259)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:173)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:138)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:152)
at 
org.apache.solr.handler.TestReplicationHandler.index(TestReplicationHandler.java:176)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart(TestReplicationHandler.java:609)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:531)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Comment Edited] (SOLR-9065) Migrate solrj tests from AbstractDistribZkTestBase to SolrCloudTestCase

2016-05-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272968#comment-15272968
 ] 

Mark Miller edited comment on SOLR-9065 at 5/5/16 8:10 PM:
---

bq. just a "shout out" 

Follow the mailing list. We are not a *review then commit* project. The result 
that happened is how this should work. Then 9 out of 10 times we move fast, and 
once we make an adjustment to the commit.

Everyone just needs to relax. No one owns any area of the code, no one has to 
be checked with before changes. It's on anyone who cares to follow email and 
JIRA. This did not appear to be a controversial change, a patch went up, 
hossman +1'd.

This is how it's all supposed to work.


was (Author: markrmil...@gmail.com):
bq. just a "shout out" 

Follow the mailing list. We are not a commit then review project. The result 
that happened is how this should work. Then 9 out of 10 times we move fast, and 
once we make an adjustment to the commit.

Everyone just needs to relax. No one owns any area of the code, no one has to 
be checked with before changes. It's on anyone who cares to follow email and 
JIRA. This did not appear to be a controversial change, a patch went up, 
hossman +1'd.

This is how it's all supposed to work.

> Migrate solrj tests from AbstractDistribZkTestBase to SolrCloudTestCase
> ---
>
> Key: SOLR-9065
> URL: https://issues.apache.org/jira/browse/SOLR-9065
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.1, master
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-9065.patch
>
>
> AbstractDistribZkTestBase sets up collections using the legacy core-based 
> system, and does a lot of comparing things against control collections that 
> the SolrJ tests really don't require.  We should migrate these tests to using 
> SolrCloudTestCase instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9065) Migrate solrj tests from AbstractDistribZkTestBase to SolrCloudTestCase

2016-05-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272968#comment-15272968
 ] 

Mark Miller edited comment on SOLR-9065 at 5/5/16 8:08 PM:
---

bq. just a "shout out" 

Follow the mailing list. We are not a commit then review project. The result 
that happened is how this should work. Then 9 out of 10 times we move fast, and 
once we make an adjustment to the commit.

Everyone just needs to relax. No one owns any area of the code, no one has to 
be checked with before changes. It's on anyone who cares to follow email and 
JIRA. This did not appear to be a controversial change, a patch went up, 
hossman +1'd.

This is how it's all supposed to work.


was (Author: markrmil...@gmail.com):
bq. just a "shout out" 

Following the mailing list. We are not a commit then review project. The result 
that happened is how this should work. Then 9 out of 10 times we move fast, and 
once we make an adjustment to the commit.

Everyone just needs to relax. No one owns any area of the code, no one has to 
be checked with before changes. It's on anyone who cares to follow email and 
JIRA. This did not appear to be a controversial change, a patch went up, 
hossman +1'd.

This is how it's all supposed to work.

> Migrate solrj tests from AbstractDistribZkTestBase to SolrCloudTestCase
> ---
>
> Key: SOLR-9065
> URL: https://issues.apache.org/jira/browse/SOLR-9065
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.1, master
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-9065.patch
>
>
> AbstractDistribZkTestBase sets up collections using the legacy core-based 
> system, and does a lot of comparing things against control collections that 
> the SolrJ tests really don't require.  We should migrate these tests to using 
> SolrCloudTestCase instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9065) Migrate solrj tests from AbstractDistribZkTestBase to SolrCloudTestCase

2016-05-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272968#comment-15272968
 ] 

Mark Miller commented on SOLR-9065:
---

bq. just a "shout out" 

Following the mailing list. We are not a commit then review project. The result 
that happened is how this should work. Then 9 out of 10 times we move fast, and 
once we make an adjustment to the commit.

Everyone just needs to relax. No one owns any area of the code, no one has to 
be checked with before changes. It's on anyone who cares to follow email and 
JIRA. This did not appear to be a controversial change, a patch went up, 
hossman +1'd.

This is how it's all supposed to work.

> Migrate solrj tests from AbstractDistribZkTestBase to SolrCloudTestCase
> ---
>
> Key: SOLR-9065
> URL: https://issues.apache.org/jira/browse/SOLR-9065
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.1, master
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-9065.patch
>
>
> AbstractDistribZkTestBase sets up collections using the legacy core-based 
> system, and does a lot of comparing things against control collections that 
> the SolrJ tests really don't require.  We should migrate these tests to using 
> SolrCloudTestCase instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7117) AutoAddReplicas should have a cluster wide property for controlling number of cores hosted on each node

2016-05-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272960#comment-15272960
 ] 

Mark Miller commented on SOLR-7117:
---

Hey [~varunthacker], think we can commit this?

> AutoAddReplicas should have a cluster wide property for controlling number of 
> cores hosted on each node
> ---
>
> Key: SOLR-7117
> URL: https://issues.apache.org/jira/browse/SOLR-7117
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Minor
> Fix For: 5.2, master
>
> Attachments: SOLR-7117.patch, SOLR-7117.patch, SOLR-7117.patch
>
>
> Currently when finding the best node to host the failed replicas, we respect 
> the maxShardsPerNode property. This is not an ideal solution as it's a per 
> collection property and we need a cluster wide property. Also using 
> maxShardsPerNode can lead to unequal distribution of replicas across nodes.
> We should just let users use the CLUSTERPROP API to set the max number of 
> cores to be hosted on each node and use that value while picking the node the 
> replica will be hosted on.
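
To make the suggestion concrete, a hedged SolrJ sketch of setting a cluster-wide 
property through the CLUSTERPROP collections API; the property name 
{{maxCoresPerNode}} is a guess at what such a property might be called, not 
necessarily what the patch uses:

{noformat}
// Hedged sketch: setting a cluster-wide property via the CLUSTERPROP API.
// The property name below is an assumption for illustration only.
import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.GenericSolrRequest;
import org.apache.solr.common.params.ModifiableSolrParams;

public class ClusterPropExample {
  public static void main(String[] args) throws Exception {
    try (CloudSolrClient client = new CloudSolrClient("zkhost:2181")) {
      ModifiableSolrParams p = new ModifiableSolrParams();
      p.set("action", "CLUSTERPROP");
      p.set("name", "maxCoresPerNode");  // hypothetical property name
      p.set("val", "8");
      client.request(new GenericSolrRequest(SolrRequest.METHOD.GET, "/admin/collections", p));
    }
  }
}
{noformat}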



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8467) CloudSolrStream and FacetStream should take a SolrParams object rather than a Map<String, String> to allow more complex Solr queries to be specified

2016-05-05 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272955#comment-15272955
 ] 

Joel Bernstein commented on SOLR-8467:
--

So, the tests are going to have to be reworked. I said I'd take this on if this 
ticket fell far behind master. At the time I didn't realize how far behind it 
would get so quickly. I should have time to review and rework the tests for 
this ticket over the next week.

> CloudSolrStream and FacetStream should take a SolrParams object rather than a 
> Map<String, String> to allow more complex Solr queries to be specified
> 
>
> Key: SOLR-8467
> URL: https://issues.apache.org/jira/browse/SOLR-8467
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-8467.patch, SOLR-8467.patch, SOLR-8467.patch, 
> SOLR-8467.patch, SOLR-8647.patch, SOLR-8647.patch
>
>
> Currently, it's impossible to, say, specify multiple "fq" clauses when using 
> Streaming Aggregation due to the fact that the c'tors take a Map of params.
> Opening to discuss whether we should
> 1> deprecate the current c'tor
> and/or
> 2> add a c'tor that takes a SolrParams object instead.
> and/or
> 3> ???
> I don't see a clean way to go from a Map<String, String> to a 
> (Modifiable)SolrParams, so existing code would need a significant change. I 
> hacked together a PoC, just to see if I could make CloudSolrStream take a 
> ModifiableSolrParams object instead and it passes tests, but it's so bad that 
> I'm not going to even post it. There's _got_ to be a better way to do this, 
> but at least it's possible
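
For what it's worth, a small hedged sketch -- not the attached patch -- of the kind of 
Map-to-ModifiableSolrParams bridge a deprecation path would need; it also shows where 
the single-valued Map falls short for multiple fq clauses:

{noformat}
// Hedged sketch, not the SOLR-8467 patch: bridging legacy Map-based params to
// ModifiableSolrParams. A Map<String, String> carries one value per key, which is
// exactly why multiple fq clauses need a SolrParams-based c'tor.
import java.util.HashMap;
import java.util.Map;
import org.apache.solr.common.params.ModifiableSolrParams;

public class ParamsAdapter {
  static ModifiableSolrParams fromMap(Map<String, String> map) {
    ModifiableSolrParams params = new ModifiableSolrParams();
    for (Map.Entry<String, String> e : map.entrySet()) {
      params.set(e.getKey(), e.getValue());   // single value per key, by construction
    }
    return params;
  }

  public static void main(String[] args) {
    Map<String, String> legacy = new HashMap<>();
    legacy.put("q", "*:*");
    legacy.put("fq", "color:red");
    ModifiableSolrParams params = fromMap(legacy);
    params.add("fq", "size:large");  // possible with SolrParams, impossible with the Map
    System.out.println(params);
  }
}
{noformat}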



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.5-Java7 - Build # 25 - Still Failing

2016-05-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.5-Java7/25/

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestSolrConfigHandlerCloud

Error Message:
3 threads leaked from SUITE scope at 
org.apache.solr.handler.TestSolrConfigHandlerCloud: 1) Thread[id=14171, 
name=Thread-5460, state=TIMED_WAITING, group=TGRP-TestSolrConfigHandlerCloud]   
  at java.lang.Thread.sleep(Native Method) at 
org.apache.solr.cloud.ZkSolrResourceLoader.openResource(ZkSolrResourceLoader.java:101)
 at 
org.apache.solr.core.SolrResourceLoader.openSchema(SolrResourceLoader.java:355) 
at 
org.apache.solr.schema.IndexSchemaFactory.create(IndexSchemaFactory.java:48)
 at 
org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:70)
 at 
org.apache.solr.core.ConfigSetService.createIndexSchema(ConfigSetService.java:108)
 at 
org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:79)   
  at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:915) 
at org.apache.solr.core.SolrCore$11.run(SolrCore.java:2603) at 
org.apache.solr.cloud.ZkController$5.run(ZkController.java:2479)2) 
Thread[id=13761, name=Thread-5403, state=TIMED_WAITING, 
group=TGRP-TestSolrConfigHandlerCloud] at java.lang.Thread.sleep(Native 
Method) at 
org.apache.solr.cloud.ZkSolrResourceLoader.openResource(ZkSolrResourceLoader.java:101)
 at 
org.apache.solr.core.SolrResourceLoader.openSchema(SolrResourceLoader.java:355) 
at 
org.apache.solr.schema.IndexSchemaFactory.create(IndexSchemaFactory.java:48)
 at 
org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:70)
 at 
org.apache.solr.core.ConfigSetService.createIndexSchema(ConfigSetService.java:108)
 at 
org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:79)   
  at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:915) 
at org.apache.solr.core.SolrCore$11.run(SolrCore.java:2603) at 
org.apache.solr.cloud.ZkController$5.run(ZkController.java:2479)3) 
Thread[id=14567, name=Thread-5505, state=TIMED_WAITING, 
group=TGRP-TestSolrConfigHandlerCloud] at java.lang.Thread.sleep(Native 
Method) at 
org.apache.solr.cloud.ZkSolrResourceLoader.openResource(ZkSolrResourceLoader.java:101)
 at 
org.apache.solr.core.SolrResourceLoader.openSchema(SolrResourceLoader.java:355) 
at 
org.apache.solr.schema.IndexSchemaFactory.create(IndexSchemaFactory.java:48)
 at 
org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:70)
 at 
org.apache.solr.core.ConfigSetService.createIndexSchema(ConfigSetService.java:108)
 at 
org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:79)   
  at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:915) 
at org.apache.solr.core.SolrCore$11.run(SolrCore.java:2603) at 
org.apache.solr.cloud.ZkController$5.run(ZkController.java:2479)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 3 threads leaked from SUITE 
scope at org.apache.solr.handler.TestSolrConfigHandlerCloud: 
   1) Thread[id=14171, name=Thread-5460, state=TIMED_WAITING, 
group=TGRP-TestSolrConfigHandlerCloud]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.solr.cloud.ZkSolrResourceLoader.openResource(ZkSolrResourceLoader.java:101)
at 
org.apache.solr.core.SolrResourceLoader.openSchema(SolrResourceLoader.java:355)
at 
org.apache.solr.schema.IndexSchemaFactory.create(IndexSchemaFactory.java:48)
at 
org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:70)
at 
org.apache.solr.core.ConfigSetService.createIndexSchema(ConfigSetService.java:108)
at 
org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:79)
at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:915)
at org.apache.solr.core.SolrCore$11.run(SolrCore.java:2603)
at org.apache.solr.cloud.ZkController$5.run(ZkController.java:2479)
   2) Thread[id=13761, name=Thread-5403, state=TIMED_WAITING, 
group=TGRP-TestSolrConfigHandlerCloud]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.solr.cloud.ZkSolrResourceLoader.openResource(ZkSolrResourceLoader.java:101)
at 
org.apache.solr.core.SolrResourceLoader.openSchema(SolrResourceLoader.java:355)
at 
org.apache.solr.schema.IndexSchemaFactory.create(IndexSchemaFactory.java:48)
at 
org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:70)
at 
org.apache.solr.core.ConfigSetService.createIndexSchema(ConfigSetService.java:108)
at 
org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:79)
at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:915)
at 

[jira] [Resolved] (SOLR-9065) Migrate solrj tests from AbstractDistribZkTestBase to SolrCloudTestCase

2016-05-05 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-9065.
--
Resolution: Implemented

> Migrate solrj tests from AbstractDistribZkTestBase to SolrCloudTestCase
> ---
>
> Key: SOLR-9065
> URL: https://issues.apache.org/jira/browse/SOLR-9065
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.1, master
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-9065.patch
>
>
> AbstractDistribZkTestBase sets up collections using the legacy core-based 
> system, and does a lot of comparing things against control collections that 
> the SolrJ tests really don't require.  We should migrate these tests to using 
> SolrCloudTestCase instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9065) Migrate solrj tests from AbstractDistribZkTestBase to SolrCloudTestCase

2016-05-05 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272930#comment-15272930
 ] 

Joel Bernstein commented on SOLR-9065:
--

There are no hard feelings. I think the patch looks good. I've already 
re-worked what I needed to in the GraphExpressionTest. 

> Migrate solrj tests from AbstractDistribZkTestBase to SolrCloudTestCase
> ---
>
> Key: SOLR-9065
> URL: https://issues.apache.org/jira/browse/SOLR-9065
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.1, master
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-9065.patch
>
>
> AbstractDistribZkTestBase sets up collections using the legacy core-based 
> system, and does a lot of comparing things against control collections that 
> the SolrJ tests really don't require.  We should migrate these tests to using 
> SolrCloudTestCase instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8297) Allow join query over 2 sharded collections: enhance functionality and exception handling

2016-05-05 Thread Shikha Somani (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272929#comment-15272929
 ] 

Shikha Somani commented on SOLR-8297:
-

In Solr 4.x join queries, a single shard was specified for both the "from" and 
"to" collections.

The reason for this is:
In 4.x the join query was performed using HttpSolrClient against a single node at 
a time. Because HttpSolrClient was used, the exact core name had to be specified 
for both the "from" and "to" collections.

> Allow join query over 2 sharded collections: enhance functionality and 
> exception handling
> -
>
> Key: SOLR-8297
> URL: https://issues.apache.org/jira/browse/SOLR-8297
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Paul Blanchaert
>
> Enhancement based on SOLR-4905. New Jira issue raised as suggested by Mikhail 
> Khludnev.
> A) exception handling:
> The exception "SolrCloud join: multiple shards not yet supported" thrown in 
> the function findLocalReplicaForFromIndex of JoinQParserPlugin is not 
> triggered correctly: In my use-case, I've a join on a facet.query and when my 
> results are only found in 1 shard and the facet.query with the join is 
> querying the last replica of the last slice, then the exception is not thrown.
> I believe it's better to verify the nr of slices when we want to verify the  
> "multiple shards not yet supported" exception (so exception is thrown when 
> zkController.getClusterState().getSlices(fromIndex).size()>1).
> B) functional enhancement:
> I would expect that there is no problem to perform a cross-core join over 
> sharded collections when the following conditions are met:
> 1) both collections are sharded with the same replicationFactor and numShards
> 2) router.field of the collections is set to the same "key-field" (collection 
> of "fromindex" has router.field = "from" field and collection joined to has 
> router.field = "to" field)
> The router.field setup ensures that documents with the same "key-field" are 
> routed to the same node. 
> So the combination based on the "key-field" should always be available within 
> the same node.
> From a user perspective, I believe these assumptions seem to be a "normal" 
> use-case in the cross-core join in SolrCloud.
> Hope this helps
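
A hedged SolrJ sketch of the layout described in B) above (collection, config 
set, and field names are invented; the factory-method style shown is the newer 
SolrJ API, older versions build CollectionAdminRequest.Create directly and call 
the same setters): two collections created with identical numShards and 
replicationFactor, each with router.field pointed at its join key so matching 
key values hash to the same shard:

{code:java}
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;

public class CoShardedJoinLayoutSketch {
  public static void main(String[] args) throws Exception {
    try (CloudSolrClient client = new CloudSolrClient("localhost:9983")) {
      // "from" side: route documents on the field used as the join's from= key
      CollectionAdminRequest.Create from =
          CollectionAdminRequest.createCollection("people", "conf1", 2, 1);
      from.setRouterField("from_key");
      from.process(client);

      // "to" side: same numShards/replicationFactor, routed on the to= key
      CollectionAdminRequest.Create to =
          CollectionAdminRequest.createCollection("orders", "conf1", 2, 1);
      to.setRouterField("to_key");
      to.process(client);
    }
  }
}
{code}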



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9076) Update to Hadoop 2.7.2

2016-05-05 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-9076:
--
Attachment: SOLR-9076.patch

> Update to Hadoop 2.7.2
> --
>
> Key: SOLR-9076
> URL: https://issues.apache.org/jira/browse/SOLR-9076
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-9076.patch, SOLR-9076.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7271) Cleanup jira's concept of 'master' and '6.0'

2016-05-05 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272913#comment-15272913
 ] 

Hoss Man commented on LUCENE-7271:
--

I haven't seen any objections to this plan, and i haven't been able to think of 
any flaws or possible improvements.

My plan is to start working through these steps tomorrow (Friday) morning ~9AM 
my time (~1600 UTC).

Steps S1-S4 will need to be done carefully and in a single block to reduce the 
risk of missing issues edited between steps (but obviously skimming mail for 
issues people modify during that window can be done after the fact).

Steps S5 & S6 should be done ASAP after that to reduce confusion as people 
read/edit jiras, but don't need to be rushed (ie: i'll go to lunch at some 
point) and can be divided up among multiple people if other folks want to 
volunteer.

I'll track progress with comments here, and attach the reports i generate from 
S1, S5, and S6 to this issue as i go.

I'll be on freenode's #lucene IRC channel the whole time if people have concerns 
or want to coordinate on helping out with S5 & S6.

> Cleanup jira's concept of 'master' and '6.0'
> 
>
> Key: LUCENE-7271
> URL: https://issues.apache.org/jira/browse/LUCENE-7271
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
>
> Jira's concept of "Fix Version: master" is currently screwed up, as noted & 
> discussed in this mailing list thread...
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201604.mbox/%3Calpine.DEB.2.11.1604131529140.15570@tray%3E
> The current best plan of attack (summary) is:
> * use Jira's "Merge Versions" feature to merge {{master}} into {{6.0}}
> * add a new {{master (7.0)}} to use moving forward
> * manually audit/fix the fixVersion of some clean up issues as needed.
> I'm using this issue to track this work.
> 
> Detailed Check list of planned steps:
> * S1: Generate a CSV report listing all resolved/closed Jira's with 
> 'fixVersion=master AND fixVersion=6.1'
> ** 
> https://issues.apache.org/jira/issues/?jql=project%20in%20%28LUCENE%2C%20SOLR%29%20AND%20status%20in%20%28Resolved%2C%20Closed%29%20AND%20fixVersion%20%3D%20master%20AND%20fixVersion%20%3D%206.1%20ORDER%20BY%20resolved%20DESC%2C%20key%20DESC
> *** currently about ~100 issues
> ** The operating assumption is that any non-resolved issues should have the 
> fixVersion set correctly if/when they are resolved in the future
> * S2: Use Jira's "Bulk Edit" feature to add comments *W/O SENDING EMAIL* to 
> every issue currently associated with fixVersion=6.0 or fixVersion=master
> ** The comments will include unique strings based on the specific query 
> done, and will map the list back to this issue (ex 
> {{LUCENE-7271_20160503_master}} and {{LUCENE-7271_20160503_60}})
> ** These comments will serve as a backup plan making it possible to find all 
> issues affected (by merging jira's concepts of 'master' and '6.0') after the 
> fact if need be.
> ** Queries to use for bulk edits:
> *** master: 
> https://issues.apache.org/jira/issues/?jql=project%20in%20%28LUCENE%2C%20SOLR%29%20AND%20fixVersion%20%3D%20master%20ORDER%20BY%20key%20DESC
> *** 6.0: 
> https://issues.apache.org/jira/issues/?jql=project%20in%20%28LUCENE%2C%20SOLR%29%20AND%20fixVersion%20%3D%206.0%20ORDER%20BY%20key%20DESC
> * S3: Use Jira's "Merge Versions" feature to merge "master" into "6.0"
> ** This needs to be done distinctly for both LUCENE and SOLR
> * S4: Add a new "master (7.0)" version to Jira
> ** This needs to be done distinctly for both LUCENE and SOLR
> * S5: audit every issue in the CSV file from S1 above to determine if the 
> issue should get fixVersion="master (7.0)" *added* to it
> ** if it has distinct commits to both master & branch_6x then 
> fixVersion="master (7.0)" should be added
> ** if it only ever had commits to master (ie: before branch_6x was made) then 
> no action needed
> ** if there are no obvious commits in the issue comments, then sanity check 
> why it has any fixVersion at all (can't reproduce? dup? etc...)
> * S6: Audit CHANGES.txt & git commits and *replace* fixVersion=6.0 with 
> fixVersion="master (7.0)" on the handful of issues where appropraite:
> ** Look at the 7.0 section of lucene/CHANGES.txt & solr/CHANGES.txt for new 
> features
> *** currently only 1 jira mentioned in either files in 7.0 section
> ** Use {{git co releases/lucene-solr/6.0.0 && git cherry -v master | egrep 
> '^\+'}} to find changesets on master that were not included in 6.0
> *** currently ~40 commits
> ** before removing fixVersion=6.0 from any of these issues, sanity check if 
> this is a deprecation type situation (involving diff commits in both 6.0 and 
> master) in which case fixVersion="master (7.0)" should be _added_ in addition 
> to fixVersion=6.0



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

[jira] [Commented] (SOLR-9065) Migrate solrj tests from AbstractDistribZkTestBase to SolrCloudTestCase

2016-05-05 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272873#comment-15272873
 ] 

David Smiley commented on SOLR-9065:


I agree with Joel.  And I don't think there's any hard feelings, just a "shout 
out" as Joel said would have been great.  [~romseygeek] can you please send a 
message to the dev list to announce your refactoring plans/schedule to help us 
coordinate our efforts?  That would be very helpful -- thanks in advance.

> Migrate solrj tests from AbstractDistribZkTestBase to SolrCloudTestCase
> ---
>
> Key: SOLR-9065
> URL: https://issues.apache.org/jira/browse/SOLR-9065
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.1, master
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-9065.patch
>
>
> AbstractDistribZkTestBase sets up collections using the legacy core-based 
> system, and does a lot of comparing things against control collections that 
> the SolrJ tests really don't require.  We should migrate these tests to using 
> SolrCloudTestCase instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9068) Solaris SSL test failures when using NullSecureRandom?

2016-05-05 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-9068:
---
Attachment: SOLR-9068.patch


bq. If this works I see not problem with the patch, because it is used during 
tests only. Right?

Correct, this is only a question of what SecureRandom source we use during 
tests (the idea being to prevent low-entropy jenkins machines from blocking 
when randomizing SSL testing).

bq. ... and for now disable the tests with assumeFalse(Constants.SUN_OS).

While this one test in particular seems to always trigger some Padding related 
problem in the SSLEngine, the underlying problem is something that *could* 
affect any SSL test (note that even with this test, the jenkins failures have 
*diff* Padding related Exceptions between master and 6x, presumably because 
some small amount of information in the Solr request/response payload is 
slightly diff between branches?) ... so if we do ultimately need to have 
special case logic when {{Constants.SUN_OS}} it shouldn't be specific to this 
test class/method, it should be part of the {{SSLTestConfig}} so we don't get 
confusing failures from any other test that might randomize SSL.

I've uploaded a new quick & dirty patch that uses a {{java.util.Random}} inside 
our {{NullSecureRandom}}.

[~thetaphi]: can you please try this new patch out?

* If this patch solves the problem I can come up with a better final fix that 
includes 2 diff "mock" SecureRandom instances and picks which one we use in 
SSLTestConfig depending on the {{Constants.SUN_OS}}.
* If this patch doesn't solve the problem then there is something more 
fundamentally odd going on on Solaris (maybe our custom SecureRandomSpi is 
tickling some assumption in the JVM?) and I'll give up and just change 
SSLTestConfig to simply use the platform default SecureRandom on that OS.

bq. If you like a can give you an account on the Solaris machine to try 
yourself (keep in mind, it has neither GIT nor ANT installed, totally blank - 
all is provided by Jenkins).

No thank you -- that sounds terrible.  This is/should-be the last patch I'll 
ask you to manually try on Solaris.

bq. Maybe we should open a bug report at Oracle ...

Probably, but from what I've seen you have had to deal with in the past, I don't 
have the time or patience to try and deal with their process.  If you want to 
file one by all means go ahead -- but you might want to wait until we figure out 
if using {{java.util.Random}} under the covers works as a workaround, or if there 
is just some fundamental bug when using custom SecureRandom instances.
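
A minimal sketch of the direction described above (this is not the attached 
SSLTestConfig patch; the class name and fixed seed here are invented): a 
SecureRandom whose SPI simply delegates to java.util.Random, so it can never 
block waiting for entropy:

{code:java}
import java.security.SecureRandom;
import java.security.SecureRandomSpi;
import java.util.Random;

/** Test-only: NOT cryptographically secure, but it never blocks on entropy. */
public class NotSecurePseudoRandomSketch extends SecureRandom {

  private static final SecureRandomSpi NOT_SECURE_SPI = new SecureRandomSpi() {
    private final Random random = new Random(42L); // fixed seed keeps tests deterministic

    @Override
    protected void engineSetSeed(byte[] seed) {
      // ignored on purpose -- this source is intentionally not secure
    }

    @Override
    protected void engineNextBytes(byte[] bytes) {
      random.nextBytes(bytes);
    }

    @Override
    protected byte[] engineGenerateSeed(int numBytes) {
      byte[] seed = new byte[numBytes];
      random.nextBytes(seed);
      return seed;
    }
  };

  public NotSecurePseudoRandomSketch() {
    super(NOT_SECURE_SPI, null); // no Provider is needed for a test stub like this
  }
}
{code}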




> Solaris SSL test failures when using NullSecureRandom?
> --
>
> Key: SOLR-9068
> URL: https://issues.apache.org/jira/browse/SOLR-9068
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Hoss Man
> Fix For: 4.9, master
>
> Attachments: SOLR-9068.Lucene-Solr-6.x-Solaris_110.log, 
> SOLR-9068.Lucene-Solr-master-Solaris_558.log, SOLR-9068.patch, SOLR-9068.patch
>
>
> In parent issue SOLR-5776, NullSecureRandom was introduced and SSLTestConfig 
> was refactored so that both client & server would use it to prevent blocked 
> threads waiting for entropy.
> Since those commits to master & branch_6x, both Solaris jenkins builds have 
> seen failures at the same spots in 
> TestMiniSolrCloudClusterSSL.testSslAndNoClientAuth - and looking at the logs 
> the root cause appears to be intranode communication failures due to 
> "javax.crypto.BadPaddingException"
> Perhaps the Solaris SSL impl has bugs in its padding code that are tickled 
> when the SecureRandom instance returns long strings of null bytes?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: solr.common.util.Pair --> commons.lang3.tuple.Pair

2016-05-05 Thread Shawn Heisey
On 5/5/2016 11:07 AM, Erick Erickson wrote:
> Or upgrade commons-lang

I did think of that, and thought it probably would not work because
commons-lang 2.x was almost guaranteed to be a sub-dependency to one of
Solr's other dependencies.

Just for giggles, I updated the ivy config to pull in 3.4 instead of
2.6.  I did "ant clean clean-jars clean-eclipse eclipse" and refreshed
the eclipse project ... I managed to figure out the correct ivy changes.

Then I used "organize imports" in Eclipse to fix the majority of the
errors - a bit of a sledgehammer approach, I admit.  There was one
source file where I had to adjust actual code, but the change was very
minor, and the javadoc suggested it wouldn't be an issue.  Then I ran
"ant clean server" and "bin\solr start -f" in the solr directory to see
if there would be any *obvious* problems where one of Solr's *other*
dependencies expected the legacy commons-lang jar.
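
For reference, most of that import rewrite is just the package rename from
org.apache.commons.lang to org.apache.commons.lang3 (illustrative lines only,
not taken from any specific file in the Solr tree):

    import org.apache.commons.lang3.StringUtils;
    import org.apache.commons.lang3.tuple.Pair;

    // In commons-lang 2.x these lived under org.apache.commons.lang;
    // lang3 renamed the package, so "organize imports" mostly rewrites the prefix.
    public class Lang3Sketch {
      public static void main(String[] args) {
        // The lang3 Pair from the subject line is built with the Pair.of(...) factory.
        Pair<String, Integer> shardCount = Pair.of("collection1", 2);
        System.out.println(shardCount.getLeft() + " -> " + shardCount.getRight());
        System.out.println(StringUtils.isBlank("   ")); // prints: true
      }
    }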

Surprisingly, there were no immediate indications of problems.  Solr
started and the admin UI worked.  I did not try any other operations.

After a little more investigating, and seeing a ton of cloud tests
failing, I did learn that zookeeper (even 3.5 alpha versions) has an
optional dependency on commons-lang-2.4, so I tried "bin\solr -e cloud
-noprompt".  That's when it became apparent that this wasn't going to
work.  There are errors in the log about commons.lang classes not being
found.

Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9065) Migrate solrj tests from AbstractDistribZkTestBase to SolrCloudTestCase

2016-05-05 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272830#comment-15272830
 ] 

Joel Bernstein commented on SOLR-9065:
--

But this patch had 7000 lines of code and it went up yesterday. I think that's 
too fast. 

> Migrate solrj tests from AbstractDistribZkTestBase to SolrCloudTestCase
> ---
>
> Key: SOLR-9065
> URL: https://issues.apache.org/jira/browse/SOLR-9065
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.1, master
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-9065.patch
>
>
> AbstractDistribZkTestBase sets up collections using the legacy core-based 
> system, and does a lot of comparing things against control collections that 
> the SolrJ tests really don't require.  We should migrate these tests to using 
> SolrCloudTestCase instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7241) Improve performance of geo3d for polygons with very large numbers of points

2016-05-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272827#comment-15272827
 ] 

ASF subversion and git services commented on LUCENE-7241:
-

Commit 2a3549a25766be556577d4ccc443e4de0358f7a8 in lucene-solr's branch 
refs/heads/branch_6x from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2a3549a ]

LUCENE-7241: Don't allocate GeoPoints we aren't going to return.


> Improve performance of geo3d for polygons with very large numbers of points
> ---
>
> Key: LUCENE-7241
> URL: https://issues.apache.org/jira/browse/LUCENE-7241
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial3d
>Affects Versions: master
>Reporter: Karl Wright
>Assignee: Karl Wright
> Fix For: master, 6.x
>
> Attachments: LUCENE-7241.patch, LUCENE-7241.patch, LUCENE-7241.patch, 
> LUCENE-7241.patch, LUCENE-7241.patch
>
>
> This ticket corresponds to LUCENE-7239, except it's for geo3d polygons.
> The trick here is to organize edges by some criteria, e.g. z value range, and 
> use that to avoid needing to go through all edges and/or tile large irregular 
> polygons.  Then we use the ability to quickly determine intersections to 
> figure out whether a point is within the polygon, or not.
> The current way geo3d polygons are constructed involves finding a single 
> point, or "pole", which all polygon points circle.  This point is known to be 
> either "in" or "out" based on the direction of the points.  So we have one 
> place of "truth" on the globe that is known at polygon setup time.
> If edges are organized by z value, where the z values for an edge are 
> computed by the standard way of computing bounds for a plane, then we can 
> readily organize edges into a tree structure such that it is easy to find all 
> edges we need to check for a given z value.  Then, we merely need to compute 
> how many intersections to consider as we navigate from the "truth" point to 
> the point being tested.  In practice, this means both having a tree that is 
> organized by z, and a tree organized by (x,y), since we need to navigate in 
> both directions.  But then we can cheaply count the number of intersections, 
> and once we do that, we know whether our point is "in" or "out".
> The other performance improvement we need is whether a given plane intersects 
> the polygon within provided bounds.  This can be done using the same two 
> trees (z and (x,y)), by virtue of picking which tree to use based on the 
> plane's minimum bounds in z or (x,y).  And, in practice, we might well use 
> three trees: one in x, one in y, and one in z, which would mean we didn't 
> have to compute longitudes ever.
> An implementation like this trades off the cost of finding point membership 
> in near O\(log\(n)) time vs. the extra expense per step of finding that 
> membership.  Setup of the query is O\(n) in this scheme, rather than O\(n^2) 
> in the current implementation, but once again each individual step is more 
> expensive.  Therefore I would expect we'd want to use the current 
> implementation for simpler polygons and this sort of implementation for 
> tougher polygons.  Choosing which to use is a topic for another ticket.
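
As a deliberately simplified 2D analogue of the idea above (this is not geo3d 
code; the class, method names, and coordinates are invented), indexing edges by 
one coordinate lets a crossing-parity membership test skip every edge that 
cannot possibly straddle the test point:

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.NavigableMap;
import java.util.TreeMap;

public class EdgeIndexedPolygon {
  private static final class Edge {
    final double x1, y1, x2, y2;
    Edge(double x1, double y1, double x2, double y2) {
      this.x1 = x1; this.y1 = y1; this.x2 = x2; this.y2 = y2;
    }
  }

  // Edges bucketed by their minimum y; analogous to the "tree organized by z".
  private final NavigableMap<Double, List<Edge>> edgesByMinY = new TreeMap<>();

  public EdgeIndexedPolygon(double[][] points) {
    for (int i = 0; i < points.length; i++) {
      double[] a = points[i];
      double[] b = points[(i + 1) % points.length];
      Edge e = new Edge(a[0], a[1], b[0], b[1]);
      edgesByMinY.computeIfAbsent(Math.min(e.y1, e.y2), k -> new ArrayList<>()).add(e);
    }
  }

  /** Ray-casting test: count crossings of a horizontal ray going in the +x direction. */
  public boolean contains(double x, double y) {
    int crossings = 0;
    // Only edges whose minY <= y can straddle the ray; everything above is skipped.
    for (List<Edge> bucket : edgesByMinY.headMap(y, true).values()) {
      for (Edge e : bucket) {
        boolean straddles = (e.y1 > y) != (e.y2 > y);
        if (straddles) {
          double xAtY = e.x1 + (y - e.y1) * (e.x2 - e.x1) / (e.y2 - e.y1);
          if (xAtY > x) {
            crossings++;
          }
        }
      }
    }
    return (crossings & 1) == 1; // odd number of crossings means "inside"
  }

  public static void main(String[] args) {
    EdgeIndexedPolygon square =
        new EdgeIndexedPolygon(new double[][] {{0, 0}, {4, 0}, {4, 4}, {0, 4}});
    System.out.println(square.contains(2, 2)); // true
    System.out.println(square.contains(5, 2)); // false
  }
}
{code}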



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9074) solrj CloudSolrClient.directUpdate tweak

2016-05-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272826#comment-15272826
 ] 

Mark Miller commented on SOLR-9074:
---

+1

> solrj CloudSolrClient.directUpdate tweak
> 
>
> Key: SOLR-9074
> URL: https://issues.apache.org/jira/browse/SOLR-9074
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Trivial
> Attachments: SOLR-9074.patch
>
>
> Defer two NamedList allocations and initialCapacity one of them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7241) Improve performance of geo3d for polygons with very large numbers of points

2016-05-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272824#comment-15272824
 ] 

ASF subversion and git services commented on LUCENE-7241:
-

Commit d4c5586032c9e24fad419958da3e848684703e61 in lucene-solr's branch 
refs/heads/master from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d4c5586 ]

LUCENE-7241: Don't allocate GeoPoints we aren't going to return.


> Improve performance of geo3d for polygons with very large numbers of points
> ---
>
> Key: LUCENE-7241
> URL: https://issues.apache.org/jira/browse/LUCENE-7241
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial3d
>Affects Versions: master
>Reporter: Karl Wright
>Assignee: Karl Wright
> Fix For: master, 6.x
>
> Attachments: LUCENE-7241.patch, LUCENE-7241.patch, LUCENE-7241.patch, 
> LUCENE-7241.patch, LUCENE-7241.patch
>
>
> This ticket corresponds to LUCENE-7239, except it's for geo3d polygons.
> The trick here is to organize edges by some criteria, e.g. z value range, and 
> use that to avoid needing to go through all edges and/or tile large irregular 
> polygons.  Then we use the ability to quickly determine intersections to 
> figure out whether a point is within the polygon, or not.
> The current way geo3d polygons are constructed involves finding a single 
> point, or "pole", which all polygon points circle.  This point is known to be 
> either "in" or "out" based on the direction of the points.  So we have one 
> place of "truth" on the globe that is known at polygon setup time.
> If edges are organized by z value, where the z values for an edge are 
> computed by the standard way of computing bounds for a plane, then we can 
> readily organize edges into a tree structure such that it is easy to find all 
> edges we need to check for a given z value.  Then, we merely need to compute 
> how many intersections to consider as we navigate from the "truth" point to 
> the point being tested.  In practice, this means both having a tree that is 
> organized by z, and a tree organized by (x,y), since we need to navigate in 
> both directions.  But then we can cheaply count the number of intersections, 
> and once we do that, we know whether our point is "in" or "out".
> The other performance improvement we need is whether a given plane intersects 
> the polygon within provided bounds.  This can be done using the same two 
> trees (z and (x,y)), by virtue of picking which tree to use based on the 
> plane's minimum bounds in z or (x,y).  And, in practice, we might well use 
> three trees: one in x, one in y, and one in z, which would mean we didn't 
> have to compute longitudes ever.
> An implementation like this trades off the cost of finding point membership 
> in near O\(log\(n)) time vs. the extra expense per step of finding that 
> membership.  Setup of the query is O\(n) in this scheme, rather than O\(n^2) 
> in the current implementation, but once again each individual step is more 
> expensive.  Therefore I would expect we'd want to use the current 
> implementation for simpler polygons and this sort of implementation for 
> tougher polygons.  Choosing which to use is a topic for another ticket.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9076) Update to Hadoop 2.7.2

2016-05-05 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-9076:
--
Attachment: SOLR-9076.patch

> Update to Hadoop 2.7.2
> --
>
> Key: SOLR-9076
> URL: https://issues.apache.org/jira/browse/SOLR-9076
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-9076.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9076) Update to Hadoop 2.7.2

2016-05-05 Thread Mark Miller (JIRA)
Mark Miller created SOLR-9076:
-

 Summary: Update to Hadoop 2.7.2
 Key: SOLR-9076
 URL: https://issues.apache.org/jira/browse/SOLR-9076
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9065) Migrate solrj tests from AbstractDistribZkTestBase to SolrCloudTestCase

2016-05-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272797#comment-15272797
 ] 

Mark Miller edited comment on SOLR-9065 at 5/5/16 6:30 PM:
---

There was a heads up on this issue. 9 times out of 10, this would have caused 
no concern. These kind of changes are made all the time. If we had to reach out 
to each person that might give a damn and wait a week for every change, we 
would slow down a little.

It's commit then review. Alan committed, now you are reviewing. That is how we 
do it.


was (Author: markrmil...@gmail.com):
There was a heads up on this issue. 9 times out of 10, this would have caused 
no concern. These kind of changes are made all the time. If we had to reach out 
to each person that might give a damn and wait a week for every change, we 
would slow down a little.

It's commit then review. Alan committed, no you are reviewing. That is how we 
do it.

> Migrate solrj tests from AbstractDistribZkTestBase to SolrCloudTestCase
> ---
>
> Key: SOLR-9065
> URL: https://issues.apache.org/jira/browse/SOLR-9065
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.1, master
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-9065.patch
>
>
> AbstractDistribZkTestBase sets up collections using the legacy core-based 
> system, and does a lot of comparing things against control collections that 
> the SolrJ tests really don't require.  We should migrate these tests to using 
> SolrCloudTestCase instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9065) Migrate solrj tests from AbstractDistribZkTestBase to SolrCloudTestCase

2016-05-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272797#comment-15272797
 ] 

Mark Miller commented on SOLR-9065:
---

There was a heads up on this issue. 9 times out of 10, this would have caused 
no concern. These kind of changes are made all the time. If we had to reach out 
to each person that might give a damn and wait a week for every change, we 
would slow down a little.

It's commit then review. Alan committed, no you are reviewing. That is how we 
do it.

> Migrate solrj tests from AbstractDistribZkTestBase to SolrCloudTestCase
> ---
>
> Key: SOLR-9065
> URL: https://issues.apache.org/jira/browse/SOLR-9065
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.1, master
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-9065.patch
>
>
> AbstractDistribZkTestBase sets up collections using the legacy core-based 
> system, and does a lot of comparing things against control collections that 
> the SolrJ tests really don't require.  We should migrate these tests to using 
> SolrCloudTestCase instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9065) Migrate solrj tests from AbstractDistribZkTestBase to SolrCloudTestCase

2016-05-05 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272796#comment-15272796
 ] 

Joel Bernstein commented on SOLR-9065:
--

Yeah, but this was a wholesale change, committed one day after the patch went 
up. 

It's not hard to have some consideration for other people's work. A simple heads 
up will suffice.

> Migrate solrj tests from AbstractDistribZkTestBase to SolrCloudTestCase
> ---
>
> Key: SOLR-9065
> URL: https://issues.apache.org/jira/browse/SOLR-9065
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.1, master
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-9065.patch
>
>
> AbstractDistribZkTestBase sets up collections using the legacy core-based 
> system, and does a lot of comparing things against control collections that 
> the SolrJ tests really don't require.  We should migrate these tests to using 
> SolrCloudTestCase instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9075) Look at using hdfs-client jar in HDFS 2.8 for smaller core dependency.

2016-05-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272786#comment-15272786
 ] 

ASF GitHub Bot commented on SOLR-9075:
--

Github user markrmiller commented on the pull request:

https://github.com/apache/lucene-solr/pull/34#issuecomment-217233476
  
I filed https://issues.apache.org/jira/browse/SOLR-9075 to look at 
shrinking the hdfs client dependency jars.


> Look at using hdfs-client jar in HDFS 2.8 for smaller core dependency.
> --
>
> Key: SOLR-9075
> URL: https://issues.apache.org/jira/browse/SOLR-9075
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


