[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_102) - Build # 17942 - Still Failing!

2016-09-30 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17942/
Java: 64bit/jdk1.8.0_102 -XX:-UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 65559 lines...]
-ecj-javadoc-lint-src:
[mkdir] Created dir: /tmp/ecj256434149
 [ecj-lint] Compiling 988 source files to /tmp/ecj256434149
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 2. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 3. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 4. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 216)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 5. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 216)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 6. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 216)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 7. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/core/HdfsDirectoryFactory.java
 (at line 227)
 [ecj-lint] dir = new BlockDirectory(path, hdfsDir, cache, null, 
blockCacheReadEnabled, false, cacheMerges, cacheReadOnce);
 [ecj-lint] 
^^
 [ecj-lint] Resource leak: 'dir' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 8. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/handler/AnalysisRequestHandlerBase.java
 (at line 120)
 [ecj-lint] reader = cfiltfac.create(reader);
 [ecj-lint] 
 [ecj-lint] Resource leak: 'reader' is not closed at this location
 [ecj-lint] --
 [ecj-lint] 9. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/handler/AnalysisRequestHandlerBase.java
 (at line 144)
 [ecj-lint] return namedList;
 [ecj-lint] ^
 [ecj-lint] Resource leak: 'listBasedTokenStream' is not closed at this location
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 10. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/handler/ClassifyStream.java
 (at line 88)
 [ecj-lint] SolrCore solrCore = (SolrCore) solrCoreObj;
 [ecj-lint]  
 [ecj-lint] Resource leak: 'solrCore' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 11. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/handler/ReplicationHandler.java
 (at line 1252)
 [ecj-lint] DirectoryReader reader = s==null ? null : 
s.get().getIndexReader();
 [ecj-lint] ^^
 [ecj-lint] Resource leak: 'reader' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 12. WARNING in 

[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 423 - Unstable!

2016-09-30 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/423/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Expected 2 of 3 replicas to be active but only found 1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica3","base_url":"http://127.0.0.1:41053","node_name":"127.0.0.1:41053_","state":"active","leader":"true"}];
 clusterState: 
DocCollection(c8n_1x3_lf//collections/c8n_1x3_lf/state.json/20)={
  "replicationFactor":"3",
  "shards":{"shard1":{
      "range":"80000000-7fffffff",
      "state":"active",
      "replicas":{
        "core_node1":{
          "core":"c8n_1x3_lf_shard1_replica1",
          "base_url":"http://127.0.0.1:60660",
          "node_name":"127.0.0.1:60660_",
          "state":"down"},
        "core_node2":{
          "state":"down",
          "base_url":"http://127.0.0.1:50800",
          "core":"c8n_1x3_lf_shard1_replica2",
          "node_name":"127.0.0.1:50800_"},
        "core_node3":{
          "core":"c8n_1x3_lf_shard1_replica3",
          "base_url":"http://127.0.0.1:41053",
          "node_name":"127.0.0.1:41053_",
          "state":"active",
          "leader":"true"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected 2 of 3 replicas to be active but only found 
1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica3","base_url":"http://127.0.0.1:41053","node_name":"127.0.0.1:41053_","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf//collections/c8n_1x3_lf/state.json/20)={
  "replicationFactor":"3",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node1":{
  "core":"c8n_1x3_lf_shard1_replica1",
  "base_url":"http://127.0.0.1:60660;,
  "node_name":"127.0.0.1:60660_",
  "state":"down"},
"core_node2":{
  "state":"down",
  "base_url":"http://127.0.0.1:50800;,
  "core":"c8n_1x3_lf_shard1_replica2",
  "node_name":"127.0.0.1:50800_"},
"core_node3":{
  "core":"c8n_1x3_lf_shard1_replica3",
  "base_url":"http://127.0.0.1:41053;,
  "node_name":"127.0.0.1:41053_",
  "state":"active",
  "leader":"true",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
at 
__randomizedtesting.SeedInfo.seed([6991E8A37862746B:E1C5D779D69E1993]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:168)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:55)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 878 - Still Failing!

2016-09-30 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/878/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
No registered leader was found after waiting for 6ms , collection: 
c8n_1x3_lf slice: shard1

Stack Trace:
org.apache.solr.common.SolrException: No registered leader was found after 
waiting for 6ms , collection: c8n_1x3_lf slice: shard1
at 
__randomizedtesting.SeedInfo.seed([2BA3F3B441EE5142:A3F7CC6EEF123CBA]:0)
at 
org.apache.solr.common.cloud.ZkStateReader.getLeaderRetry(ZkStateReader.java:747)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:150)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:57)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Commented] (SOLR-9542) Kerberos delegation tokens requires missing Jackson library

2016-09-30 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15537782#comment-15537782
 ] 

David Smiley commented on SOLR-9542:


I don't know how deeply Jackson is required for this capability; let's say 
hypothetically it is and it'd be hard to switch out.  If that's the case, we 
could simply mark this dependency as "optional" in the Maven POM, and we can 
add docs to the ref guide on the dependencies needed.  I suspect *very* few 
people are using Kerberos to secure Solr.  People care about security but use 
other means.

If it's not particularly hard to switch, then let's do our collective users a 
favor and switch to our existing JSON parsing dependency: noggit.

> Kerberos delegation tokens requires missing Jackson library
> ---
>
> Key: SOLR-9542
> URL: https://issues.apache.org/jira/browse/SOLR-9542
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-9542.patch
>
>
> GET, RENEW or CANCEL operations for the delegation tokens support requires 
> the Solr server to have old jackson added as a dependency.
> Steps to reproduce the problem:
> 1) Configure Solr to use delegation tokens
> 2) Start Solr
> 3) Use a SolrJ application to get a delegation token.
> The server throws the following:
> {code}
> java.lang.NoClassDefFoundError: org/codehaus/jackson/map/ObjectMapper
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.managementOperation(DelegationTokenAuthenticationHandler.java:279)
> at 
> org.apache.solr.security.KerberosPlugin$RequestContinuesRecorderAuthenticationHandler.managementOperation(KerberosPlugin.java:566)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:514)
> at 
> org.apache.solr.security.DelegationTokenKerberosFilter.doFilter(DelegationTokenKerberosFilter.java:123)
> at 
> org.apache.solr.security.KerberosPlugin.doAuthenticate(KerberosPlugin.java:265)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.authenticateRequest(SolrDispatchFilter.java:318)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:518)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
> at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
> at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
> at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
> at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (SOLR-9527) Solr RESTORE api doesn't distribute the replicas uniformly

2016-09-30 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15537773#comment-15537773
 ] 

David Smiley commented on SOLR-9527:


FYI, when I originally worked on the backup capability, I didn't add support for 
createNodeSet because I wasn't yet sure how to resolve the possibility of 
duplicating logic with regular collection creation.  I gave a cursory look at 
the patch here; it's interesting that the patch is actually kinda small.  I 
should look at it more when I have time.  I noticed this doesn't seem to be 
tested; right?  I saw a test modification but it doesn't seem to test the 
distribution.

> Solr RESTORE api doesn't distribute the replicas uniformly 
> ---
>
> Key: SOLR-9527
> URL: https://issues.apache.org/jira/browse/SOLR-9527
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.1
>Reporter: Hrishikesh Gadre
> Attachments: SOLR-9527.patch, SOLR-9527.patch, SOLR-9527.patch
>
>
> Please refer to this email thread for details,
> http://lucene.markmail.org/message/ycun4x5nx7lwj5sk?q=solr+list:org%2Eapache%2Elucene%2Esolr-user+order:date-backward=1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: JSON Facet "allBuckets" behavior

2016-09-30 Thread David Smiley
Hello,
Please send your message to the solr-user list.  dev is for internal
development.

On Fri, Sep 30, 2016 at 11:09 AM Karthik Ramachandran <
kramachand...@commvault.com> wrote:

> While performing JSON faceting with “allBuckets” and “mincount”, I am not
> sure whether I am expecting a wrong result or there is a bug.
>
>
>
> By the “allBuckets” definition, that bucket in the response represents the
> union of all of the buckets.
>
>
>
> *Schema:*
>
> <field name="filename" type="string" indexed="true" stored="true" docValues="true" />
>
> <field name="size" type="int" indexed="true" stored="true" docValues="true" />
>
> <field name="id" type="string" indexed="true" stored="true" />
>
>
>
> *Dataset:*
>
> <doc><field name="id">1</field><field name="filename">filename1</field><field name="size">1</field></doc>
>
> <doc><field name="id">2</field><field name="filename">filename2</field><field name="size">1</field></doc>
>
> <doc><field name="id">3</field><field name="filename">filename3</field><field name="size">1</field></doc>
>
> <doc><field name="id">4</field><field name="filename">filename4</field><field name="size">1</field></doc>
>
> <doc><field name="id">5</field><field name="filename">filename5</field><field name="size">1</field></doc>
>
> <doc><field name="id">6</field><field name="filename">filename1</field><field name="size">1</field></doc>
>
> <doc><field name="id">7</field><field name="filename">filename2</field><field name="size">1</field></doc>
>
> <doc><field name="id">8</field><field name="filename">filename3</field><field name="size">1</field></doc>
>
> <doc><field name="id">9</field><field name="filename">filename4</field><field name="size">1</field></doc>
>
> <doc><field name="id">10</field><field name="filename">filename1</field><field name="size">1</field></doc>
>
> <doc><field name="id">11</field><field name="filename">filename2</field><field name="size">1</field></doc>
>
> <doc><field name="id">12</field><field name="filename">filename3</field><field name="size">1</field></doc>
>
> <doc><field name="id">13</field><field name="filename">filename1</field><field name="size">1</field></doc>
>
> <doc><field name="id">14</field><field name="filename">filename2</field><field name="size">1</field></doc>
>
> <doc><field name="id">15</field><field name="filename">filename1</field><field name="size">1</field></doc>
>
>
>
> For my dataset, with request
>
> http://localhost:8983/solr/jasonfacettest/select/?q=*:*&rows=0&json.facet=
> {"sumOfDuplicates":{"type":"terms","field":"filename","mincount":2,"numBuckets":true,"allBuckets":true,"sort":"sum
> desc","facet":{"sum":"sum(size)"}}}
>
>
>
> below is the response,
>
>
> "response":{"numFound":15,"start":0,"docs":[]},"facets":{"count":15,"sumOfDuplicates":{"numBuckets":4
> ,"allBuckets":{"count":15,"sum":15.0}
> ,"buckets":[{"val":"filename1","count":5,"sum":5.0},{"val":"filename2","count":4,"sum":4.0},{"val":"filename3","count":3,"sum":3.0},{"val":"filename4","count":2,"sum":2.0}]}}}
>
>
>
> I was wondering why the result is not the following, since I have “mincount”:2
>
>
> "response":{"numFound":15,"start":0,"docs":[]},"facets":{"count":15,"sumOfDuplicates":{"numBuckets":4
> ,"allBuckets":{"count":14,"sum":14.0}
> ,"buckets":[{"val":"filename1","count":5,"sum":5.0},{"val":"filename2","count":4,"sum":4.0},{"val":"filename3","count":3,"sum":3.0},{"val":"filename4","count":2,"sum":2.0}]}}}
>
>
>
> Thanks for the help!!
>
>
>
> With Thanks & Regards
> Karthik Ramachandran
> Please don't print this e-mail unless you really need to
>
>
>
-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_102) - Build # 17941 - Still Failing!

2016-09-30 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17941/
Java: 64bit/jdk1.8.0_102 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDynamicLoading

Error Message:
Could not get expected value  'X val changed' for path 'x' full output: {   
"responseHeader":{ "status":0, "QTime":0},   "params":{"wt":"json"},   
"context":{ "webapp":"", "path":"/test1", "httpMethod":"GET"},   
"class":"org.apache.solr.core.BlobStoreTestRequestHandler",   "x":"X val"},  
from server:  null

Stack Trace:
java.lang.AssertionError: Could not get expected value  'X val changed' for 
path 'x' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "params":{"wt":"json"},
  "context":{
"webapp":"",
"path":"/test1",
"httpMethod":"GET"},
  "class":"org.apache.solr.core.BlobStoreTestRequestHandler",
  "x":"X val"},  from server:  null
at 
__randomizedtesting.SeedInfo.seed([4BF1102F5FA7B2E:DCF23C550227DE8E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:535)
at 
org.apache.solr.core.TestDynamicLoading.testDynamicLoading(TestDynamicLoading.java:249)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-9585) Solr gc log filename should include port number

2016-09-30 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15537706#comment-15537706
 ] 

David Smiley commented on SOLR-9585:


I like the idea of adding it conditionally when not 8983.  Any opinions on this 
[~janhoy]?  I know you've been involved in logging lately.
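
A rough sketch of what that could look like in bin/solr (untested; variable 
names taken from the snippets quoted below):

{code}
# include the port in the gc log name only when it isn't the default 8983
gc_log_name="solr_gc.log"
if [ -n "$SOLR_PORT" ] && [ "$SOLR_PORT" != "8983" ]; then
  gc_log_name="solr_gc_${SOLR_PORT}.log"
fi
GC_LOG_OPTS=($GC_LOG_OPTS "$gc_log_flag:$SOLR_LOGS_DIR/$gc_log_name")
{code}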

> Solr gc log filename should include port number
> ---
>
> Key: SOLR-9585
> URL: https://issues.apache.org/jira/browse/SOLR-9585
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Michael Braun
>Priority: Minor
>
> In our setup, we have two solr applications running on the same box on 
> different ports. We are sharing the same distribution folder However, both 
> instances log garbage collection to the same file, as shown by these two 
> parts from the solr bash shell script (this is from master):
> {code}
> if [ -f "$SOLR_LOGS_DIR/solr_gc.log" ]; then
>   if $verbose ; then
> echo "Backing up $SOLR_LOGS_DIR/solr_gc.log"
>   fi
>   mv "$SOLR_LOGS_DIR/solr_gc.log" "$SOLR_LOGS_DIR/solr_gc_log_$(date 
> +"%Y%m%d_%H%M")"
> fi
> {code}
> {code}
> # if verbose gc logging enabled, setup the location of the log file
> if [ "$GC_LOG_OPTS" != "" ]; then
>   gc_log_flag="-Xloggc"
>   if [ "$JAVA_VENDOR" == "IBM J9" ]; then
> gc_log_flag="-Xverbosegclog"
>   fi
>   GC_LOG_OPTS=($GC_LOG_OPTS "$gc_log_flag:$SOLR_LOGS_DIR/solr_gc.log")
> else
>   GC_LOG_OPTS=()
> fi
> {code}
> I'm proposing appending the $SOLR_PORT value into the log file name (perhaps 
> only if non-default) so we can have both logs in our case. I'm happy to 
> provide a patch assuming this direction is desired.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7438) UnifiedHighlighter

2016-09-30 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-7438:
-
Attachment: LUCENE-7438.patch

(I'm attaching the patch)
All new files; no changes to anything existing.

I plan to commit Tuesday to give even more time for review.

I'd also like to commit the patch for the benchmark module (but without the 
query files polluting the file listing?).  However I think for it to be okay, 
it needs to go further and remove the way highlighters were benchmarked before 
this, since it's too hacky/weird to see both, particularly since the existing 
mechanism has hooks into ReadTask (getBenchmarkHighlighter()). I figure the 
entire benchmark module can change at our will without back-compat concern.  

While looking at the FVH and WEH I noticed a feature in which term vecs from 
multiple fields can be used to highlight one field -- useful when you analyze 
the text in different ways into different fields (e.g. stemming vs not).  We're 
actually doing that with the UH in Bloomberg (offset source agnostic of course) 
but I didn't think to add it as a first-class feature to the UH.  Now I think 
we should in a follow-up issue.  I think that requirement is causing us to want 
things like StrictPhraseHelper to be public but it could be moved to package 
protected then, I think.
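
For reviewers who want a feel for the API, here is a minimal usage sketch 
based on the attached patch (subject to change before commit; the field name 
"body" and the surrounding objects are assumptions):

{code}
import java.io.IOException;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.search.uhighlight.UnifiedHighlighter;

class UnifiedHighlighterUsage {
  // Returns one highlighted snippet string per hit in topDocs.
  static String[] highlightBody(IndexSearcher searcher, Analyzer analyzer,
                                Query query, TopDocs topDocs) throws IOException {
    UnifiedHighlighter highlighter = new UnifiedHighlighter(searcher, analyzer);
    // offsets come from postings, term vectors, or re-analysis, as available
    return highlighter.highlight("body", query, topDocs);
  }
}
{code}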

> UnifiedHighlighter
> --
>
> Key: LUCENE-7438
> URL: https://issues.apache.org/jira/browse/LUCENE-7438
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/highlighter
>Affects Versions: 6.2
>Reporter: Timothy M. Rodriguez
>Assignee: David Smiley
> Attachments: LUCENE-7438.patch, LUCENE_7438_UH_benchmark.patch
>
>
> The UnifiedHighlighter is an evolution of the PostingsHighlighter that is 
> able to highlight using offsets in either postings, term vectors, or from 
> analysis (a TokenStream). Lucene’s existing highlighters are mostly 
> demarcated along offset source lines, whereas here it is unified -- hence 
> this proposed name. In this highlighter, the offset source strategy is 
> separated from the core highlighting functionality. The UnifiedHighlighter 
> further improves on the PostingsHighlighter’s design by supporting accurate 
> phrase highlighting using an approach similar to the standard highlighter’s 
> WeightedSpanTermExtractor. The next major improvement is a hybrid offset 
> source strategy that utilizes postings and “light” term vectors (i.e. just the 
> terms) for highlighting multi-term queries (wildcards) without resorting to 
> analysis. Phrase highlighting and wildcard highlighting can both be disabled 
> if you’d rather highlight a little faster albeit not as accurately reflecting 
> the query.
> We’ve benchmarked an earlier version of this highlighter comparing it to the 
> other highlighters and the results were exciting! It’s tempting to share 
> those results but it’s definitely due for another benchmark, so we’ll work on 
> that. Performance was the main motivator for creating the UnifiedHighlighter, 
> as the standard Highlighter (the only one meeting Bloomberg Law’s accuracy 
> requirements) wasn’t fast enough, even with term vectors along with several 
> improvements we contributed back, and even after we forked it to highlight in 
> multiple threads.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5041) Add a test to make sure that a leader always recovers from log on startup

2016-09-30 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15537648#comment-15537648
 ] 

Cao Manh Dat commented on SOLR-5041:


Thanks a lot for reviewing the patch.

> Add a test to make sure that a leader always recovers from log on startup
> -
>
> Key: SOLR-5041
> URL: https://issues.apache.org/jira/browse/SOLR-5041
> Project: Solr
>  Issue Type: Test
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-5041.patch, SOLR-5401.patch
>
>
> From my comment on SOLR-4997:
> bq. I fixed a bug that I had introduced which skipped log recovery on startup 
> for all leaders instead of only sub shard leaders. I caught this only because 
> I was doing another line-by-line review of all my changes. We should have a 
> test which catches such a condition.
> Add a test which tests that leaders always recover from log on startup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_102) - Build # 1842 - Failure!

2016-09-30 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1842/
Java: 32bit/jdk1.8.0_102 -client -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CdcrVersionReplicationTest.testCdcrDocVersions

Error Message:
Java heap space

Stack Trace:
java.lang.OutOfMemoryError: Java heap space
at 
__randomizedtesting.SeedInfo.seed([A31DC63A844CC9E1:5B8BCD98762A26FD]:0)
at java.util.Arrays.copyOf(Arrays.java:3332)
at 
java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)
at 
java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:448)
at java.lang.StringBuilder.append(StringBuilder.java:136)
at java.lang.StringBuilder.append(StringBuilder.java:131)
at org.apache.solr.core.Diagnostics.logThreadDumps(Diagnostics.java:47)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForCollectionToDisappear(AbstractDistribZkTestBase.java:210)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.waitForCollectionToDisappear(BaseCdcrDistributedZkTest.java:496)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.startServers(BaseCdcrDistributedZkTest.java:596)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.createSourceCollection(BaseCdcrDistributedZkTest.java:346)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.baseBefore(BaseCdcrDistributedZkTest.java:168)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:905)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)




Build Log:
[...truncated 10952 lines...]
   [junit4] Suite: org.apache.solr.cloud.CdcrVersionReplicationTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.CdcrVersionReplicationTest_A31DC63A844CC9E1-001/init-core-data-001
   [junit4]   2> 268154 INFO  
(SUITE-CdcrVersionReplicationTest-seed#[A31DC63A844CC9E1]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false) via: 
@org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN)
   [junit4]   2> 268155 INFO  
(SUITE-CdcrVersionReplicationTest-seed#[A31DC63A844CC9E1]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 268157 INFO  
(TEST-CdcrVersionReplicationTest.testCdcrDocVersions-seed#[A31DC63A844CC9E1]) [ 
   ] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 268157 INFO  (Thread-536) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 268157 INFO  (Thread-536) [] o.a.s.c.ZkTestServer Starting 
server
   [junit4]   2> 268257 INFO  
(TEST-CdcrVersionReplicationTest.testCdcrDocVersions-seed#[A31DC63A844CC9E1]) [ 
   ] o.a.s.c.ZkTestServer start zk server on port:33061
   [junit4]   2> 268265 WARN  (NIOServerCxn.Factory:0.0.0.0/0.0.0.0:0) [] 

[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3575 - Failure!

2016-09-30 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3575/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 65508 lines...]
-ecj-javadoc-lint-src:
[mkdir] Created dir: 
/var/folders/qg/h2dfw5s161s51l2bn79mrb7rgn/T/ecj1205468261
 [ecj-lint] Compiling 988 source files to 
/var/folders/qg/h2dfw5s161s51l2bn79mrb7rgn/T/ecj1205468261
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/Users/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/Users/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 2. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 3. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 4. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 216)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 5. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 216)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 6. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 216)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 7. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/core/HdfsDirectoryFactory.java
 (at line 227)
 [ecj-lint] dir = new BlockDirectory(path, hdfsDir, cache, null, 
blockCacheReadEnabled, false, cacheMerges, cacheReadOnce);
 [ecj-lint] 
^^
 [ecj-lint] Resource leak: 'dir' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 8. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/handler/AnalysisRequestHandlerBase.java
 (at line 120)
 [ecj-lint] reader = cfiltfac.create(reader);
 [ecj-lint] 
 [ecj-lint] Resource leak: 'reader' is not closed at this location
 [ecj-lint] --
 [ecj-lint] 9. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/handler/AnalysisRequestHandlerBase.java
 (at line 144)
 [ecj-lint] return namedList;
 [ecj-lint] ^
 [ecj-lint] Resource leak: 'listBasedTokenStream' is not closed at this location
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 10. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/handler/ClassifyStream.java
 (at line 88)
 [ecj-lint] SolrCore solrCore = (SolrCore) solrCoreObj;
 [ecj-lint]  
 [ecj-lint] Resource leak: 'solrCore' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 11. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/handler/ReplicationHandler.java
 (at line 1252)
 [ecj-lint] DirectoryReader reader = s==null ? null : 
s.get().getIndexReader();
 [ecj-lint] ^^
 [ecj-lint] Resource 

[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_102) - Build # 17940 - Still Failing!

2016-09-30 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17940/
Java: 32bit/jdk1.8.0_102 -server -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 63862 lines...]
-ecj-javadoc-lint-src:
[mkdir] Created dir: /tmp/ecj706235058
 [ecj-lint] Compiling 988 source files to /tmp/ecj706235058
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 2. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 3. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 4. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 216)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 5. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 216)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 6. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 216)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 7. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/core/HdfsDirectoryFactory.java
 (at line 227)
 [ecj-lint] dir = new BlockDirectory(path, hdfsDir, cache, null, 
blockCacheReadEnabled, false, cacheMerges, cacheReadOnce);
 [ecj-lint] 
^^
 [ecj-lint] Resource leak: 'dir' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 8. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/handler/AnalysisRequestHandlerBase.java
 (at line 120)
 [ecj-lint] reader = cfiltfac.create(reader);
 [ecj-lint] 
 [ecj-lint] Resource leak: 'reader' is not closed at this location
 [ecj-lint] --
 [ecj-lint] 9. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/handler/AnalysisRequestHandlerBase.java
 (at line 144)
 [ecj-lint] return namedList;
 [ecj-lint] ^
 [ecj-lint] Resource leak: 'listBasedTokenStream' is not closed at this location
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 10. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/handler/ClassifyStream.java
 (at line 88)
 [ecj-lint] SolrCore solrCore = (SolrCore) solrCoreObj;
 [ecj-lint]  
 [ecj-lint] Resource leak: 'solrCore' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 11. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/handler/ReplicationHandler.java
 (at line 1252)
 [ecj-lint] DirectoryReader reader = s==null ? null : 
s.get().getIndexReader();
 [ecj-lint] ^^
 [ecj-lint] Resource leak: 'reader' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 12. WARNING in 

[jira] [Commented] (SOLR-9579) Reuse lucene FieldType in createField flow during ingestion

2016-09-30 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15537371#comment-15537371
 ] 

Yonik Seeley commented on SOLR-9579:


A couple of years ago I prototyped making SchemaField implement lucene's 
IndexableFieldType (not sure if I opened a JIRA though).  A look at that 
interface today suggests it should still be relatively easy.
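
Roughly, the shape would be (a from-memory sketch, not a patch; the method set 
follows the current IndexableFieldType interface, with arbitrary defaults):

{code}
import org.apache.lucene.index.DocValuesType;
import org.apache.lucene.index.IndexOptions;
import org.apache.lucene.index.IndexableFieldType;

// Illustrative stand-in: the schema object itself answers the
// IndexableFieldType questions, so no per-document Lucene FieldType is needed.
class SchemaFieldTypeSketch implements IndexableFieldType {
  @Override public boolean stored() { return true; }
  @Override public boolean tokenized() { return true; }
  @Override public boolean storeTermVectors() { return false; }
  @Override public boolean storeTermVectorOffsets() { return false; }
  @Override public boolean storeTermVectorPositions() { return false; }
  @Override public boolean storeTermVectorPayloads() { return false; }
  @Override public boolean omitNorms() { return false; }
  @Override public IndexOptions indexOptions() { return IndexOptions.DOCS_AND_FREQS_AND_POSITIONS; }
  @Override public DocValuesType docValuesType() { return DocValuesType.NONE; }
  @Override public int pointDimensionCount() { return 0; }
  @Override public int pointNumBytes() { return 0; }
}
{code}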

> Reuse lucene FieldType in createField flow during ingestion
> ---
>
> Key: SOLR-9579
> URL: https://issues.apache.org/jira/browse/SOLR-9579
> Project: Solr
>  Issue Type: Improvement
>  Components: Schema and Analysis
>Affects Versions: 6.x, master (7.0)
> Environment: This has been primarily tested on Windows 8 and Windows 
> Server 2012 R2
>Reporter: John Call
>Priority: Minor
>  Labels: gc, memory, reuse
> Fix For: 6.x, master (7.0)
>
> Attachments: SOLR-9579.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> During ingestion createField in FieldType is being called for each field on 
> each document. For the subclasses of FieldType without their own 
> implementation of createField the lucene version of FieldType is created to 
> be stored along with the value. However the lucene FieldType object is 
> identical when created from the same SchemaField. In testing ingestion of one 
> million rows with 22 fields each, we were creating 22 million lucene FieldType 
> objects when only 22 are needed. Solr should lazily initialize a lucene 
> FieldType for each SchemaField and reuse them for future ingestion. Not only 
> does this relieve memory usage but also relieves significant pressure on the 
> gc.
> There are also subclasses of Solr FieldType which create separate Lucene 
> FieldType for stored fields instead of reusing the static in StoredField.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7826) Permission issues when creating cores with bin/solr

2016-09-30 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15537330#comment-15537330
 ] 

Hoss Man commented on SOLR-7826:


I know i'm late to the party, but FWIW: I think adding a {{\-force}} option and 
treating "root" as special still leaves a lot of room for the underlying 
problem to occur (and in general i think adding a {{\-force}} option that's 
only supported by one (sub-)command is a bad idea -- more on that below) ...

Just rejecting {{root}} won't help if {{solr}} is the effective UID of the 
process, but user {{bob}} runs {{bin/solr create}} and the new core directories 
wind up owned by {{bob}} but not readable by {{solr}}.  Likewise, running as 
{{root}} may be perfectly fine, if the original install (foolishly) installed 
as {{root}}.

What really matters is that if {{bin/solr create}} is used to try and create 
new core directories, those new core directories should *really* be owned by 
whatever user owns the {{cores}} parent directory, and have the same 
{{user:group}} permissions -- because that way, regardless of what effective UID 
the solr process is running under, there's no risk that Solr will be able to 
_find_ the new core dir, but not _read_ the new core dir.

ie: 

* we don't have to do anything special to keep track of what user installed 
solr, or treat {{root}} special
* all we have to do is compare {{whoami}} to {{stat -c '%U'}} on the {{cores}} 
directory, and complain if they don't match
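
In shell terms, something like this (sketch only; {{$CORES_DIR}} is a 
placeholder for wherever the cores actually live):

{code}
cores_owner="$(stat -c '%U' "$CORES_DIR")"
if [ "$(whoami)" != "$cores_owner" ]; then
  echo "ERROR: $CORES_DIR is owned by '$cores_owner' but you are '$(whoami)';"
  echo "       cores created now may not be readable by the Solr process."
  exit 1
fi
{code}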
 


My general thoughts on {{\-force}}:

even if we switch to comparing the current user to the directory owner instead 
of treating "root" as special, a {{\-force}} option could still be supported i 
guess, but doesn't really seem necessary and in general i would say we should 
avoid it unless/until we really think through _all_ of the possible commands 
where we might want to enforce some restrictions unless {{-force}} is 
specified.  because a user who sees that there is a {{-force}} option for some 
{{bin/solr}} commands would have a reasonable expectation that they will be 
"protected" unless they specify {{-force}} on other risky solr commands as well 
(ie: deleting a core that's currently LOADed?, delete ZK nodes currently used 
by a collection? downloading files from ZK and overwriting existing files on 
disk? uploading a config set and overwriting an existing config set with the 
same name? etc...)

In general, i'm -0 to the changes made by this issue - i don't think Solr, on 
the whole, is better off with these changes, and I'd encourage the folks who 
worked on this jira to consider rolling them back and replacing them with a 
{{`whoami` == `stat -c '%U' .../cores`}} type comparison instead. 

> Permission issues when creating cores with bin/solr
> ---
>
> Key: SOLR-7826
> URL: https://issues.apache.org/jira/browse/SOLR-7826
> Project: Solr
>  Issue Type: Improvement
>Reporter: Shawn Heisey
>Assignee: Jan Høydahl
>Priority: Minor
>  Labels: newdev
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-7826.patch, SOLR-7826.patch
>
>
> Ran into an interesting situation on IRC today.
> Solr has been installed as a service using the shell script 
> install_solr_service.sh ... so it is running as an unprivileged user.
> User is running "bin/solr create" as root.  This causes permission problems, 
> because the script creates the core's instanceDir with root ownership, then 
> when Solr is instructed to actually create the core, it cannot create the 
> dataDir.
> Enhancement idea:  When the install script is used, leave breadcrumbs 
> somewhere so that the "create core" section of the main script can find it 
> and su to the user specified during install.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9579) Reuse lucene FieldType in createField flow during ingestion

2016-09-30 Thread John Call (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15537281#comment-15537281
 ] 

John Call commented on SOLR-9579:
-

My reasoning for lazily creating the FieldType is that in the following three 
scenarios they are simply taking up memory with no use.
1) Systems where the implementations of SchemaField override createField and 
thus don't use this object. For example TrieField, PointType, and EnumField 
will not use this object at all. The main use I see is for text fields.
2) This flow is only used on the ingestion path; for systems where queries are 
the main use, dedicating any extra memory per field seems unnecessary.
3) For high ingestion systems with thousands of schema fields and sparse usage 
of some, creating them all upfront could have a slight performance impact on 
startup. Additionally, creating it lazily should still be faster than the 
current code.

In regard to benchmarking, any suggestions would be appreciated; I'm not sure if 
there is any standardization on which schema or data set to use (I believe I 
have seen others discussing using the GettingStarted but I've never looked at 
how much data that contains).

I understand that the memory impact of creating the object in the constructor 
is on the order of KB for most systems so I can easily make that change if 
there is consensus around it.
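
To make the lazy path concrete, here is a sketch of the shape I have in mind 
(names are illustrative; this is not the attached patch):

{code}
import org.apache.lucene.document.FieldType;
import org.apache.lucene.index.IndexOptions;

class LazyFieldTypeHolder {
  private volatile FieldType luceneType; // built at most once, then reused per document

  FieldType getOrCreate(boolean stored, boolean tokenized) {
    FieldType t = luceneType;
    if (t == null) {
      synchronized (this) {
        t = luceneType;
        if (t == null) {
          t = new FieldType();
          t.setStored(stored);
          t.setTokenized(tokenized);
          t.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS);
          t.freeze(); // frozen, so sharing one instance across threads is safe
          luceneType = t;
        }
      }
    }
    return t;
  }
}
{code}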


> Reuse lucene FieldType in createField flow during ingestion
> ---
>
> Key: SOLR-9579
> URL: https://issues.apache.org/jira/browse/SOLR-9579
> Project: Solr
>  Issue Type: Improvement
>  Components: Schema and Analysis
>Affects Versions: 6.x, master (7.0)
> Environment: This has been primarily tested on Windows 8 and Windows 
> Server 2012 R2
>Reporter: John Call
>Priority: Minor
>  Labels: gc, memory, reuse
> Fix For: 6.x, master (7.0)
>
> Attachments: SOLR-9579.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> During ingestion createField in FieldType is being called for each field on 
> each document. For the subclasses of FieldType without their own 
> implementation of createField the lucene version of FieldType is created to 
> be stored along with the value. However the lucene FieldType object is 
> identical when created from the same SchemaField. In testing ingestion of one 
> million rows with 22 fields each, we were creating 22 million lucene FieldType 
> objects when only 22 are needed. Solr should lazily initialize a lucene 
> FieldType for each SchemaField and reuse them for future ingestion. Not only 
> does this relieve memory usage but also relieves significant pressure on the 
> gc.
> There are also subclasses of Solr FieldType which create separate Lucene 
> FieldType for stored fields instead of reusing the static in StoredField.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9542) Kerberos delegation tokens requires missing Jackson library

2016-09-30 Thread Timothy M. Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15537247#comment-15537247
 ] 

Timothy M. Rodriguez commented on SOLR-9542:


Not sure it makes sense to introduce a Jackson dependency here. I'm conflicted 
on how big of an issue this is though.  It's a really old version of jackson 
since it depends on the org.codehaus version.  On the other hand, it's probably 
less likely to conflict as such.

> Kerberos delegation tokens requires missing Jackson library
> ---
>
> Key: SOLR-9542
> URL: https://issues.apache.org/jira/browse/SOLR-9542
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-9542.patch
>
>
> GET, RENEW or CANCEL operations for the delegation tokens support require 
> the Solr server to have the old Jackson added as a dependency.
> Steps to reproduce the problem:
> 1) Configure Solr to use delegation tokens
> 2) Start Solr
> 3) Use a SolrJ application to get a delegation token.
> The server throws the following:
> {code}
> java.lang.NoClassDefFoundError: org/codehaus/jackson/map/ObjectMapper
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.managementOperation(DelegationTokenAuthenticationHandler.java:279)
> at 
> org.apache.solr.security.KerberosPlugin$RequestContinuesRecorderAuthenticationHandler.managementOperation(KerberosPlugin.java:566)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:514)
> at 
> org.apache.solr.security.DelegationTokenKerberosFilter.doFilter(DelegationTokenKerberosFilter.java:123)
> at 
> org.apache.solr.security.KerberosPlugin.doAuthenticate(KerberosPlugin.java:265)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.authenticateRequest(SolrDispatchFilter.java:318)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:518)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
> at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
> at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
> at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
> at java.lang.Thread.run(Thread.java:745)
> {code}






[jira] [Reopened] (SOLR-9475) Add install script support for CentOS

2016-09-30 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man reopened SOLR-9475:


NOTE: master commit was 
http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/a1bbc996



Slurping in all of the files like this seems like a very bad idea and should be 
rolled back...

{code}
+proc_version=`cat /etc/*-release 2>/dev/null`
{code}

...if for no other reason than how {{proc_version}} is used in the event of an 
OS we don't recognize...

{code}
echo -e "\nERROR: Your Linux distribution ($proc_version) not supported by this script!\nYou'll need to setup Solr as a service manually using the documentation provided in the Solr Reference Guide.\n" 1>&2
{code}



In general I would suggest that the correct behavior is to test, in order...

* {{lsb_release -i}}
** required by the "Linux Standard Base Core" spec (which AFAIK almost every 
Linux distro supports even if they don't get fully certified)
* {{uname -a}} 
** required by POSIX, and contains the distro name in most cases
* {{/proc/version}}
** should be on every machine running a Linux kernel, but is only required to 
include the kernel version, not the distro info
* {{cat /etc/*release}}

The key element is that we should not just look to see if these *exist* in 
that order, but actually test each against our list of pattern strings 
("Debian", "Red Hat", etc...) in sequence before moving on to the next one on 
the list.

For example: our current logic is (pseudocode)...

{code}
try {
  proc_version = `x`
} catch Error {
  try {
    proc_version = `y`
  } catch Error {
    try {
      proc_version = `z`
    }
  }
}
for (d : known_distros) {
  if (proc_version contains d) {
    return d
  }
}
{code}

Instead we should be doing...

{code}
for (cmd : { x, y, z }) {
  try {
    possibility = `cmd`
    for (d : known_distros) {
      if (possibility contains d) {
        return d
      }
    }
  } catch Error {
    // ignore and try the next command
  }
}
{code}

...that way we can test more reliable options (like `uname -a`) earlier in the 
list, but we can still proceed even if those commands exist & run but don't 
return the name of the distribution on some platforms (as with CentOS, 
apparently).

(i.e.: we should not try the sketchy/risky stuff first just because it's more 
likely we'll get a match.  We should test the stuff that's more well defined 
and well structured first, even if we know there are many systems where the 
command might exist but may not give us useful info.)




> Add install script support for CentOS
> -
>
> Key: SOLR-9475
> URL: https://issues.apache.org/jira/browse/SOLR-9475
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.2
> Environment: Centos 7
>Reporter: Nitin Surana
>Assignee: Jan Høydahl
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9475.patch, install_solr_service.sh
>
>
> [root@ns521582 tmp]# sudo ./install_solr_service.sh solr-6.2.0.tgz
> id: solr: no such user
> Creating new user: solr
> adduser: group '--disabled-password' does not exist
> Extracting solr-6.2.0.tgz to /opt
> Installing symlink /opt/solr -> /opt/solr-6.2.0 ...
> Installing /etc/init.d/solr script ...
> /etc/default/solr.in.sh already exist. Skipping install ...
> /var/solr/data/solr.xml already exists. Skipping install ...
> /var/solr/log4j.properties already exists. Skipping install ...
> chown: invalid spec: ‘solr:’
> ./install_solr_service.sh: line 322: update-rc.d: command not found
> id: solr: no such user
> User solr not found! Please create the solr user before running this script.
> id: solr: no such user
> User solr not found! Please create the solr user before running this script.
> Service solr installed.
> Reference - 
> http://stackoverflow.com/questions/39320647/unable-to-create-user-when-installing-solr-6-2-0-on-centos-7






[JENKINS] Lucene-Solr-6.x-Windows (64bit/jdk1.8.0_102) - Build # 486 - Still Unstable!

2016-09-30 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/486/
Java: 64bit/jdk1.8.0_102 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.spelling.SpellCheckCollatorTest.testEstimatedHitCounts

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([D55C79F758DD183F:E4E7C7C2FDE208EF]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:813)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:780)
at 
org.apache.solr.spelling.SpellCheckCollatorTest.testEstimatedHitCounts(SpellCheckCollatorTest.java:562)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//lst[@name='spellcheck']/lst[@name='collations']/lst[@name='collation']/int[@name='hits'
 and 6 <= . and . <= 10]
xml response was: 


[jira] [Commented] (SOLR-9527) Solr RESTORE api doesn't distribute the replicas uniformly

2016-09-30 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15537223#comment-15537223
 ] 

Hrishikesh Gadre commented on SOLR-9527:


[~varunthacker] [~dsmiley] Can you review this patch please? 

> Solr RESTORE api doesn't distribute the replicas uniformly 
> ---
>
> Key: SOLR-9527
> URL: https://issues.apache.org/jira/browse/SOLR-9527
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.1
>Reporter: Hrishikesh Gadre
> Attachments: SOLR-9527.patch, SOLR-9527.patch, SOLR-9527.patch
>
>
> Please refer to this email thread for details,
> http://lucene.markmail.org/message/ycun4x5nx7lwj5sk?q=solr+list:org%2Eapache%2Elucene%2Esolr-user+order:date-backward=1






[jira] [Created] (SOLR-9585) Solr gc log filename should include port number

2016-09-30 Thread Michael Braun (JIRA)
Michael Braun created SOLR-9585:
---

 Summary: Solr gc log filename should include port number
 Key: SOLR-9585
 URL: https://issues.apache.org/jira/browse/SOLR-9585
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Michael Braun
Priority: Minor


In our setup, we have two Solr applications running on the same box on 
different ports, sharing the same distribution folder. However, both 
instances log garbage collection to the same file, as shown by these two parts 
of the solr bash shell script (this is from master):

{code}
if [ -f "$SOLR_LOGS_DIR/solr_gc.log" ]; then
  if $verbose ; then
    echo "Backing up $SOLR_LOGS_DIR/solr_gc.log"
  fi
  mv "$SOLR_LOGS_DIR/solr_gc.log" "$SOLR_LOGS_DIR/solr_gc_log_$(date +"%Y%m%d_%H%M")"
fi
{code}

{code}
# if verbose gc logging enabled, setup the location of the log file
if [ "$GC_LOG_OPTS" != "" ]; then
  gc_log_flag="-Xloggc"
  if [ "$JAVA_VENDOR" == "IBM J9" ]; then
    gc_log_flag="-Xverbosegclog"
  fi
  GC_LOG_OPTS=($GC_LOG_OPTS "$gc_log_flag:$SOLR_LOGS_DIR/solr_gc.log")
else
  GC_LOG_OPTS=()
fi
{code}

I'm proposing appending the $SOLR_PORT value to the log file name (perhaps 
only if non-default) so that in our case both logs can coexist. I'm happy to 
provide a patch assuming this direction is desired.






Re: SolrJ 6.2 now depends on Google-Guava and Jackson WTF?!

2016-09-30 Thread Hrishikesh Gadre
Hi Kevin,

SOLR-9542 added a jackson dependency for solr-core and not for solrj, although
there is a bit of discussion regarding solrj as well.

As I mentioned in SOLR-9542, we can remove the guava dependency in solrj by
getting rid of the annotation added for a test method. Also, the jackson
dependency can be replaced with noggit.

Thanks
Hrishikesh


On Fri, Sep 30, 2016 at 1:41 PM, Kevin Risden 
wrote:

> I know that the Jackson library was added here: https://issues.apache.org/
> jira/browse/SOLR-9542
>
> Kevin Risden
>
> On Fri, Sep 30, 2016 at 3:38 PM, Timothy Rodriguez (BLOOMBERG/ 120 PARK) <
> trodrigue...@bloomberg.net> wrote:
>
>> This has the potential to cause conflicts for a lot of folks' builds.
>> Especially since these are libraries users of the client library may have
>> imported themselves.
>>
>> From: dev@lucene.apache.org At: 09/30/16 16:30:45
>> To: dev@lucene.apache.org
>> Subject: Re: SolrJ 6.2 now depends on Google-Guava and Jackson WTF?!
>>
>> Furthermore, changes to SolrJ dependencies should be noted clearly in
>> CHANGES.txt (e.g. new SolrJ dependency XYZ for purpose ___, or updated
>> SolrJ dependency XYZ to 1.2.3).  I see no reference to this in CHANGES.txt.
>>
>> On Fri, Sep 30, 2016 at 4:24 PM David Smiley 
>> wrote:
>>
>>> I was updating a project of mine today from SolrJ 6.0.0 to 6.2.1 and ran
>>> into a classpath incompatibility problem pertaining to Guava.  I execute
>>> "mvn dependency:tree" to see what's going on and I see a huge WTF -- SolrJ
>>> depends on Guava!  Since when?!  6.2.0 apparently and in this issue --
>>> https://issues.apache.org/jira/browse/SOLR-9200  Oh wow it depends on
>>> Jackson now too!
>>>
>>> Sorry, this is not okay and I feel strongly about this.  Very deliberate
>>> care should be taken to our SolrJ dependencies since they are used in many
>>> environments, and dependencies there add a burden on anyone using Solr.
>>>  **Adding SolrJ dependencies should be announced**; either in their own
>>> issue with appropriate title or noted in the dev list (not a JIRA issue) so
>>> as to be noticed.  Can we agree to do this from now on?
>>>
>>> Fortunately, it *appears* that the usage is pretty minimal?  Greg Chanan
>>> / Steve Rowe, it appears the Guava dependency is just a couple import
>>> statements for annotations.  Is that it?  I manually excluded guava from my
>>> SolrJ dependency in the pom.xml along with things like Woodstox which I
>>> always exclude.  I'm not sure yet about the scope of Jackson; we haven't
>>> needed that to date as we've got Noggit.
>>>
>>> ~ David
>>> --
>>> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
>>> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
>>> http://www.solrenterprisesearchserver.com
>>>
>> --
>> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
>> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
>> http://www.solrenterprisesearchserver.com
>>
>>
>>
>


[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_102) - Build # 1841 - Unstable!

2016-09-30 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1841/
Java: 64bit/jdk1.8.0_102 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDynamicLoading

Error Message:
Could not get expected value  'X val' for path 'x' full output: {   
"responseHeader":{ "status":0, "QTime":0},   "params":{"wt":"json"},   
"context":{ "webapp":"", "path":"/test1", "httpMethod":"GET"},   
"class":"org.apache.solr.core.BlobStoreTestRequestHandler",   "x":null},  from 
server:  null

Stack Trace:
java.lang.AssertionError: Could not get expected value  'X val' for path 'x' 
full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "params":{"wt":"json"},
  "context":{
"webapp":"",
"path":"/test1",
"httpMethod":"GET"},
  "class":"org.apache.solr.core.BlobStoreTestRequestHandler",
  "x":null},  from server:  null
at 
__randomizedtesting.SeedInfo.seed([4A063D93B919CDED:924B10C44EC4684D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:535)
at 
org.apache.solr.core.TestDynamicLoading.testDynamicLoading(TestDynamicLoading.java:232)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

Re: SolrJ 6.2 now depends on Google-Guava and Jackson WTF?!

2016-09-30 Thread Chris Hostetter

: Sorry, this is not okay and I feel strongly about this.  Very deliberate
: care should be taken to our SolrJ dependencies since they are used in many
: environments, and dependencies there add a burden on anyone using Solr.

+1

We should really be striving to make solrj completely devoid of 
dependencies.




-Hoss
http://www.lucidworks.com/




[jira] [Commented] (LUCENE-2605) queryparser parses on whitespace

2016-09-30 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15537149#comment-15537149
 ] 

Steve Rowe commented on LUCENE-2605:


{quote}
I'd like to try out the split-on-whitespace=false without coding, but by 
changing a configuration in solrconfig.xml.

Is that possible?
{quote}

Not yet - Solr's "Lucene" (aka standard) query parser is a fork of the classic 
Lucene query parser, which was modified on this issue to enable not splitting 
on whitespace - Solr doesn't have this support yet.  There is a separate 
issue, SOLR-9185, to apply the same changes to Solr.  I'm working on it.

> queryparser parses on whitespace
> 
>
> Key: LUCENE-2605
> URL: https://issues.apache.org/jira/browse/LUCENE-2605
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Reporter: Robert Muir
>Assignee: Steve Rowe
> Fix For: 6.2
>
> Attachments: LUCENE-2605-dont-split-by-default.patch, 
> LUCENE-2605.patch, LUCENE-2605.patch, LUCENE-2605.patch, LUCENE-2605.patch, 
> LUCENE-2605.patch, LUCENE-2605.patch
>
>
> The queryparser parses input on whitespace, and sends each whitespace 
> separated term to its own independent token stream.
> This breaks the following at query-time, because they can't see across 
> whitespace boundaries:
> * n-gram analysis
> * shingles 
> * synonyms (especially multi-word for whitespace-separated languages)
> * languages where a 'word' can contain whitespace (e.g. vietnamese)
> It's also rather unexpected, as users think their 
> charfilters/tokenizers/tokenfilters will do the same thing at index and 
> query time, but in many cases they can't. Instead, preferably the 
> queryparser would parse around only real 'operators'.






[jira] [Commented] (SOLR-9579) Reuse lucene FieldType in createField flow during ingestion

2016-09-30 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15537129#comment-15537129
 ] 

Hoss Man commented on SOLR-9579:


This seems like a slam dunk to me ... there's no reason to recreate objects 
over and over from the same SchemaField object, and lucene.FieldType objects 
are deliberately small specifically so they can be reused over and over in 
long-running applications.

(but yes, some actual benchmark numbers would be nice)

My main question about the patch is why it only creates the lucene.FieldType 
lazily instead of doing it in the SchemaField constructor?
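
For comparison, a hedged sketch of the eager alternative (again with 
illustrative names, not actual Solr code): pay the small allocation once in 
the constructor and share the frozen instance thereafter.

{code:java}
// Hypothetical sketch of eager initialization; not the actual patch.
public class SchemaFieldSketch {
  private final org.apache.lucene.document.FieldType luceneType;

  public SchemaFieldSketch(boolean stored, boolean tokenized) {
    org.apache.lucene.document.FieldType ft =
        new org.apache.lucene.document.FieldType();
    ft.setStored(stored);
    ft.setTokenized(tokenized);
    ft.freeze();              // immutable from here on
    this.luceneType = ft;     // one instance reused for every document
  }

  public org.apache.lucene.document.FieldType luceneType() {
    return luceneType;
  }
}
{code}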

> Reuse lucene FieldType in createField flow during ingestion
> ---
>
> Key: SOLR-9579
> URL: https://issues.apache.org/jira/browse/SOLR-9579
> Project: Solr
>  Issue Type: Improvement
>  Components: Schema and Analysis
>Affects Versions: 6.x, master (7.0)
> Environment: This has been primarily tested on Windows 8 and Windows 
> Server 2012 R2
>Reporter: John Call
>Priority: Minor
>  Labels: gc, memory, reuse
> Fix For: 6.x, master (7.0)
>
> Attachments: SOLR-9579.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> During ingestion, createField in FieldType is called for each field on each 
> document. For the subclasses of FieldType without their own implementation 
> of createField, the Lucene version of FieldType is created to be stored 
> along with the value. However, the Lucene FieldType object is identical when 
> created from the same SchemaField. In testing ingestion of one million rows 
> with 22 fields each, we were creating 22 million Lucene FieldType objects 
> when only 22 are needed. Solr should lazily initialize a Lucene FieldType 
> for each SchemaField and reuse it for future ingestion. Not only does this 
> reduce memory usage, it also relieves significant pressure on the GC.
> There are also subclasses of Solr FieldType which create a separate Lucene 
> FieldType for stored fields instead of reusing the static in StoredField.






Re: SolrJ 6.2 now depends on Google-Guava and Jackson WTF?!

2016-09-30 Thread Timothy Rodriguez (BLOOMBERG/ 120 PARK)
I'm conflicted on how big of an issue that is.  It's a really old version of 
jackson since it depends on the org.codehaus version.  On the other hand, it's 
probably less likely to conflict as such.

Guava is definitely a much more likely potential source of conflict, though.

From: dev@lucene.apache.org At: 09/30/16 16:41:42
To: Timothy Rodriguez (BLOOMBERG/ 120 PARK), dev@lucene.apache.org
Subject: Re: SolrJ 6.2 now depends on Google-Guava and Jackson WTF?!

I know that the Jackson library was added here: 
https://issues.apache.org/jira/browse/SOLR-9542

Kevin Risden
 
On Fri, Sep 30, 2016 at 3:38 PM, Timothy Rodriguez (BLOOMBERG/ 120 PARK) 
 wrote:

This has the potential to cause conflicts for a lot of folks' builds.  Especially 
since these are libraries users of the client library may have imported 
themselves.

From: dev@lucene.apache.org At: 09/30/16 16:30:45
To: dev@lucene.apache.org
Subject: Re: SolrJ 6.2 now depends on Google-Guava and Jackson WTF?!

Furthermore, changes to SolrJ dependencies should be noted clearly in 
CHANGES.txt (e.g. new SolrJ dependency XYZ for purpose ___, or updated SolrJ 
dependency XYZ to 1.2.3).  I see no reference to this in CHANGES.txt.

On Fri, Sep 30, 2016 at 4:24 PM David Smiley  wrote:

I was updating a project of mine today from SolrJ 6.0.0 to 6.2.1 and ran into a 
classpath incompatibility problem pertaining to Guava.  I execute "mvn 
dependency:tree" to see what's going on and I see a huge WTF -- SolrJ depends 
on Guava!  Since when?!  6.2.0 apparently and in this issue -- 
https://issues.apache.org/jira/browse/SOLR-9200  Oh wow it depends on Jackson 
now too!

Sorry, this is not okay and I feel strongly about this.  Very deliberate care 
should be taken to our SolrJ dependencies since they are used in many 
environments, and dependencies there add a burden on anyone using Solr.  
**Adding SolrJ dependencies should be announced**; either in their own issue 
with appropriate title or noted in the dev list (not a JIRA issue) so as to be 
noticed.  Can we agree to do this from now on?

Fortunately, it *appears* that the usage is pretty minimal?  Greg Chanan / 
Steve Rowe, it appears the Guava dependency is just a couple import statements 
for annotations.  Is that it?  I manually excluded guava from my SolrJ 
dependency in the pom.xml along with things like Woodstox which I always 
exclude.  I'm not sure yet about the scope of Jackson; we haven't needed that 
to date as we've got Noggit.

~ David
-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book: 
http://www.solrenterprisesearchserver.com
-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book: 
http://www.solrenterprisesearchserver.com




[jira] [Commented] (LUCENE-7438) UnifiedHighlighter

2016-09-30 Thread Timothy M. Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15537117#comment-15537117
 ] 

Timothy M. Rodriguez commented on LUCENE-7438:
--

After further consideration, it seems best to leave some of the classes common 
to the Postings and Unified highlighters separate.  If we were to use the same 
classes they'd ideally move to a common sub-package that both could share, and 
this would introduce unneeded change and hurt potential compatibility for any 
users of those classes.  Keeping them separate also allows for a possible 
improvement to the method highlightFieldsAsObjects, which internally creates a 
Map that is promptly thrown away again in the highlight methods.  I briefly 
investigated changing this to return the internal Object[][] array and avoid 
the extra Map allocation, but this creates some awkwardness since the 
Object[][] array sorts the input fields before filling the arrays, which would 
make the API somewhat of a trap for callers.  This undesired behavior is likely 
why the map is being created.  One way to fix this is to generify 
PassageFormatter over its output type, which would allow for a 
PassageFormatter<String> in the case of the DefaultPassageFormatter.  However, 
this is a rather involved change that could ultimately result in the 
UnifiedHighlighter itself having a generic type, and it was not clear that 
muddying the waters with that right now was a good idea.  Keeping these 
classes separate will allow for an attempt at that in the future.

In the meantime, I've also pushed a commit to reduce the visibility of 
MultiTermHighlighting to package protected.  As it stands, I think this patch 
is ready.
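
For illustration only, a minimal sketch of what that generification might look 
like (hypothetical types and names, not Lucene's actual highlighter API):

{code:java}
// Hypothetical sketch: parameterize the formatter over its output type, so a
// plain-text formatter is a GenericPassageFormatter<String> while other
// implementations can return richer objects.
abstract class GenericPassageFormatter<T> {
  /** Render the matched passages of content (given as offset pairs) into a T. */
  abstract T format(int[] passageStarts, int[] passageEnds, String content);
}

class StringPassageFormatter extends GenericPassageFormatter<String> {
  @Override
  String format(int[] passageStarts, int[] passageEnds, String content) {
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < passageStarts.length; i++) {
      if (i > 0) sb.append(" ... ");
      sb.append(content, passageStarts[i], passageEnds[i]); // raw snippet
    }
    return sb.toString();
  }
}
{code}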

> UnifiedHighlighter
> --
>
> Key: LUCENE-7438
> URL: https://issues.apache.org/jira/browse/LUCENE-7438
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/highlighter
>Affects Versions: 6.2
>Reporter: Timothy M. Rodriguez
>Assignee: David Smiley
> Attachments: LUCENE_7438_UH_benchmark.patch
>
>
> The UnifiedHighlighter is an evolution of the PostingsHighlighter that is 
> able to highlight using offsets in either postings, term vectors, or from 
> analysis (a TokenStream). Lucene’s existing highlighters are mostly 
> demarcated along offset source lines, whereas here it is unified -- hence 
> this proposed name. In this highlighter, the offset source strategy is 
> separated from the core highlighting functionality. The UnifiedHighlighter 
> further improves on the PostingsHighlighter’s design by supporting accurate 
> phrase highlighting using an approach similar to the standard highlighter’s 
> WeightedSpanTermExtractor. The next major improvement is a hybrid offset 
> source strategy that utilizes postings and “light” term vectors (i.e. just 
> the terms) for highlighting multi-term queries (wildcards) without resorting 
> to analysis. Phrase highlighting and wildcard highlighting can both be 
> disabled if you’d rather highlight a little faster albeit not as accurately 
> reflecting the query.
> We’ve benchmarked an earlier version of this highlighter comparing it to the 
> other highlighters and the results were exciting! It’s tempting to share 
> those results but it’s definitely due for another benchmark, so we’ll work on 
> that. Performance was the main motivator for creating the UnifiedHighlighter, 
> as the standard Highlighter (the only one meeting Bloomberg Law’s accuracy 
> requirements) wasn’t fast enough, even with term vectors along with several 
> improvements we contributed back, and even after we forked it to highlight in 
> multiple threads.






[jira] [Commented] (LUCENE-2605) queryparser parses on whitespace

2016-09-30 Thread Dion Olsthoorn (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15537020#comment-15537020
 ] 

Dion Olsthoorn commented on LUCENE-2605:


Thanks for the quick response, Steve. I overlooked that your last patch was 
for master only, not for 6.x.
I'd like to try out the split-on-whitespace=false without coding, but by 
changing a configuration in solrconfig.xml.

Is that possible?

> queryparser parses on whitespace
> 
>
> Key: LUCENE-2605
> URL: https://issues.apache.org/jira/browse/LUCENE-2605
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Reporter: Robert Muir
>Assignee: Steve Rowe
> Fix For: 6.2
>
> Attachments: LUCENE-2605-dont-split-by-default.patch, 
> LUCENE-2605.patch, LUCENE-2605.patch, LUCENE-2605.patch, LUCENE-2605.patch, 
> LUCENE-2605.patch, LUCENE-2605.patch
>
>
> The queryparser parses input on whitespace, and sends each whitespace 
> separated term to its own independent token stream.
> This breaks the following at query-time, because they can't see across 
> whitespace boundaries:
> * n-gram analysis
> * shingles 
> * synonyms (especially multi-word for whitespace-separated languages)
> * languages where a 'word' can contain whitespace (e.g. vietnamese)
> It's also rather unexpected, as users think their 
> charfilters/tokenizers/tokenfilters will do the same thing at index and 
> query time, but in many cases they can't. Instead, preferably the 
> queryparser would parse around only real 'operators'.






Re: SolrJ 6.2 now depends on Google-Guava and Jackson WTF?!

2016-09-30 Thread Kevin Risden
I know that the Jackson library was added here:
https://issues.apache.org/jira/browse/SOLR-9542

Kevin Risden

On Fri, Sep 30, 2016 at 3:38 PM, Timothy Rodriguez (BLOOMBERG/ 120 PARK) <
trodrigue...@bloomberg.net> wrote:

> This has the potential to cause conflicts for a lot of folks' builds.
> Especially since these are libraries users of the client library may have
> imported themselves.
>
> From: dev@lucene.apache.org At: 09/30/16 16:30:45
> To: dev@lucene.apache.org
> Subject: Re: SolrJ 6.2 now depends on Google-Guava and Jackson WTF?!
>
> Furthermore, changes to SolrJ dependencies should be noted clearly in
> CHANGES.txt (e.g. new SolrJ dependency XYZ for purpose ___, or updated
> SolrJ dependency XYZ to 1.2.3).  I see no reference to this in CHANGES.txt.
>
> On Fri, Sep 30, 2016 at 4:24 PM David Smiley 
> wrote:
>
>> I was updating a project of mine today from SolrJ 6.0.0 to 6.2.1 and ran
>> into a classpath incompatibility problem pertaining to Guava.  I execute
>> "mvn dependency:tree" to see what's going on and I see a huge WTF -- SolrJ
>> depends on Guava!  Since when?!  6.2.0 apparently and in this issue --
>> https://issues.apache.org/jira/browse/SOLR-9200  Oh wow it depends on
>> Jackson now too!
>>
>> Sorry, this is not okay and I feel strongly about this.  Very deliberate
>> care should be taken to our SolrJ dependencies since they are used in many
>> environments, and dependencies there add a burden on anyone using Solr.
>>  **Adding SolrJ dependencies should be announced**; either in their own
>> issue with appropriate title or noted in the dev list (not a JIRA issue) so
>> as to be noticed.  Can we agree to do this from now on?
>>
>> Fortunately, it *appears* that the usage is pretty minimal?  Greg Chanan
>> / Steve Rowe, it appears the Guava dependency is just a couple import
>> statements for annotations.  Is that it?  I manually excluded guava from my
>> SolrJ dependency in the pom.xml along with things like Woodstox which I
>> always exclude.  I'm not sure yet about the scope of Jackson; we haven't
>> needed that to date as we've got Noggit.
>>
>> ~ David
>> --
>> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
>> LinkedIn: http://linkedin.com/in/davidwsmiley | Book: http://www.
>> solrenterprisesearchserver.com
>>
> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book: http://www.
> solrenterprisesearchserver.com
>
>
>


[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_102) - Build # 17939 - Failure!

2016-09-30 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17939/
Java: 32bit/jdk1.8.0_102 -client -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 63855 lines...]
-ecj-javadoc-lint-src:
[mkdir] Created dir: /tmp/ecj1841187814
 [ecj-lint] Compiling 988 source files to /tmp/ecj1841187814
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 2. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 3. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 4. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 216)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 5. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 216)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 6. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 216)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 7. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/core/HdfsDirectoryFactory.java
 (at line 227)
 [ecj-lint] dir = new BlockDirectory(path, hdfsDir, cache, null, 
blockCacheReadEnabled, false, cacheMerges, cacheReadOnce);
 [ecj-lint] 
^^
 [ecj-lint] Resource leak: 'dir' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 8. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/handler/AnalysisRequestHandlerBase.java
 (at line 120)
 [ecj-lint] reader = cfiltfac.create(reader);
 [ecj-lint] 
 [ecj-lint] Resource leak: 'reader' is not closed at this location
 [ecj-lint] --
 [ecj-lint] 9. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/handler/AnalysisRequestHandlerBase.java
 (at line 144)
 [ecj-lint] return namedList;
 [ecj-lint] ^
 [ecj-lint] Resource leak: 'listBasedTokenStream' is not closed at this location
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 10. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/handler/ClassifyStream.java
 (at line 88)
 [ecj-lint] SolrCore solrCore = (SolrCore) solrCoreObj;
 [ecj-lint]  
 [ecj-lint] Resource leak: 'solrCore' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 11. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/handler/ReplicationHandler.java
 (at line 1252)
 [ecj-lint] DirectoryReader reader = s==null ? null : 
s.get().getIndexReader();
 [ecj-lint] ^^
 [ecj-lint] Resource leak: 'reader' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 12. WARNING in 

Re: SolrJ 6.2 now depends on Google-Guava and Jackson WTF?!

2016-09-30 Thread Timothy Rodriguez (BLOOMBERG/ 120 PARK)
This has the potential to cause conflicts for a lot of folks' builds.  Especially 
since these are libraries users of the client library may have imported 
themselves.

From: dev@lucene.apache.org At: 09/30/16 16:30:45
To: dev@lucene.apache.org
Subject: Re: SolrJ 6.2 now depends on Google-Guava and Jackson WTF?!

Furthermore, changes to SolrJ dependencies should be noted clearly in 
CHANGES.txt (e.g. new SolrJ dependency XYZ for purpose ___, or updated SolrJ 
dependency XYZ to 1.2.3).  I see no reference to this in CHANGES.txt.

On Fri, Sep 30, 2016 at 4:24 PM David Smiley  wrote:

I was updating a project of mine today from SolrJ 6.0.0 to 6.2.1 and ran into a 
classpath incompatibility problem pertaining to Guava.  I execute "mvn 
dependency:tree" to see what's going on and I see a huge WTF -- SolrJ depends 
on Guava!  Since when?!  6.2.0 apparently and in this issue -- 
https://issues.apache.org/jira/browse/SOLR-9200  Oh wow it depends on Jackson 
now too!

Sorry, this is not okay and I feel strongly about this.  Very deliberate care 
should be taken to our SolrJ dependencies since they are used in many 
environments, and dependencies there add a burden on anyone using Solr.  
**Adding SolrJ dependencies should be announced**; either in their own issue 
with appropriate title or noted in the dev list (not a JIRA issue) so as to be 
noticed.  Can we agree to do this from now on?

Fortunately, it *appears* that the usage is pretty minimal?  Greg Chanan / 
Steve Rowe, it appears the Guava dependency is just a couple import statements 
for annotations.  Is that it?  I manually excluded guava from my SolrJ 
dependency in the pom.xml along with things like Woodstox which I always 
exclude.  I'm not sure yet about the scope of Jackson; we haven't needed that 
to date as we've got Noggit.

~ David
-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book: 
http://www.solrenterprisesearchserver.com
-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book: 
http://www.solrenterprisesearchserver.com



[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_102) - Build # 6149 - Failure!

2016-09-30 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6149/
Java: 64bit/jdk1.8.0_102 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.handler.TestReplicationHandler.doTestStressReplication

Error Message:
[index.20161001025002282, index.20161001025003446, index.properties, 
replication.properties, snapshot_metadata] expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: [index.20161001025002282, index.20161001025003446, 
index.properties, replication.properties, snapshot_metadata] expected:<1> but 
was:<2>
at 
__randomizedtesting.SeedInfo.seed([857D36AA1A19AB40:5ED6366C1F31C2F3]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:907)
at 
org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:874)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[jira] [Commented] (SOLR-9577) SolrConfig edit operations should not need to reload core

2016-09-30 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15537000#comment-15537000
 ] 

Erick Erickson commented on SOLR-9577:
--

I have to ask, "Is this worth the effort"? I'd expect people to be changing the 
config mostly at the beginning of a project, especially during development with 
a "long tail" during maintenance windows.

Is the complexity/risk-of-getting-it-wrong really worth the benefit?

It certainly may be, but just askin'..

Probably a distinct question from how well-behaved components should, well, 
behave

> SolrConfig edit operations should not need to reload core
> -
>
> Key: SOLR-9577
> URL: https://issues.apache.org/jira/browse/SOLR-9577
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>
> Every single change done to solrconfig.xml/configoverlay.json leads to a core 
> reload. This is very bad for performance. 
> Ideally, if I update/add/delete a component, only that one component needs to 
> be reloaded.
> How to do this?
> Every component in Solr should be able to implement an interface
> {code:java}
> interface Reloadable {
> /** When the configuration of this component is changed the core invokes this 
> method, with the new configuration
> */
> void reload(PluginInfo info);
> /** After a reload() is called on any component in that core , this is invoked
> */
> default void postConfigChange(SolrCore core){}
> }
> {code}
> if the component implements this interface, any change to its configuration 
> will result in a callback to this method.
> if the component does not implement this interface, we should unload the 
> component and call any close hooks registered from the inform() method. To 
> make this work, we will have to disable registering close hooks from anywhere 
> else. After unloading the component, a new one is created with the new 
> configuration.
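
For illustration, a hedged sketch of how a component might implement the 
proposed interface (the interface is only the proposal quoted above; the 
component and attribute names here are hypothetical):

{code:java}
// Hypothetical component opting in to the proposed contract; not actual Solr
// code. Assumes PluginInfo exposes its XML attributes as a Map<String,String>
// field named "attributes".
class MyCachingComponent implements Reloadable {
  private volatile int maxSize = 1024;

  @Override
  public void reload(PluginInfo info) {
    // invoked in place of a full core reload when only this component's
    // configuration changed
    String sz = info.attributes.get("maxSize");
    if (sz != null) {
      maxSize = Integer.parseInt(sz);
    }
  }
}
{code}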






Re: SolrJ 6.2 now depends on Google-Guava and Jackson WTF?!

2016-09-30 Thread David Smiley
Furthermore, changes to SolrJ dependencies should be noted clearly in
CHANGES.txt (e.g. new SolrJ dependency XYZ for purpose ___, or updated
SolrJ dependency XYZ to 1.2.3).  I see no reference to this in CHANGES.txt.

On Fri, Sep 30, 2016 at 4:24 PM David Smiley 
wrote:

> I was updating a project of mine today from SolrJ 6.0.0 to 6.2.1 and ran
> into a classpath incompatibility problem pertaining to Guava.  I execute
> "mvn dependency:tree" to see what's going on and I see a huge WTF -- SolrJ
> depends on Guava!  Since when?!  6.2.0 apparently and in this issue --
> https://issues.apache.org/jira/browse/SOLR-9200  Oh wow it depends on
> Jackson now too!
>
> Sorry, this is not okay and I feel strongly about this.  Very deliberate
> care should be taken to our SolrJ dependencies since they are used in many
> environments, and dependencies there add a burden on anyone using Solr.
>  **Adding SolrJ dependencies should be announced**; either in their own
> issue with appropriate title or noted in the dev list (not a JIRA issue) so
> as to be noticed.  Can we agree to do this from now on?
>
> Fortunately, it *appears* that the usage is pretty minimal?  Greg Chanan /
> Steve Rowe, it appears the Guava dependency is just a couple import
> statements for annotations.  Is that it?  I manually excluded guava from my
> SolrJ dependency in the pom.xml along with things like Woodstox which I
> always exclude.  I'm not sure yet about the scope of Jackson; we haven't
> needed that to date as we've got Noggit.
>
> ~ David
> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com
>
-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


SolrJ 6.2 now depends on Google-Guava and Jackson WTF?!

2016-09-30 Thread David Smiley
I was updating a project of mine today from SolrJ 6.0.0 to 6.2.1 and ran
into a classpath incompatibility problem pertaining to Guava.  I execute
"mvn dependency:tree" to see what's going on and I see a huge WTF -- SolrJ
depends on Guava!  Since when?!  6.2.0 apparently and in this issue --
https://issues.apache.org/jira/browse/SOLR-9200  Oh wow it depends on
Jackson now too!

Sorry, this is not okay and I feel strongly about this.  Very deliberate
care should be taken to our SolrJ dependencies since they are used in many
environments, and dependencies there add a burden on anyone using Solr.
 **Adding SolrJ dependencies should be announced**; either in their own
issue with appropriate title or noted in the dev list (not a JIRA issue) so
as to be noticed.  Can we agree to do this from now on?

Fortunately, it *appears* that the usage is pretty minimal?  Greg Chanan /
Steve Rowe, it appears the Guava dependency is just a couple import
statements for annotations.  Is that it?  I manually excluded guava from my
SolrJ dependency in the pom.xml along with things like Woodstox which I
always exclude.  I'm not sure yet about the scope of Jackson; we haven't
needed that to date as we've got Noggit.

~ David
-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[jira] [Updated] (SOLR-9554) Multiple ManagedIndexSchemaFactory upgrades running simultaneously can clash, causing cores not to load

2016-09-30 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-9554:
---
Attachment: SOLR-9554-test.patch

[^SOLR-9554-test.patch] makes the test as simple as possible. There is no 
refactoring for {{ManagedIndexSchemaFactory}} yet. TBC. 

The question is whether {{SuspendingZkClient}} reflects best testing practices. 
Can it be generalized and reused somewhere else? 
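
For reference, a minimal sketch of the suspending-client testing idea (names are 
hypothetical, not the attached patch): wrap the resource and park a thread at a 
chosen operation until the test releases it, so the racy interleaving becomes 
deterministic.

{code}
import java.util.concurrent.CountDownLatch;

// Hypothetical stand-in for a SuspendingZkClient-style wrapper: the test
// decides exactly when the wrapped operation may proceed.
class SuspendingWrapper {
  private final CountDownLatch gate = new CountDownLatch(1);

  // Called by the code under test; blocks at the injected suspension point.
  void writeSchema(String schema) throws InterruptedException {
    gate.await();
    // ... delegate to the real client here ...
  }

  // Called by the test once the competing thread is in position.
  void resume() {
    gate.countDown();
  }
}
{code}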

> Multiple ManagedIndexSchemaFactory upgrades running simultaneously can clash, 
> causing cores not to load
> ---
>
> Key: SOLR-9554
> URL: https://issues.apache.org/jira/browse/SOLR-9554
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
> Attachments: SOLR-9554-just-fix.patch, SOLR-9554-test.patch, 
> SOLR-9554.patch, SOLR-9554.patch, SOLR-9554.patch, SOLR-9554.patch, 
> SOLR-9554.patch
>
>
> If a collection is created using a configset with a ManagedSchemaFactory but 
> no managed-schema file, then multiple cores may try and convert the schema 
> file simultaneously.






[jira] [Updated] (LUCENE-7472) MultiFieldQueryParser.getFieldQuery() drops queries that are neither BooleanQuery nor TermQuery

2016-09-30 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-7472:
---
Attachment: LUCENE-7472.patch

Patch with a fix that treats all non-BooleanQuery queries opaquely (like 
TermQuery), and adds a test for the SynonymQuery case that fails without the 
patch and succeeds with it.
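
A rough sketch of that idea (inferred from the description above, not the 
committed patch): only BooleanQuery is unpacked into clauses; any other Query 
type counts as one opaque clause instead of being dropped.

{code}
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;

final class OpaqueQueryHandling {
  // A BooleanQuery contributes one term per clause; everything else
  // (TermQuery, PhraseQuery, MultiPhraseQuery, SynonymQuery, ...) is kept
  // whole as a single clause rather than yielding maxTerms=0 and a null query.
  static int maxTerms(Query q) {
    return (q instanceof BooleanQuery) ? ((BooleanQuery) q).clauses().size() : 1;
  }
}
{code}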


> MultiFieldQueryParser.getFieldQuery() drops queries that are neither 
> BooleanQuery nor TermQuery 
> 
>
> Key: LUCENE-7472
> URL: https://issues.apache.org/jira/browse/LUCENE-7472
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Steve Rowe
> Attachments: LUCENE-7472.patch
>
>
> From 
> [http://mail-archives.apache.org/mod_mbox/lucene-java-user/201609.mbox/%3c944985a6ac27425681bd27abe9d90...@ska-wn-e132.ptvag.ptv.de%3e],
>  Oliver Kaleske reports:
> {quote}
> Hi,
> in updating Lucene from 6.1.0 to 6.2.0 I came across the following:
> We have a subclass of MultiFieldQueryParser (MFQP) for creating a custom type 
> of Query, which calls getFieldQuery() on its base class (MFQP).
> For each of its search fields, this method has a Query created by calling 
> getFieldQuery() on QueryParserBase.
> Ultimately, we wind up in QueryBuilder's createFieldQuery() method, which 
> depending on the number of tokens (etc.) decides what type of Query to 
> return: a TermQuery, BooleanQuery, PhraseQuery, or MultiPhraseQuery.
> Back in MFQP.getFieldQuery(), a variable maxTerms is determined depending on 
> the type of Query returned: for a TermQuery or a BooleanQuery, its value will 
> in general be nonzero, clauses are created, and a non-null Query is returned.
> However, other Query subclasses result in maxTerms=0, an empty list of 
> clauses, and finally null is returned.
> To me, this seems like a bug, but I may well be missing something. The 
> comment "// happens for stopwords" on the return null statement, however, 
> seems to suggest that Query types other than TermQuery and BooleanQuery were 
> not considered properly here.
> I should point out that our custom MFQP subclass so far does some rather 
> unsophisticated tokenization before calling getFieldQuery() on each token, so 
> characters like '*' may still slip through. So perhaps with proper 
> tokenization, it is guaranteed that only TermQuery and BooleanQuery can come 
> out of the chain of getFieldQuery() calls, and not handling 
> (Multi)PhraseQuery in MFQP.getFieldQuery() can never cause trouble?
> The code in MFQP.getFieldQuery dates back to
> LUCENE-2605: Add classic QueryParser option setSplitOnWhitespace() to control 
> whether to split on whitespace prior to text analysis.  Default behavior 
> remains unchanged: split-on-whitespace=true.
> (06 Jul 2016), when it was substantially expanded.
> Best regards,
> Oliver
> {quote}






[jira] [Created] (LUCENE-7472) MultiFieldQueryParser.getFieldQuery() drops queries that are neither BooleanQuery nor TermQuery

2016-09-30 Thread Steve Rowe (JIRA)
Steve Rowe created LUCENE-7472:
--

 Summary: MultiFieldQueryParser.getFieldQuery() drops queries that 
are neither BooleanQuery nor TermQuery 
 Key: LUCENE-7472
 URL: https://issues.apache.org/jira/browse/LUCENE-7472
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Steve Rowe


From 
[http://mail-archives.apache.org/mod_mbox/lucene-java-user/201609.mbox/%3c944985a6ac27425681bd27abe9d90...@ska-wn-e132.ptvag.ptv.de%3e],
 Oliver Kaleske reports:

{quote}
Hi,

in updating Lucene from 6.1.0 to 6.2.0 I came across the following:

We have a subclass of MultiFieldQueryParser (MFQP) for creating a custom type 
of Query, which calls getFieldQuery() on its base class (MFQP).
For each of its search fields, this method has a Query created by calling 
getFieldQuery() on QueryParserBase.
Ultimately, we wind up in QueryBuilder's createFieldQuery() method, which 
depending on the number of tokens (etc.) decides what type of Query to return: 
a TermQuery, BooleanQuery, PhraseQuery, or MultiPhraseQuery.

Back in MFQP.getFieldQuery(), a variable maxTerms is determined depending on 
the type of Query returned: for a TermQuery or a BooleanQuery, its value will 
in general be nonzero, clauses are created, and a non-null Query is returned.
However, other Query subclasses result in maxTerms=0, an empty list of clauses, 
and finally null is returned.

To me, this seems like a bug, but I may well be missing something. The 
comment "// happens for stopwords" on the return null statement, however, seems 
to suggest that Query types other than TermQuery and BooleanQuery were not 
considered properly here.
I should point out that our custom MFQP subclass so far does some rather 
unsophisticated tokenization before calling getFieldQuery() on each token, so 
characters like '*' may still slip through. So perhaps with proper 
tokenization, it is guaranteed that only TermQuery and BooleanQuery can come 
out of the chain of getFieldQuery() calls, and not handling (Multi)PhraseQuery 
in MFQP.getFieldQuery() can never cause trouble?

The code in MFQP.getFieldQuery dates back to
LUCENE-2605: Add classic QueryParser option setSplitOnWhitespace() to control 
whether to split on whitespace prior to text analysis.  Default behavior 
remains unchanged: split-on-whitespace=true.
(06 Jul 2016), when it was substantially expanded.

Best regards,
Oliver
{quote}






[jira] [Commented] (LUCENE-2605) queryparser parses on whitespace

2016-09-30 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15536784#comment-15536784
 ] 

Steve Rowe commented on LUCENE-2605:


Hi [~dionoid]

bq. Reading the discussion above, it seems that splitting on whitespace is 
disabled by default now.

No, that's wrong - from the commit messages (which you can see in comments on 
this issue above):

bq. Default behavior remains unchanged: split-on-whitespace=true

In Lucene 7.0, though, the default behavior will switch to 
split-on-whitespace=false.

{quote}
How can I test if this works?
Any example configuration would be very helpful!
{quote}

Have you tried setting 
[{{setSplitOnWhitespace(false)}}|http://lucene.apache.org/core/6_2_1/queryparser/org/apache/lucene/queryparser/classic/QueryParser.html#setSplitOnWhitespace-boolean-]
 on the query parser before using it?
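
A minimal sketch of that suggestion (field name and analyzer are placeholders; 
a synonym-aware analyzer is assumed for a real multi-word synonym test):

{code}
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.Query;

public class SplitOnWhitespaceDemo {
  public static void main(String[] args) throws Exception {
    Analyzer analyzer = new StandardAnalyzer(); // swap in a synonym-aware analyzer
    QueryParser qp = new QueryParser("body", analyzer);
    qp.setSplitOnWhitespace(false); // the whole input reaches one analysis chain
    Query q = qp.parse("wi fi network");
    System.out.println(q); // multi-word input was analyzed as one unit
  }
}
{code}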


> queryparser parses on whitespace
> 
>
> Key: LUCENE-2605
> URL: https://issues.apache.org/jira/browse/LUCENE-2605
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Reporter: Robert Muir
>Assignee: Steve Rowe
> Fix For: 6.2
>
> Attachments: LUCENE-2605-dont-split-by-default.patch, 
> LUCENE-2605.patch, LUCENE-2605.patch, LUCENE-2605.patch, LUCENE-2605.patch, 
> LUCENE-2605.patch, LUCENE-2605.patch
>
>
> The queryparser parses input on whitespace, and sends each whitespace 
> separated term to its own independent token stream.
> This breaks the following at query-time, because they can't see across 
> whitespace boundaries:
> * n-gram analysis
> * shingles 
> * synonyms (especially multi-word for whitespace-separated languages)
> * languages where a 'word' can contain whitespace (e.g. vietnamese)
> Its also rather unexpected, as users think their 
> charfilters/tokenizers/tokenfilters will do the same thing at index and 
> querytime, but
> in many cases they can't. Instead, preferably the queryparser would parse 
> around only real 'operators'.






[jira] [Closed] (SOLR-9493) uniqueKey generation fails if content POSTed as "application/javabin" and uniqueKey field comes as NULL (as opposed to not coming at all).

2016-09-30 Thread Yury Kartsev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Kartsev closed SOLR-9493.
--
Resolution: Not A Problem

Closing as "Not A Problem". Automatic generation of the UUID field happens only 
if that field was not sent to SOLR. The ticket was created with the assumption 
that generation should happen for fields sent with 'null' values as well. This 
is not the case by design.
If generation is needed for null-valued fields, there is a workaround: 
[IgnoreFieldUpdateProcessorFactory|http://www.solr-start.com/javadoc/solr-lucene/org/apache/solr/update/processor/IgnoreFieldUpdateProcessorFactory.html]
 can be used (see above).

Thank you everyone for your help!
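
For reference, a sketch of that workaround chain (names are illustrative; note 
that IgnoreFieldUpdateProcessorFactory drops client-supplied ids too, so this 
only fits when ids are always server-generated):

{code}
<!-- Illustrative solrconfig.xml chain: strip the (possibly null) id first,
     then let the UUID processor generate a value for the now-absent field. -->
<updateRequestProcessorChain name="uuid-ignore-null">
  <processor class="solr.IgnoreFieldUpdateProcessorFactory">
    <str name="fieldName">id</str>
  </processor>
  <processor class="solr.UUIDUpdateProcessorFactory">
    <str name="fieldName">id</str>
  </processor>
  <processor class="solr.RunUpdateProcessorFactory" />
</updateRequestProcessorChain>
{code}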

> uniqueKey generation fails if content POSTed as "application/javabin" and 
> uniqueKey field comes as NULL (as opposed to not coming at all).
> --
>
> Key: SOLR-9493
> URL: https://issues.apache.org/jira/browse/SOLR-9493
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yury Kartsev
> Attachments: 200.png, 400.png, Screen Shot 2016-09-11 at 16.29.50 
> .png, SolrInputDoc_contents.png, SolrInputDoc_headers.png
>
>
> I have faced a weird issue when the same application code (using SolrJ) fails 
> indexing a document without a unique key (should be auto-generated by SOLR) 
> in SolrCloud and succeeds indexing it in standalone SOLR instance (or even in 
> cloud mode, but from web interface of one of the replicas). Difference is 
> obviously only between clients (CloudSolrClient vs HttpSolrClient) and SOLR 
> URLs (ZooKeeper hostname+port vs standalone SOLR instance hostname and port). 
> Failure is seen as "org.apache.solr.client.solrj.SolrServerException: 
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: 
> Document is missing mandatory uniqueKey field: id".
> I am using SOLR 5.1. In cloud mode I have 1 shard and 3 replicas.
> After a lot of debugging and investigation (see below as well as my 
> [StackOverflow 
> post|http://stackoverflow.com/questions/39401792/uniquekey-generation-does-not-work-in-solrcloud-but-works-if-standalone])
>  I came to a conclusion that the difference in failing and succeeding calls 
> is simply content type of the POSTing requests. Local proxy clearly shows 
> that the request fails if content is sent as "application/javabin" (see 
> attached screenshot with sensitive data removed) and succeeds if content sent 
> as "application/xml; charset=UTF-8"  (see attached screenshot with sensitive 
> data removed).
> Would you be able to please assist?
> Thank you very much in advance!
> 
> Copying whole description and investigation here as well:
> 
> [Documentation|https://cwiki.apache.org/confluence/display/solr/Other+Schema+Elements]
>  states:{quote}Schema defaults and copyFields cannot be used to populate the 
> uniqueKey field. You can use UUIDUpdateProcessorFactory to have uniqueKey 
> values generated automatically.{quote}
> Therefore I have added my uniqueKey field to the schema:{code}
> <fieldType name="uuid" class="solr.UUIDField" indexed="true" />
> ...
> <field name="id" type="uuid" indexed="true" stored="true" />
> ...
> <uniqueKey>id</uniqueKey>
> {code}Then I have added updateRequestProcessorChain 
> to my solrconfig:{code}
> <updateRequestProcessorChain name="uuid">
>   <processor class="solr.UUIDUpdateProcessorFactory">
>     <str name="fieldName">id</str>
>   </processor>
>   <processor class="solr.RunUpdateProcessorFactory" />
> </updateRequestProcessorChain>
> {code}And made it the default for the 
> UpdateRequestHandler:{code}
> <lst name="defaults">
>   <str name="update.chain">uuid</str>
> </lst>
> {code}
> Adding new documents with a null/absent id works fine both from the 
> web-interface of one of the replicas and when using SOLR in standalone mode 
> (non-cloud) from my application. But when I'm using SolrCloud and add a 
> document from my application (using CloudSolrClient from SolrJ) it fails with 
> "org.apache.solr.client.solrj.SolrServerException: 
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: 
> Document is missing mandatory uniqueKey field: id"
> All other operations like ping or search for documents work fine in either 
> mode (standalone or cloud).
> INVESTIGATION (i.e. more details):
> In standalone mode obviously update request is:{code}POST 
> standalone_host:port/solr/collection_name/update?wt=json{code}
> In SOLR cloud mode, when adding document from one replica's web interface, 
> update request is (found through inspecting the call made by web interface): 
> {code}POST 
> replica_host:port/solr/collection_name_shard1_replica_1/update?wt=json{code}
> In both these cases payload is something like:{code}{
> "add": {
> "doc": {
>  .
> },
> "boost": 1.0,
> "overwrite": true,
> "commitWithin": 1000
> }
> }{code}
> In case when CloudSolrClient is used, the following happens (found 

[jira] [Commented] (LUCENE-2605) queryparser parses on whitespace

2016-09-30 Thread Dion Olsthoorn (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15536668#comment-15536668
 ] 

Dion Olsthoorn commented on LUCENE-2605:


Hi, I just installed Solr 6.2.1 to see if this patch fixes the query-time 
analyzer filter for multi-word synonyms. 
But unfortunately I can see no difference: it still seems that the queryparser 
sends each whitespace-separated term through its own analysis. 
Reading the discussion above, it seems that splitting on whitespace is disabled 
by default now.

How can I test if this works?
Any example configuration would be very helpful!

> queryparser parses on whitespace
> 
>
> Key: LUCENE-2605
> URL: https://issues.apache.org/jira/browse/LUCENE-2605
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Reporter: Robert Muir
>Assignee: Steve Rowe
> Fix For: 6.2
>
> Attachments: LUCENE-2605-dont-split-by-default.patch, 
> LUCENE-2605.patch, LUCENE-2605.patch, LUCENE-2605.patch, LUCENE-2605.patch, 
> LUCENE-2605.patch, LUCENE-2605.patch
>
>
> The queryparser parses input on whitespace, and sends each whitespace 
> separated term to its own independent token stream.
> This breaks the following at query-time, because they can't see across 
> whitespace boundaries:
> * n-gram analysis
> * shingles 
> * synonyms (especially multi-word for whitespace-separated languages)
> * languages where a 'word' can contain whitespace (e.g. vietnamese)
> Its also rather unexpected, as users think their 
> charfilters/tokenizers/tokenfilters will do the same thing at index and 
> querytime, but
> in many cases they can't. Instead, preferably the queryparser would parse 
> around only real 'operators'.






[jira] [Commented] (SOLR-6677) Reduce logging during startup and shutdown

2016-09-30 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15536622#comment-15536622
 ] 

Hoss Man commented on SOLR-6677:


[~romseygeek]: please make sure to add a note to the "Upgrading" section of 
CHANGES for 6.3 to call attention to the fact that Solr's logging verbosity at 
the INFO level has been greatly reduced, and people may need to update the log 
configs to use the DEBUG level to get the same logging messages as before.

> Reduce logging during startup and shutdown
> --
>
> Key: SOLR-6677
> URL: https://issues.apache.org/jira/browse/SOLR-6677
> Project: Solr
>  Issue Type: Bug
>  Components: logging
>Reporter: Noble Paul
>Assignee: Jan Høydahl
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-6677-part-2.patch, SOLR-6677-part-4.patch, 
> SOLR-6677-part3.patch, SOLR-6677.patch, SOLR-6677.patch
>
>
> most of what is printed is neither helpful nor useful. It's just noise






[jira] [Commented] (SOLR-9583) When the same <uniqueKey> exists across multiple collections that are searched with an alias, the document returned in the results list is indeterminate

2016-09-30 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15536578#comment-15536578
 ] 

David Smiley commented on SOLR-9583:


Failing with a helpful message would be great.  "Distributed-search requires no 
duplicated uniqueKeys but we found some: "

> When the same <uniqueKey> exists across multiple collections that are 
> searched with an alias, the document returned in the results list is 
> indeterminate
> 
>
> Key: SOLR-9583
> URL: https://issues.apache.org/jira/browse/SOLR-9583
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>
> Not quite sure whether to call this a bug or improvement...
> Currently if I have two collections C1 and C2 and an alias that points to 
> both _and_ I have a document in both collections with the _same_ <uniqueKey>, 
> the returned list sometimes has the doc from C1 and sometimes from C2.
> If I add shards.info=true I see the document found in each collection, but 
> only one in the document list. Which one changes if I re-submit the identical 
> query.
> This seems incorrect, perhaps a side effect of piggy-backing the collection 
> aliasing on searching multiple shards? (Thanks Shalin for that bit of 
> background).
> I can see both use-cases: 
> 1> aliasing multiple collections validly assumes that <uniqueKey>s should be 
> unique across them all and only one doc should be returned. Even in this case 
> which doc should be returned should be deterministic.
> 2> these are arbitrary collections without any a-priori relationship and 
> identical <uniqueKey>s do NOT identify the "same" document so both should be 
> returned.
> So I propose we do two things:
> a> provide a param for the CREATEALIAS command that controls whether docs 
> with the same <uniqueKey> from different collections should both be returned. 
> If they both should, there's still the question of in what order.
> b> provide a deterministic way dups from different collections are resolved. 
> What that algorithm is I'm not quite sure. The order the collections were 
> specified in the CREATEALIAS command? Some field in the documents? Other??? 
> What happens if this option is not specified on the CREATEALIAS command?
> Implicit in the above is my assumption that it's perfectly valid to have 
> different aliases in the same cluster behave differently if specified.






[jira] [Commented] (SOLR-9493) uniqueKey generation fails if content POSTed as "application/javabin" and uniqueKey field comes as NULL (as opposed to not coming at all).

2016-09-30 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15536574#comment-15536574
 ] 

Alexandre Rafalovitch commented on SOLR-9493:
-

Can this case be closed now? It is not a bug and there is no next action on it.

> uniqueKey generation fails if content POSTed as "application/javabin" and 
> uniqueKey field comes as NULL (as opposed to not coming at all).
> --
>
> Key: SOLR-9493
> URL: https://issues.apache.org/jira/browse/SOLR-9493
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yury Kartsev
> Attachments: 200.png, 400.png, Screen Shot 2016-09-11 at 16.29.50 
> .png, SolrInputDoc_contents.png, SolrInputDoc_headers.png
>
>
> I have faced a weird issue when the same application code (using SolrJ) fails 
> indexing a document without a unique key (should be auto-generated by SOLR) 
> in SolrCloud and succeeds indexing it in standalone SOLR instance (or even in 
> cloud mode, but from web interface of one of the replicas). Difference is 
> obviously only between clients (CloudSolrClient vs HttpSolrClient) and SOLR 
> URLs (ZooKeeper hostname+port vs standalone SOLR instance hostname and port). 
> Failure is seen as "org.apache.solr.client.solrj.SolrServerException: 
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: 
> Document is missing mandatory uniqueKey field: id".
> I am using SOLR 5.1. In cloud mode I have 1 shard and 3 replicas.
> After a lot of debugging and investigation (see below as well as my 
> [StackOverflow 
> post|http://stackoverflow.com/questions/39401792/uniquekey-generation-does-not-work-in-solrcloud-but-works-if-standalone])
>  I came to a conclusion that the difference in failing and succeeding calls 
> is simply content type of the POSTing requests. Local proxy clearly shows 
> that the request fails if content is sent as "application/javabin" (see 
> attached screenshot with sensitive data removed) and succeeds if content sent 
> as "application/xml; charset=UTF-8"  (see attached screenshot with sensitive 
> data removed).
> Would you be able to please assist?
> Thank you very much in advance!
> 
> Copying whole description and investigation here as well:
> 
> [Documentation|https://cwiki.apache.org/confluence/display/solr/Other+Schema+Elements]
>  states:{quote}Schema defaults and copyFields cannot be used to populate the 
> uniqueKey field. You can use UUIDUpdateProcessorFactory to have uniqueKey 
> values generated automatically.{quote}
> Therefore I have added my uniqueKey field to the schema:{code}
> <fieldType name="uuid" class="solr.UUIDField" indexed="true" />
> ...
> <field name="id" type="uuid" indexed="true" stored="true" />
> ...
> <uniqueKey>id</uniqueKey>
> {code}Then I have added updateRequestProcessorChain 
> to my solrconfig:{code}
> <updateRequestProcessorChain name="uuid">
>   <processor class="solr.UUIDUpdateProcessorFactory">
>     <str name="fieldName">id</str>
>   </processor>
>   <processor class="solr.RunUpdateProcessorFactory" />
> </updateRequestProcessorChain>
> {code}And made it the default for the 
> UpdateRequestHandler:{code}
> <lst name="defaults">
>   <str name="update.chain">uuid</str>
> </lst>
> {code}
> Adding new documents with a null/absent id works fine both from the 
> web-interface of one of the replicas and when using SOLR in standalone mode 
> (non-cloud) from my application. But when I'm using SolrCloud and add a 
> document from my application (using CloudSolrClient from SolrJ) it fails with 
> "org.apache.solr.client.solrj.SolrServerException: 
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: 
> Document is missing mandatory uniqueKey field: id"
> All other operations like ping or search for documents work fine in either 
> mode (standalone or cloud).
> INVESTIGATION (i.e. more details):
> In standalone mode obviously update request is:{code}POST 
> standalone_host:port/solr/collection_name/update?wt=json{code}
> In SOLR cloud mode, when adding document from one replica's web interface, 
> update request is (found through inspecting the call made by web interface): 
> {code}POST 
> replica_host:port/solr/collection_name_shard1_replica_1/update?wt=json{code}
> In both these cases payload is something like:{code}{
> "add": {
> "doc": {
>  .
> },
> "boost": 1.0,
> "overwrite": true,
> "commitWithin": 1000
> }
> }{code}
> In case when CloudSolrClient is used, the following happens (found through 
> debugging):
> Using ZK and some logic, URL list of replicas is constructed that looks like 
> this:{code}[http://replica_1_host:port/solr/collection_name/,
>  http://replica_2_host:port/solr/collection_name/,
>  http://replica_3_host:port/solr/collection_name/]{code}
> This code is called:{code}LBHttpSolrClient.Req req = new 
> LBHttpSolrClient.Req(request, theUrlList);
> LBHttpSolrClient.Rsp rsp = lbClient.request(req);
> return rsp.getResponse();{code}
> Where 

Re: [DISCUSS] JIRA maintenance and response times

2016-09-30 Thread Alexandre Rafalovitch
I would love some new guidelines.

One issue I don't see mentioned is a consistent way to tag the issues.
We have tags, components, other things? And they are a bit all over
the place.

E.g. I don't know of an easy way to find all Admin UI issues. And I am
not sure what to tag them with, even for the issues I did find through other
means.

Regards,
   Alex.

Newsletter and resources for Solr beginners and intermediates:
http://www.solr-start.com/


On 29 September 2016 at 16:53, Jan Høydahl  wrote:
> Hi,
>
> As a project I feel it is unnecessary to have 2933 unresolved issues in JIRA.
> These issues have an *average* age of 1019 days (2 years 9 months)!
>
> Several reasons why I think this is bad
> * Looks bad on public stats
> * We lose oversight, important issues drown
> * Reporters do not get closure
> * New developers get discouraged
> * We cannot use JIRA stats/reports as goals for our community
>
> Furthermore the time to first response is not uncommonly 90 days or more!
> I think we should strive to have no newly opened issues older than 7 days
> without at least having one response, question, +1/-1 etc. Perhaps with
> the exception of issues created by the committers or assigned to someone?
>
>
> I know some community members earlier have disagreed and feel it is not
> important to have JIRA reflect importance/priorities. I mean, we do not use
> features like votes, IN PROGRESS status etc in any meaningful way today.
>
> As we’ve onboarded many new committers in the last few years, I’d like to hear
> if you’re all happy with our (sloppy) Jira maintenance or if it is time to
> draw some new guidelines?
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
>




[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 877 - Failure!

2016-09-30 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/877/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([3D5BA0E1E4823225:CA284EB9226A9DC3]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication(TestReplicationHandler.java:1329)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  

[jira] [Commented] (SOLR-9583) When the same <uniqueKey> exists across multiple collections that are searched with an alias, the document returned in the results list is indeterminate

2016-09-30 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15536477#comment-15536477
 ] 

Erick Erickson commented on SOLR-9583:
--

Fair points. I'll have to code-dive (and that's NOT happening today for various 
reasons) to say something competent, but I'd guess that we _already_ do 
something with facets and doc counts and the like. If you're saying that 
whatever we do is probably wrong, then it seems like we should fail in this 
case rather than let the users blissfully drive on. "Fail or do it right" 
maybe? Or return some kind of warning? Or...



> When the same <uniqueKey> exists across multiple collections that are 
> searched with an alias, the document returned in the results list is 
> indeterminate
> 
>
> Key: SOLR-9583
> URL: https://issues.apache.org/jira/browse/SOLR-9583
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>
> Not quite sure whether to call this a bug or improvement...
> Currently if I have two collections C1 and C2 and an alias that points to 
> both _and_ I have a document in both collections with the _same_ <uniqueKey>, 
> the returned list sometimes has the doc from C1 and sometimes from C2.
> If I add shards.info=true I see the document found in each collection, but 
> only one in the document list. Which one changes if I re-submit the identical 
> query.
> This seems incorrect, perhaps a side effect of piggy-backing the collection 
> aliasing on searching multiple shards? (Thanks Shalin for that bit of 
> background).
> I can see both use-cases: 
> 1> aliasing multiple collections validly assumes that <uniqueKey>s should be 
> unique across them all and only one doc should be returned. Even in this case 
> which doc should be returned should be deterministic.
> 2> these are arbitrary collections without any a-priori relationship and 
> identical <uniqueKey>s do NOT identify the "same" document so both should be 
> returned.
> So I propose we do two things:
> a> provide a param for the CREATEALIAS command that controls whether docs 
> with the same <uniqueKey> from different collections should both be returned. 
> If they both should, there's still the question of in what order.
> b> provide a deterministic way dups from different collections are resolved. 
> What that algorithm is I'm not quite sure. The order the collections were 
> specified in the CREATEALIAS command? Some field in the documents? Other??? 
> What happens if this option is not specified on the CREATEALIAS command?
> Implicit in the above is my assumption that it's perfectly valid to have 
> different aliases in the same cluster behave differently if specified.






[jira] [Resolved] (SOLR-9574) factor out AbstractReRankQuery class

2016-09-30 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke resolved SOLR-9574.
---
   Resolution: Fixed
Fix Version/s: master (7.0)
   6.x

> factor out AbstractReRankQuery class
> 
>
> Key: SOLR-9574
> URL: https://issues.apache.org/jira/browse/SOLR-9574
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: 6.x, master (7.0)
>
> Attachments: SOLR-9574.patch
>
>
> Motivation is to avoid unnecessary code duplication between 
> ReRankQParserPlugin and the SOLR-8542 plugin.






[jira] [Commented] (SOLR-9583) When the same <uniqueKey> exists across multiple collections that are searched with an alias, the document returned in the results list is indeterminate

2016-09-30 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15536432#comment-15536432
 ] 

David Smiley commented on SOLR-9583:


Sorry Erick... I simply mean that, AFAIK, the distributed-search feature has a 
fundamental assumption that there are no keys duplicated across cores (shards). 
 AFAIK that fundamental assumption hasn't changed since its inception (Solr 
1.3?), despite SolrCloud & alias'ing.  If you violate that assumption... who 
knows what will happen -- "undefined".  I think attempting to support duplicate 
keys raises bigger questions than simply resolving the particular effects you 
report here.  For example faceting... I can't imagine the system efficiently 
deduplicating before counting.  Or even quite simply returning the matching doc 
count -- same thing.

> When the same <uniqueKey> exists across multiple collections that are 
> searched with an alias, the document returned in the results list is 
> indeterminate
> 
>
> Key: SOLR-9583
> URL: https://issues.apache.org/jira/browse/SOLR-9583
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>
> Not quite sure whether to call this a bug or improvement...
> Currently if I have two collections C1 and C2 and an alias that points to 
> both _and_ I have a document in both collections with the _same_ <uniqueKey>, 
> the returned list sometimes has the doc from C1 and sometimes from C2.
> If I add shards.info=true I see the document found in each collection, but 
> only one in the document list. Which one changes if I re-submit the identical 
> query.
> This seems incorrect, perhaps a side effect of piggy-backing the collection 
> aliasing on searching multiple shards? (Thanks Shalin for that bit of 
> background).
> I can see both use-cases: 
> 1> aliasing multiple collections validly assumes that <uniqueKey>s should be 
> unique across them all and only one doc should be returned. Even in this case 
> which doc should be returned should be deterministic.
> 2> these are arbitrary collections without any a-priori relationship and 
> identical <uniqueKey>s do NOT identify the "same" document so both should be 
> returned.
> So I propose we do two things:
> a> provide a param for the CREATEALIAS command that controls whether docs 
> with the same <uniqueKey> from different collections should both be returned. 
> If they both should, there's still the question of in what order.
> b> provide a deterministic way dups from different collections are resolved. 
> What that algorithm is I'm not quite sure. The order the collections were 
> specified in the CREATEALIAS command? Some field in the documents? Other??? 
> What happens if this option is not specified on the CREATEALIAS command?
> Implicit in the above is my assumption that it's perfectly valid to have 
> different aliases in the same cluster behave differently if specified.






[jira] [Commented] (SOLR-9583) When the same <uniqueKey> exists across multiple collections that are searched with an alias, the document returned in the results list is indeterminate

2016-09-30 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15536384#comment-15536384
 ] 

Erick Erickson commented on SOLR-9583:
--

[~dsmiley]

I disagree and think there's a bug here. I can be persuaded that there are two 
issues though, maybe we can split this JIRA.

Bug:
In the situation I described above, we return one doc or the other, and 
currently it's indeterminate which one comes back. In fact, the one that comes 
back will change for the _exact_ same query without the underlying collections 
changing at all just by resubmitting the query (I turned the queryResultCache 
off and can reproduce at will). This is even true in a one-shard, leader-only 
pair of collections. You'll have to argue really hard to persuade me that this 
is correct behavior. It's certainly not satisfactory to say to a user "we have 
no idea which one will be returned and there's nothing you can do about it, 
don't even try".

bq: ...it's asking for trouble. Solr isn't supposed to be used this way.

I don't understand this. We allow collection aliasing. There are no rules 
whatsoever requiring multiple collections have disjoint <uniqueKey>s. 
Arbitrarily returning only one is hard to justify.

Wish:
We add the ability to return all docs with the same ID when multiple 
collections have docs with the same ID under control of some flag.


[~noble.paul]

Not quite sure I understand the question. We "dedupe" currently, but it's 
arbitrary. I doubt it was designed, rather "just happens" as a side-effect of 
merging the lists. My suspicion is that when we merge the results, the final 
result changes based on the order in which the collection returns are 
processed. But before diving into the code I wanted to get some idea of what we 
think _should_ happen.

We should at least dedupe in a predictable fashion. What the algorithm should 
be is up for discussion. Perhaps "doc from last collection listed in the alias 
wins" (yuck, frankly, but at least I can explain it to someone). Or maybe "break 
ties by comparing the collection name" (also yuck). Or we use the sort 
criteria. Or... I don't want to get complicated here, just predictable.

If we decide to return multiple docs with the same ID from separate collections 
then there's the whole question of how to sort them, but I'll leave that for 
another day. Maybe we just use whatever we use to dedupe as the sort in this 
case.
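
To make "predictable" concrete, a sketch (hypothetical types, not Solr's merge 
code) of breaking ties first by the sort criterion and then by collection name:

{code}
import java.util.Comparator;

class Hit {
  final String id; final float score; final String collection;
  Hit(String id, float score, String collection) {
    this.id = id; this.score = score; this.collection = collection;
  }
}

class DedupeMerge {
  // Descending score first (the normal sort), then collection name as the
  // deterministic tie-break, so the winner no longer depends on which
  // collection's response happened to be merged first.
  static final Comparator<Hit> ORDER =
      Comparator.comparingDouble((Hit h) -> -h.score)
                .thenComparing((Hit h) -> h.collection);

  static Hit pickWinner(Hit a, Hit b) {
    return ORDER.compare(a, b) <= 0 ? a : b;
  }
}
{code}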

> When the same <uniqueKey> exists across multiple collections that are 
> searched with an alias, the document returned in the results list is 
> indeterminate
> 
>
> Key: SOLR-9583
> URL: https://issues.apache.org/jira/browse/SOLR-9583
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>
> Not quite sure whether to call this a bug or improvement...
> Currently if I have two collections C1 and C2 and an alias that points to 
> both _and_ I have a document in both collections with the _same_ <uniqueKey>, 
> the returned list sometimes has the doc from C1 and sometimes from C2.
> If I add shards.info=true I see the document found in each collection, but 
> only one in the document list. Which one changes if I re-submit the identical 
> query.
> This seems incorrect, perhaps a side effect of piggy-backing the collection 
> aliasing on searching multiple shards? (Thanks Shalin for that bit of 
> background).
> I can see both use-cases: 
> 1> aliasing multiple collections validly assumes that <uniqueKey>s should be 
> unique across them all and only one doc should be returned. Even in this case 
> which doc should be returned should be deterministic.
> 2> these are arbitrary collections without any a-priori relationship and 
> identical <uniqueKey>s do NOT identify the "same" document so both should be 
> returned.
> So I propose we do two things:
> a> provide a param for the CREATEALIAS command that controls whether docs 
> with the same <uniqueKey> from different collections should both be returned. 
> If they both should, there's still the question of in what order.
> b> provide a deterministic way dups from different collections are resolved. 
> What that algorithm is I'm not quite sure. The order the collections were 
> specified in the CREATEALIAS command? Some field in the documents? Other??? 
> What happens if this option is not specified on the CREATEALIAS command?
> Implicit in the above is my assumption that it's perfectly valid to have 
> different aliases in the same cluster behave differently if specified.




[jira] [Commented] (SOLR-9584) The absolute URL path in server/solr-webapp/webapp/js/angular/services.js would make context customization not work

2016-09-30 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15536312#comment-15536312
 ] 

Cassandra Targett commented on SOLR-9584:
-

This is a duplicate of SOLR-9000, isn't it? 

> The absolute URL path in server/solr-webapp/webapp/js/angular/services.js 
> would make context customization not work
> ---
>
> Key: SOLR-9584
> URL: https://issues.apache.org/jira/browse/SOLR-9584
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 6.2
>Reporter: Yun Jie Zhou
>Priority: Minor
>  Labels: patch
>
> The absolute path starting from /solr in 
> server/solr-webapp/webapp/js/angular/services.js would make the context 
> customization not work.
> For example, we should use $resource('admin/info/system', {"wt":"json", 
> "_":Date.now()}); instead of $resource('/solr/admin/info/system', 
> {"wt":"json", "_":Date.now()});






[jira] [Commented] (SOLR-6286) TestReplicationHandler.doTestReplicateAfterCoreReload failure on jenkins

2016-09-30 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15536307#comment-15536307
 ] 

Steve Rowe commented on SOLR-6286:
--

This nightly 6.x failure from my Jenkins 
[http://jenkins.sarowe.net/job/Lucene-Solr-Nightly-6.x/168] reproduces for me, 
but only if I first remove {{-Dtests.method=doTestReplicateAfterCoreReload}} 
from the repro line - takes over 12 minutes on my server:

{noformat}
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestReplicationHandler -Dtests.method=doTestReplicateAfterCoreReload 
-Dtests.seed=2E632BF89D2E9EC3 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
-Dtests.locale=el-CY -Dtests.timezone=Asia/Ho_Chi_Minh -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] FAILURE  441s J2 | 
TestReplicationHandler.doTestReplicateAfterCoreReload <<<
   [junit4]> Throwable #1: java.lang.AssertionError: 
expected:<[{indexVersion=1475239345178,generation=2,filelist=[_9by.fdt, 
_9by.fdx, _9by.fnm, _9by.nvd, _9by.nvm, _9by.si, _9by_FST50_0.doc, 
_9by_FST50_0.tfp, _ihg.cfe, _ihg.cfs, _ihg.si, _ms4.cfe, _ms4.cfs, _ms4.si, 
_ms5.cfe, _ms5.cfs, _ms5.si, _ms6.cfe, _ms6.cfs, _ms6.si, _ms7.cfe, _ms7.cfs, 
_ms7.si, _ms8.cfe, _ms8.cfs, _ms8.si, _ms9.cfe, _ms9.cfs, _ms9.si, _msa.cfe, 
_msa.cfs, _msa.si, _msb.cfe, _msb.cfs, _msb.si, _msc.cfe, _msc.cfs, _msc.si, 
_msd.cfe, _msd.cfs, _msd.si, _mse.cfe, _mse.cfs, _mse.si, _msf.cfe, _msf.cfs, 
_msf.si, _msg.cfe, _msg.cfs, _msg.si, _msh.cfe, _msh.cfs, _msh.si, _msi.cfe, 
_msi.cfs, _msi.si, _msj.cfe, _msj.cfs, _msj.si, _msk.cfe, _msk.cfs, _msk.si, 
_msl.cfe, _msl.cfs, _msl.si, _msm.cfe, _msm.cfs, _msm.si, _msn.cfe, _msn.cfs, 
_msn.si, _mso.cfe, _mso.cfs, _mso.si, _msp.cfe, _msp.cfs, _msp.si, _msq.cfe, 
_msq.cfs, _msq.si, _msr.cfe, _msr.cfs, _msr.si, _mss.cfe, _mss.cfs, _mss.si, 
_mst.cfe, _mst.cfs, _mst.si, _msu.cfe, _msu.cfs, _msu.si, _msv.cfe, _msv.cfs, 
_msv.si, _msw.cfe, _msw.cfs, _msw.si, _mtr.cfe, _mtr.cfs, _mtr.si, _mts.cfe, 
_mts.cfs, _mts.si, _mtt.cfe, _mtt.cfs, _mtt.si, _mtu.cfe, _mtu.cfs, _mtu.si, 
_mtv.cfe, _mtv.cfs, _mtv.si, _mtw.cfe, _mtw.cfs, _mtw.si, _mtx.cfe, _mtx.cfs, 
_mtx.si, _mty.cfe, _mty.cfs, _mty.si, _mtz.cfe, _mtz.cfs, _mtz.si, 
segments_2]}]> but 
was:<[{indexVersion=1475239345178,generation=2,filelist=[_9by.fdt, _9by.fdx, 
_9by.fnm, _9by.nvd, _9by.nvm, _9by.si, _9by_FST50_0.doc, _9by_FST50_0.tfp, 
_ihg.cfe, _ihg.cfs, _ihg.si, _ms4.cfe, _ms4.cfs, _ms4.si, _ms5.cfe, _ms5.cfs, 
_ms5.si, _ms6.cfe, _ms6.cfs, _ms6.si, _ms7.cfe, _ms7.cfs, _ms7.si, _ms8.cfe, 
_ms8.cfs, _ms8.si, _ms9.cfe, _ms9.cfs, _ms9.si, _msa.cfe, _msa.cfs, _msa.si, 
_msb.cfe, _msb.cfs, _msb.si, _msc.cfe, _msc.cfs, _msc.si, _msd.cfe, _msd.cfs, 
_msd.si, _mse.cfe, _mse.cfs, _mse.si, _msf.cfe, _msf.cfs, _msf.si, _msg.cfe, 
_msg.cfs, _msg.si, _msh.cfe, _msh.cfs, _msh.si, _msi.cfe, _msi.cfs, _msi.si, 
_msj.cfe, _msj.cfs, _msj.si, _msk.cfe, _msk.cfs, _msk.si, _msl.cfe, _msl.cfs, 
_msl.si, _msm.cfe, _msm.cfs, _msm.si, _msn.cfe, _msn.cfs, _msn.si, _mso.cfe, 
_mso.cfs, _mso.si, _msp.cfe, _msp.cfs, _msp.si, _msq.cfe, _msq.cfs, _msq.si, 
_msr.cfe, _msr.cfs, _msr.si, _mss.cfe, _mss.cfs, _mss.si, _mst.cfe, _mst.cfs, 
_mst.si, _msu.cfe, _msu.cfs, _msu.si, _msv.cfe, _msv.cfs, _msv.si, _msw.cfe, 
_msw.cfs, _msw.si, _mtr.cfe, _mtr.cfs, _mtr.si, _mts.cfe, _mts.cfs, _mts.si, 
_mtt.cfe, _mtt.cfs, _mtt.si, _mtu.cfe, _mtu.cfs, _mtu.si, _mtv.cfe, _mtv.cfs, 
_mtv.si, _mtw.cfe, _mtw.cfs, _mtw.si, _mtx.cfe, _mtx.cfs, _mtx.si, _mty.cfe, 
_mty.cfs, _mty.si, _mtz.cfe, _mtz.cfs, _mtz.si, segments_2]}, 
{indexVersion=1475239345178,generation=3,filelist=[_9by.fdt, _9by.fdx, 
_9by.fnm, _9by.nvd, _9by.nvm, _9by.si, _9by_FST50_0.doc, _9by_FST50_0.tfp, 
_ihg.cfe, _ihg.cfs, _ihg.si, _msy.cfe, _msy.cfs, _msy.si, _mtr.cfe, _mtr.cfs, 
_mtr.si, _mts.cfe, _mts.cfs, _mts.si, _mtt.cfe, _mtt.cfs, _mtt.si, _mtu.cfe, 
_mtu.cfs, _mtu.si, _mtv.cfe, _mtv.cfs, _mtv.si, _mtw.cfe, _mtw.cfs, _mtw.si, 
_mtx.cfe, _mtx.cfs, _mtx.si, _mty.cfe, _mty.cfs, _mty.si, _mtz.cfe, _mtz.cfs, 
_mtz.si, segments_3]}]>
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([2E632BF89D2E9EC3:BB430C8ED6690C0]:0)
   [junit4]>at 
org.apache.solr.handler.TestReplicationHandler.doTestReplicateAfterCoreReload(TestReplicationHandler.java:1229)
{noformat}

> TestReplicationHandler.doTestReplicateAfterCoreReload failure on jenkins
> 
>
> Key: SOLR-6286
> URL: https://issues.apache.org/jira/browse/SOLR-6286
> Project: Solr
>  Issue Type: Bug
>  Components: Tests
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 4.10, 6.0
>
>
> There have been a few failures on jenkins.
> {code}
> 3 tests failed.
> REGRESSION:  
> 

[jira] [Closed] (SOLR-3227) Solr Cloud should continue working when a logical shard goes down

2016-09-30 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett closed SOLR-3227.
---
Resolution: Won't Fix

> Solr Cloud should continue working when a logical shard goes down
> -
>
> Key: SOLR-3227
> URL: https://issues.apache.org/jira/browse/SOLR-3227
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Affects Versions: 4.0-ALPHA
>Reporter: Ranjan Bagchi
>
> I can start up a SolrCloud instance (one instance w/ zookeeper) and start a 
> second instance defining a shard name in solr.xml; the second shard shows up 
> in zookeeper and both indexes are searchable.
> However, if I bring the second server down -- the first one stops working 
> until I restart server #2.
> The desired behavior is that SolrCloud deregisters server #2 and the cloud 
> remains searchable with only server #1's index.






[jira] [Closed] (SOLR-3071) When you change the config for a collection, all nodes in the collection should reload their SolrCore.

2016-09-30 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett closed SOLR-3071.
---
   Resolution: Duplicate
Fix Version/s: (was: 6.0)
   (was: 4.9)

Not exactly a duplicate of SOLR-5200, but that's the closest resolution to the 
idea that this is superseded by that issue.

> When you change the config for a collection, all nodes in the collection 
> should reload their SolrCore.
> --
>
> Key: SOLR-3071
> URL: https://issues.apache.org/jira/browse/SOLR-3071
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Mark Miller
>Priority: Minor
>
> This will make it much easier to make configuration updates.






[jira] [Commented] (SOLR-5043) hostname lookup in SystemInfoHandler should be refactored so it's possible to not block core (re)load for long periouds on misconfigured systems

2016-09-30 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-5043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15536276#comment-15536276
 ] 

Robert Krüger commented on SOLR-5043:
-

Works, thanks a lot! 

The only question I have is why you don't set the host name to the inet address 
(or maybe even the result of getHostName?) in the case where the DNS lookup is 
suppressed. For admins looking into the logs this is still better than having no 
host name in there, and it should not block at all. The issue was with 
getCanonicalHostName performing the reverse DNS lookup, which caused the system 
to hang. Your call. Since I don't really care about the host name in the logs, 
it fixes my problem as is, but I thought I'd at least point out the possibility.

> hostname lookup in SystemInfoHandler should be refactored so it's possible to 
> not block core (re)load for long periouds on misconfigured systems
> 
>
> Key: SOLR-5043
> URL: https://issues.apache.org/jira/browse/SOLR-5043
> Project: Solr
>  Issue Type: Improvement
>Reporter: Hoss Man
> Attachments: SOLR-5043-lazy.patch, SOLR-5043.patch, SOLR-5043.patch
>
>
> SystemInfoHandler currently looks up the hostname of the machine on its init, 
> and caches it for its lifecycle -- there is a comment to the effect that the 
> reason for this is that on some machines (notably ones with wacky DNS 
> settings) looking up the hostname can take a very long time in some JVMs...
> {noformat}
>   // on some platforms, resolving canonical hostname can cause the thread
>   // to block for several seconds if nameservices aren't available
>   // so resolve this once per handler instance 
>   //(ie: not static, so core reload will refresh)
> {noformat}
> But as we move forward with a lot more multi-core, solr-cloud, dynamically 
> updated instances, even paying this cost per core-reload is expensive.
> We should refactor this so that SystemInfoHandler instances init 
> immediately, with some kind of lazy loading of the hostname info in a 
> background thread (especially since the only real point of having that info 
> here is for UI use, so you can keep track of what machine you are looking at).
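
A minimal sketch of the lazy-init idea described above (not the attached patch): 
start the lookup on a background thread at init and never block readers.

{code}
import java.net.InetAddress;
import java.util.concurrent.CompletableFuture;

class LazyHostname {
  // Resolution starts at construction but runs off-thread, so init returns
  // immediately even when reverse DNS is slow or broken.
  private final CompletableFuture<String> hostname =
      CompletableFuture.supplyAsync(() -> {
        try {
          return InetAddress.getLocalHost().getCanonicalHostName();
        } catch (Exception e) {
          return "unknown";
        }
      });

  // Non-blocking read for UI/stats callers; placeholder until resolved.
  String get() {
    return hostname.getNow("(resolving)");
  }
}
{code}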






[jira] [Commented] (SOLR-9584) The absolute URL path in server/solr-webapp/webapp/js/angular/services.js would make context customization not work

2016-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15536261#comment-15536261
 ] 

ASF GitHub Bot commented on SOLR-9584:
--

GitHub user zyjibmcn opened a pull request:

https://github.com/apache/lucene-solr/pull/86

SOLR-9584 - use relative URL path instead of absolute path starting from 
/solr for angularjs services



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zyjibmcn/lucene-solr master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/86.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #86


commit 19e6f0fed60a71f83d919b8c22ad6ed8ca72958a
Author: Yun Jie Zhou 
Date:   2016-09-30T15:18:48Z

use relative URL path instead of absolute path starting from /solr




> The absolute URL path in server/solr-webapp/webapp/js/angular/services.js 
> would make context customization not work
> ---
>
> Key: SOLR-9584
> URL: https://issues.apache.org/jira/browse/SOLR-9584
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 6.2
>Reporter: Yun Jie Zhou
>Priority: Minor
>  Labels: patch
>
> The absolute path starting with /solr in 
> server/solr-webapp/webapp/js/angular/services.js breaks context-path 
> customization.
> For example, we should use $resource('admin/info/system', {"wt":"json", 
> "_":Date.now()}); instead of $resource('/solr/admin/info/system', 
> {"wt":"json", "_":Date.now()});






[GitHub] lucene-solr pull request #86: SOLR-9584 - use relative URL path instead of a...

2016-09-30 Thread zyjibmcn
GitHub user zyjibmcn opened a pull request:

https://github.com/apache/lucene-solr/pull/86

SOLR-9584 - use relative URL path instead of absolute path starting from 
/solr for angularjs services



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zyjibmcn/lucene-solr master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/86.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #86


commit 19e6f0fed60a71f83d919b8c22ad6ed8ca72958a
Author: Yun Jie Zhou 
Date:   2016-09-30T15:18:48Z

use relative URL path instead of absolute path starting from /solr







[jira] [Commented] (SOLR-9574) factor out AbstractReRankQuery class

2016-09-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15536246#comment-15536246
 ] 

ASF subversion and git services commented on SOLR-9574:
---

Commit 031de301c211164572bd52925c668cfed01927aa in lucene-solr's branch 
refs/heads/branch_6x from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=031de30 ]

SOLR-9574: Factor out AbstractReRankQuery from ReRankQParserPlugin's private 
ReRankQuery.


> factor out AbstractReRankQuery class
> 
>
> Key: SOLR-9574
> URL: https://issues.apache.org/jira/browse/SOLR-9574
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-9574.patch
>
>
> Motivation is to avoid unnecessary code duplication between 
> ReRankQParserPlugin and the SOLR-8542 plugin.






[jira] [Created] (SOLR-9584) The absolute URL path in server/solr-webapp/webapp/js/angular/services.js would make context customization not work

2016-09-30 Thread Yun Jie Zhou (JIRA)
Yun Jie Zhou created SOLR-9584:
--

 Summary: The absolute URL path in 
server/solr-webapp/webapp/js/angular/services.js would make context 
customization not work
 Key: SOLR-9584
 URL: https://issues.apache.org/jira/browse/SOLR-9584
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Server
Affects Versions: 6.2
Reporter: Yun Jie Zhou
Priority: Minor


The absolute path starting with /solr in 
server/solr-webapp/webapp/js/angular/services.js breaks context-path 
customization.

For example, we should use $resource('admin/info/system', {"wt":"json", 
"_":Date.now()}); instead of $resource('/solr/admin/info/system', {"wt":"json", 
"_":Date.now()});






JSON Facet "allBuckets" behavior

2016-09-30 Thread Karthik Ramachandran
While performing JSON faceting with "allBuckets" and "mincount", I am not sure 
whether I am expecting the wrong result or there is a bug.

By definition, the "allBuckets" bucket in the response represents the union of 
all of the buckets.

Schema fields:
  id, filename, size

Dataset (id, filename, size):
  1,  filename1, 1
  2,  filename2, 1
  3,  filename3, 1
  4,  filename4, 1
  5,  filename5, 1
  6,  filename1, 1
  7,  filename2, 1
  8,  filename3, 1
  9,  filename4, 1
  10, filename1, 1
  11, filename2, 1
  12, filename3, 1
  13, filename1, 1
  14, filename2, 1
  15, filename1, 1
For my dataset, with request
http://localhost:8983/solr/jasonfacettest/select/?q=*:*&rows=0&json.facet=
  {"sumOfDuplicates":{"type":"terms","field":"filename","mincount":2,
   "numBuckets":true,"allBuckets":true,"sort":"sum desc",
   "facet":{"sum":"sum(size)"}}}

below is the response,
"response":{"numFound":15,"start":0,"docs":[]},"facets":{"count":15,"sumOfDuplicates":{"numBuckets":4,"allBuckets":{"count":15,"sum":15.0},"buckets":[{"val":"filename1","count":5,"sum":5.0},{"val":"filename2","count":4,"sum":4.0},{"val":"filename3","count":3,"sum":3.0},{"val":"filename4","count":2,"sum":2.0}]}}}

I was wondering why the result is not the following, since with "mincount":2 
the union of the qualifying buckets is 5+4+3+2 = 14 docs (sum 14.0), with 
filename5 (count 1) excluded:
"response":{"numFound":15,"start":0,"docs":[]},"facets":{"count":15,"sumOfDuplicates":{"numBuckets":4,"allBuckets":{"count":14,"sum":14.0},"buckets":[{"val":"filename1","count":5,"sum":5.0},{"val":"filename2","count":4,"sum":4.0},{"val":"filename3","count":3,"sum":3.0},{"val":"filename4","count":2,"sum":2.0}]}}}

Thanks for the help!!

With Thanks & Regards
Karthik Ramachandran



[jira] [Updated] (LUCENE-7467) Caused by: java.lang.IllegalArgumentException: position increments (and gaps) must be >= 0 (got 65248) for field 'tf_attachments_field_library_attachments'

2016-09-30 Thread adeppa (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

adeppa updated LUCENE-7467:
---
Attachment: AutoPhrasingTokenFilterFactory.java
AutoPhrasingTokenFilter.java

> Caused by: java.lang.IllegalArgumentException: position increments (and gaps) 
> must be >= 0 (got 65248) for field 'tf_attachments_field_library_attachments'
> ---
>
> Key: LUCENE-7467
> URL: https://issues.apache.org/jira/browse/LUCENE-7467
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/codecs, core/other, core/store, modules/analysis
>Affects Versions: 5.1
> Environment: Mac 
>Reporter: adeppa
> Attachments: AutoPhrasingTokenFilter.java, 
> AutoPhrasingTokenFilterFactory.java
>
>
> I was trying to index large files such as PDF, PPT, PPTX, XLSX and XLS.
> The actual token count is 65248, but this error is thrown from the 
> DefaultIndexingChain class while executing public void 
> invert(IndexableField field, boolean first) throws IOException, 
> AbortingException. Inside this method the condition 
> *if (invertState.position < invertState.lastPosition)* is checked; here 
> invertState.position becomes negative when it exceeds Integer.MAX_VALUE.
> org.apache.solr.common.SolrException: Exception writing document id 
> dc65t0-marketing_site-141457 to the index; possible analysis error.
>   at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:167)
>   at 
> org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:955)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1110)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:706)
>   at 
> org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:104)
>   at 
> org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:250)
>   at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:177)
>   at 
> org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:98)
>   at 
> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2068)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:669)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:462)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:210)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
>   at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:212)
>   at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:106)
>   at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:141)
>   at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79)
>   at 
> org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:616)
>   at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:88)
>   at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:528)
>   at 
> org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1099)
>   at 
> org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:672)
>   at 
> org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1520)
>   at 
> org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1476)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at 
> org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: 
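
One defensive option in the custom filter chain, sketched below with a 
hypothetical class (it guards against bad increments reaching the indexing 
chain, but does not fix the root cause in the third-party filter itself):

{code}
import java.io.IOException;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;

public final class ClampPositionIncrementFilter extends TokenFilter {
  private final PositionIncrementAttribute posIncAtt =
      addAttribute(PositionIncrementAttribute.class);

  public ClampPositionIncrementFilter(TokenStream in) {
    super(in);
  }

  @Override
  public boolean incrementToken() throws IOException {
    if (!input.incrementToken()) {
      return false;
    }
    if (posIncAtt.getPositionIncrement() < 0) {
      posIncAtt.setPositionIncrement(0); // never pass a negative increment on
    }
    return true;
  }
}
{code}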

[jira] [Updated] (LUCENE-7467) Caused by: java.lang.IllegalArgumentException: position increments (and gaps) must be >= 0 (got 65248) for field 'tf_attachments_field_library_attachments'

2016-09-30 Thread adeppa (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

adeppa updated LUCENE-7467:
---
Attachment: (was: AutoPhrasingTokenFilter.java)

> Caused by: java.lang.IllegalArgumentException: position increments (and gaps) 
> must be >= 0 (got 65248) for field 'tf_attachments_field_library_attachments'
> ---
>
> Key: LUCENE-7467
> URL: https://issues.apache.org/jira/browse/LUCENE-7467
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/codecs, core/other, core/store, modules/analysis
>Affects Versions: 5.1
> Environment: Mac 
>Reporter: adeppa
> Attachments: AutoPhrasingTokenFilter.java, 
> AutoPhrasingTokenFilterFactory.java

[jira] [Updated] (LUCENE-7467) Caused by: java.lang.IllegalArgumentException: position increments (and gaps) must be >= 0 (got 65248) for field 'tf_attachments_field_library_attachments'

2016-09-30 Thread adeppa (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

adeppa updated LUCENE-7467:
---
Attachment: (was: AutoPhrasingTokenFilterFactory.java)

> Caused by: java.lang.IllegalArgumentException: position increments (and gaps) 
> must be >= 0 (got 65248) for field 'tf_attachments_field_library_attachments'
> ---
>
> Key: LUCENE-7467
> URL: https://issues.apache.org/jira/browse/LUCENE-7467
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/codecs, core/other, core/store, modules/analysis
>Affects Versions: 5.1
> Environment: Mac 
>Reporter: adeppa
> Attachments: AutoPhrasingTokenFilter.java, 
> AutoPhrasingTokenFilterFactory.java

[jira] [Updated] (LUCENE-7467) Caused by: java.lang.IllegalArgumentException: position increments (and gaps) must be >= 0 (got 65248) for field 'tf_attachments_field_library_attachments'

2016-09-30 Thread adeppa (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

adeppa updated LUCENE-7467:
---
Attachment: AutoPhrasingTokenFilterFactory.java
AutoPhrasingTokenFilter.java

> Caused by: java.lang.IllegalArgumentException: position increments (and gaps) 
> must be >= 0 (got 65248) for field 'tf_attachments_field_library_attachments'
> ---
>
> Key: LUCENE-7467
> URL: https://issues.apache.org/jira/browse/LUCENE-7467
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/codecs, core/other, core/store, modules/analysis
>Affects Versions: 5.1
> Environment: Mac 
>Reporter: adeppa
> Attachments: AutoPhrasingTokenFilter.java, 
> AutoPhrasingTokenFilterFactory.java

[jira] [Resolved] (SOLR-8811) Deleting inactive replicas doesn't work in non-legacy cloud mode

2016-09-30 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward resolved SOLR-8811.
-
Resolution: Won't Fix

I think I just misunderstood what this test was supposed to be doing.  
DeleteInactiveReplicaTest has been converted to SolrCloudTestCase in SOLR-9132.

> Deleting inactive replicas doesn't work in non-legacy cloud mode
> 
>
> Key: SOLR-8811
> URL: https://issues.apache.org/jira/browse/SOLR-8811
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.5, 6.0
>Reporter: Alan Woodward
>Priority: Minor
> Attachments: SOLR-8811.patch
>
>
> I've been trying to cut some tests over to use SolrCloudTestCase, and have 
> found that DeleteInactiveReplicaTest won't work in non-legacy mode.  This is 
> because the "don't start up if you've been unregistered" logic relies on a 
> coreNodeName not being present, but if the collection has been created via 
> the Collections API then coreNodeName is always set.






[jira] [Updated] (SOLR-9132) Cut over AbstractDistribZkTestCase tests to SolrCloudTestCase

2016-09-30 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated SOLR-9132:

Attachment: SOLR-9132-rules.patch

Next batch, cutting over the following tests:
* AliasIntegrationTest
* AsyncCallRequestStatusResponseTest
* CollectionReloadTest
* CollectionStateFormat2Test
* MigrateRouteKeyTest
* AsyncMigrateRouteKeyTest (folded into the above)
* RulesTest

It also:
* adds static methods to CollectionAdminRequest to help create collections 
using the default config, and collections using the implicit router
* adds getCoreStartTime to the CoreStatus object
* adds a method to MiniSolrCloudCluster to expire the ZK connection to a 
specific jetty
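
For reference, a minimal sketch of the SolrCloudTestCase pattern these 
cut-overs target (collection and config names are placeholders):

{code}
import org.apache.solr.client.solrj.request.CollectionAdminRequest;
import org.apache.solr.cloud.SolrCloudTestCase;
import org.junit.BeforeClass;
import org.junit.Test;

public class MyMigratedTest extends SolrCloudTestCase {

  @BeforeClass
  public static void setupCluster() throws Exception {
    // Start a 2-node MiniSolrCloudCluster with a named configset.
    configureCluster(2)
        .addConfig("conf", configset("cloud-minimal"))
        .configure();
    CollectionAdminRequest.createCollection("test", "conf", 2, 1)
        .process(cluster.getSolrClient());
  }

  @Test
  public void testSomething() throws Exception {
    // Assertions against cluster.getSolrClient() go here.
  }
}
{code}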

> Cut over AbstractDistribZkTestCase tests to SolrCloudTestCase
> -
>
> Key: SOLR-9132
> URL: https://issues.apache.org/jira/browse/SOLR-9132
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: SOLR-9132-deletereplicas.patch, SOLR-9132-rules.patch, 
> SOLR-9132.patch
>
>
> We need to remove AbstractDistribZkTestCase if we want to move away from 
> legacy cloud configurations.  This issue is for migrating tests to 
> SolrCloudTestCase instead.






[jira] [Commented] (SOLR-8495) Schemaless mode cannot index large text fields

2016-09-30 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15536180#comment-15536180
 ] 

Cao Manh Dat commented on SOLR-8495:


Ok, so we will wait for SOLR-9526 to get committed before continuing work on 
this issue.

> Schemaless mode cannot index large text fields
> --
>
> Key: SOLR-8495
> URL: https://issues.apache.org/jira/browse/SOLR-8495
> Project: Solr
>  Issue Type: Bug
>  Components: Data-driven Schema, Schema and Analysis
>Affects Versions: 4.10.4, 5.3.1, 5.4
>Reporter: Shalin Shekhar Mangar
>  Labels: difficulty-easy, impact-medium
> Fix For: 5.5, 6.0
>
> Attachments: SOLR-8495.patch
>
>
> The schemaless mode by default indexes all string fields into an indexed 
> StrField, which is limited to 32KB of text. Anything larger than that leads 
> to an exception during analysis.
> {code}
> Caused by: java.lang.IllegalArgumentException: Document contains at least one 
> immense term in field="text" (whose UTF8 encoding is longer than the max 
> length 32766)
> {code}
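
For illustration only (not the patch): the limit involved is 
IndexWriter.MAX_TERM_LENGTH, 32766 UTF-8 bytes for a single indexed term, 
which is exactly what the exception reports.

{code}
import java.nio.charset.StandardCharsets;

public class TermLengthCheck {
  // Maximum UTF-8 byte length of a single indexed term (IndexWriter.MAX_TERM_LENGTH).
  static final int MAX_TERM_BYTES = 32766;

  // True if the value fits in one indexed StrField term.
  static boolean fitsInSingleTerm(String value) {
    return value.getBytes(StandardCharsets.UTF_8).length <= MAX_TERM_BYTES;
  }
}
{code}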






[jira] [Resolved] (SOLR-5041) Add a test to make sure that a leader always recovers from log on startup

2016-09-30 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-5041.
-
   Resolution: Fixed
Fix Version/s: (was: 6.0)
   (was: 4.9)
   master (7.0)
   6.3

Thanks Dat!

> Add a test to make sure that a leader always recovers from log on startup
> -
>
> Key: SOLR-5041
> URL: https://issues.apache.org/jira/browse/SOLR-5041
> Project: Solr
>  Issue Type: Test
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-5041.patch, SOLR-5401.patch
>
>
> From my comment on SOLR-4997:
> bq. I fixed a bug that I had introduced which skipped log recovery on startup 
> for all leaders instead of only sub shard leaders. I caught this only because 
> I was doing another line-by-line review of all my changes. We should have a 
> test which catches such a condition.
> Add a test which tests that leaders always recover from log on startup.






[jira] [Commented] (SOLR-5041) Add a test to make sure that a leader always recovers from log on startup

2016-09-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15536143#comment-15536143
 ] 

ASF subversion and git services commented on SOLR-5041:
---

Commit f8a4ccf97bea23446e3d323a4698b097aeff0068 in lucene-solr's branch 
refs/heads/branch_6x from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f8a4ccf ]

SOLR-5041: Add a test to make sure that a leader always recovers from log on 
startup

(cherry picked from commit 7a8ff69)


> Add a test to make sure that a leader always recovers from log on startup
> -
>
> Key: SOLR-5041
> URL: https://issues.apache.org/jira/browse/SOLR-5041
> Project: Solr
>  Issue Type: Test
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 4.9, 6.0
>
> Attachments: SOLR-5041.patch, SOLR-5401.patch
>
>
> From my comment on SOLR-4997:
> bq. I fixed a bug that I had introduced which skipped log recovery on startup 
> for all leaders instead of only sub shard leaders. I caught this only because 
> I was doing another line-by-line review of all my changes. We should have a 
> test which catches such a condition.
> Add a test which tests that leaders always recover from log on startup.






[jira] [Commented] (SOLR-5041) Add a test to make sure that a leader always recovers from log on startup

2016-09-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15536139#comment-15536139
 ] 

ASF subversion and git services commented on SOLR-5041:
---

Commit 7a8ff69316809231e20883d5d45376bafb8f1262 in lucene-solr's branch 
refs/heads/master from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7a8ff69 ]

SOLR-5041: Add a test to make sure that a leader always recovers from log on 
startup


> Add a test to make sure that a leader always recovers from log on startup
> -
>
> Key: SOLR-5041
> URL: https://issues.apache.org/jira/browse/SOLR-5041
> Project: Solr
>  Issue Type: Test
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 4.9, 6.0
>
> Attachments: SOLR-5041.patch, SOLR-5401.patch
>
>
> From my comment on SOLR-4997:
> bq. I fixed a bug that I had introduced which skipped log recovery on startup 
> for all leaders instead of only sub shard leaders. I caught this only because 
> I was doing another line-by-line review of all my changes. We should have a 
> test which catches such a condition.
> Add a test which tests that leaders always recover from log on startup.






[jira] [Commented] (SOLR-9574) factor out AbstractReRankQuery class

2016-09-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15536097#comment-15536097
 ] 

ASF subversion and git services commented on SOLR-9574:
---

Commit dbc29c0adc232636d442c6726ae27f07bdbf75e3 in lucene-solr's branch 
refs/heads/master from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=dbc29c0 ]

SOLR-9574: Factor out AbstractReRankQuery from ReRankQParserPlugin's private 
ReRankQuery.


> factor out AbstractReRankQuery class
> 
>
> Key: SOLR-9574
> URL: https://issues.apache.org/jira/browse/SOLR-9574
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-9574.patch
>
>
> Motivation is to avoid unnecessary code duplication between 
> ReRankQParserPlugin and the SOLR-8542 plugin.






[jira] [Commented] (LUCENE-7467) Caused by: java.lang.IllegalArgumentException: position increments (and gaps) must be >= 0 (got 65248) for field 'tf_attachments_field_library_attachments'

2016-09-30 Thread adeppa (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15536039#comment-15536039
 ] 

adeppa commented on LUCENE-7467:


Thanks for the reply.
I am currently using the AutoPhrasingTokenFilter from a third party, i.e. 
Lucidworks; more details are at 
http://lucidworks.com/blog/2014/07/12/solution-for-multi-term-synonyms-in-lucenesolr-using-the-auto-phrasing-tokenfilter/
and the source code is at 
https://github.com/LucidWorks/auto-phrase-tokenfilter
Could you please help me with the respective change for this?

> Caused by: java.lang.IllegalArgumentException: position increments (and gaps) 
> must be >= 0 (got 65248) for field 'tf_attachments_field_library_attachments'
> ---
>
> Key: LUCENE-7467
> URL: https://issues.apache.org/jira/browse/LUCENE-7467
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/codecs, core/other, core/store, modules/analysis
>Affects Versions: 5.1
> Environment: Mac 
>Reporter: adeppa

[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3574 - Unstable!

2016-09-30 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3574/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication

Error Message:
[/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/build/solr-core/test/J1/temp/solr.handler.TestReplicationHandler_FB653A1B3398FD65-001/solr-instance-020/./collection1/data/index.20160930054028125,
 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/build/solr-core/test/J1/temp/solr.handler.TestReplicationHandler_FB653A1B3398FD65-001/solr-instance-020/./collection1/data,
 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/build/solr-core/test/J1/temp/solr.handler.TestReplicationHandler_FB653A1B3398FD65-001/solr-instance-020/./collection1/data/index.20160930054028033,
 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/build/solr-core/test/J1/temp/solr.handler.TestReplicationHandler_FB653A1B3398FD65-001/solr-instance-020/./collection1/data/snapshot_metadata]
 expected:<3> but was:<4>

Stack Trace:
java.lang.AssertionError: 
[/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/build/solr-core/test/J1/temp/solr.handler.TestReplicationHandler_FB653A1B3398FD65-001/solr-instance-020/./collection1/data/index.20160930054028125,
 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/build/solr-core/test/J1/temp/solr.handler.TestReplicationHandler_FB653A1B3398FD65-001/solr-instance-020/./collection1/data,
 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/build/solr-core/test/J1/temp/solr.handler.TestReplicationHandler_FB653A1B3398FD65-001/solr-instance-020/./collection1/data/index.20160930054028033,
 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/build/solr-core/test/J1/temp/solr.handler.TestReplicationHandler_FB653A1B3398FD65-001/solr-instance-020/./collection1/data/snapshot_metadata]
 expected:<3> but was:<4>
at 
__randomizedtesting.SeedInfo.seed([FB653A1B3398FD65:C16D443F5705283]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:902)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication(TestReplicationHandler.java:1334)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 

[jira] [Reopened] (SOLR-2852) SolrJ doesn't need woodstox jar

2016-09-30 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley reopened SOLR-2852:


I think the time for this has come.  What's changed is increased usage of 
Solr's "javabin" format -- it's the default, not XML.  Let's reduce the 
dependencies users _think_ they need (they don't *actually* need Woodstox, but 
they don't even know that).

> SolrJ doesn't need woodstox jar
> ---
>
> Key: SOLR-2852
> URL: https://issues.apache.org/jira/browse/SOLR-2852
> Project: Solr
>  Issue Type: Improvement
>  Components: clients - java
>Reporter: David Smiley
>Priority: Trivial
>
> The /dist/solrj-lib/ directory contains wstx-asl-3.2.7.jar (Woodstox StAX 
> API).  SolrJ doesn't actually have any type of dependency on this library. 
> The maven build doesn't have it as a dependency and the tests pass.  Perhaps 
> Woodstox is faster than the JDK's StAX, I don't know, but I find that point 
> quite moot since SolrJ can use the efficient binary format.  Woodstox is not 
> a small library either, weighing in at 524KB, and of course if someone 
> actually wants to use it, they can.
> I propose Woodstox be removed as a SolrJ dependency.  I am *not* proposing it 
> be removed as a Solr WAR dependency, since it is actually required there due 
> to an obscure XSLT issue.






[jira] [Closed] (SOLR-9470) Deadlocked threads in recovery

2016-09-30 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden closed SOLR-9470.
--
Resolution: Duplicate

Closing as duplicate of SOLR-9278

> Deadlocked threads in recovery
> --
>
> Key: SOLR-9470
> URL: https://issues.apache.org/jira/browse/SOLR-9470
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.2
>Reporter: Michael Braun
> Attachments: solr-deadlock-2-r.txt, solr-deadlock.txt
>
>
> Background: Booted up a cluster and replicas were in recovery. All replicas 
> recovered minus one, and it was hanging on HTTP requests. Issued shutdown and 
> solr would not shut down. Examined with JStack and found a deadlock had 
> occurred. The relevant thread information is attached. Some information has 
> been redacted as well (some custom URPs, IPs) from the stack traces.






[jira] [Commented] (SOLR-5064) Update <dependency ... transitive="false"/> to a more recent version.

2016-09-30 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15535999#comment-15535999
 ] 

Varun Thacker commented on SOLR-5064:
-

Some more discussion from the past : 
http://mail-archives.apache.org/mod_mbox/lucene-dev/201508.mbox/%3ccabewpvgruhxamobpn1bvvkdbkd4-2v593r127rvcwqasxuw...@mail.gmail.com%3E

Let's just remove it then?

> Update <dependency ... transitive="false"/> to a more recent version.
> -
>
> Key: SOLR-5064
> URL: https://issues.apache.org/jira/browse/SOLR-5064
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Priority: Minor
>
> @whoschek mentioned to me earlier that we were using a fairly old version.






[jira] [Commented] (SOLR-8706) Add SMSStream to support SMS messaging

2016-09-30 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15536001#comment-15536001
 ] 

David Smiley commented on SOLR-8706:


-0
IMO SMS handling is a bit too specialized to be in Solr core.

> Add SMSStream to support SMS messaging
> --
>
> Key: SOLR-8706
> URL: https://issues.apache.org/jira/browse/SOLR-8706
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>
> With the TopicStream (SOLR-8588) wrapping up, it makes sense to implement an 
> SMS Stream that wraps the TopicStream and pushes SMS messages. 
> Research needs to be done to determine the right SMS library. More details to 
> follow on syntax as this ticket develops.






[jira] [Commented] (SOLR-5064) Update <dependency ... transitive="false"/> to a more recent version.

2016-09-30 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15535985#comment-15535985
 ] 

David Smiley commented on SOLR-5064:


SOLR-2852  and some old Java 1.6 slow performance noted here: 
http://www.mail-archive.com/users@cxf.apache.org/msg12750.html

> Update <dependency ... transitive="false"/> to a more recent version.
> -
>
> Key: SOLR-5064
> URL: https://issues.apache.org/jira/browse/SOLR-5064
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Priority: Minor
>
> @whoschek mentioned to me earlier that we were using a fairly old version.






[jira] [Commented] (SOLR-5064) Update <dependency ... transitive="false"/> to a more recent version.

2016-09-30 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15535974#comment-15535974
 ] 

David Smiley commented on SOLR-5064:


BTW, just wanted to mention that the last time I looked, Woodstox is *not* 
actually explicitly used by SolrJ; it is used indirectly simply by being 
on the classpath.  But Java comes with implementations of StAX, so Woodstox is 
not actually needed.  Once upon a time, back in Java 1.6 (or maybe it was 
1.5), somebody here saw that Woodstox was a lot faster than Java's impl, and so 
it was included.  I suspect Java's impl has improved a lot since then.  I think 
it's very dubious to include this dependency in SolrJ, particularly with our 
Javabin default codec.  SolrJ should have minimal dependencies, because every 
Solr client out there has to use it and integrate with *their* dependencies.

> Update <dependency ... transitive="false"/> to a more recent version.
> -
>
> Key: SOLR-5064
> URL: https://issues.apache.org/jira/browse/SOLR-5064
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Priority: Minor
>
> @whoschek mentioned to me earlier that we were using a fairly old version.






[jira] [Updated] (SOLR-9583) When the same <uniqueKey> exists across multiple collections that are searched with an alias, the document returned in the results list is indeterminate

2016-09-30 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-9583:
---
Issue Type: Wish  (was: Bug)

This isn't a bug -- it's asking for trouble.  Solr isn't supposed to be used 
this way.  So I changed this to a "wish", in case some day we actually make 
this work.

> When the same <uniqueKey> exists across multiple collections that are 
> searched with an alias, the document returned in the results list is 
> indeterminate
> 
>
> Key: SOLR-9583
> URL: https://issues.apache.org/jira/browse/SOLR-9583
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>
> Not quite sure whether to call this a bug or improvement...
> Currently if I have two collections C1 and C2 and an alias that points to 
> both _and_ I have a document in both collections with the _same_ <uniqueKey>, 
> the returned list sometimes has the doc from C1 and sometimes from C2.
> If I add shards.info=true I see the document found in each collection, but 
> only one in the document list. Which one changes if I re-submit the identical 
> query.
> This seems incorrect, perhaps a side effect of piggy-backing the collection 
> aliasing on searching multiple shards? (Thanks Shalin for that bit of 
> background).
> I can see both use-cases: 
> 1> aliasing multiple collections validly assumes that <uniqueKey>s should be 
> unique across them all and only one doc should be returned. Even in this case 
> which doc should be returned should be deterministic.
> 2> these are arbitrary collections without any a-priori relationship and 
> identical <uniqueKey>s do NOT identify the "same" document, so both should be 
> returned.
> So I propose we do two things:
> a> provide a param for the CREATEALIAS command that controls whether docs 
> with the same <uniqueKey> from different collections should both be returned. 
> If they both should, there's still the question of in what order.
> b> provide a deterministic way dups from different collections are resolved. 
> What that algorithm is I'm not quite sure. The order the collections were 
> specified in the CREATEALIAS command? Some field in the documents? Other??? 
> What happens if this option is not specified on the CREATEALIAS command?
> Implicit in the above is my assumption that it's perfectly valid to have 
> different aliases in the same cluster behave differently if specified.






[jira] [Commented] (LUCENE-7470) Ensure Lucene sources don't swallow root cause exceptions

2016-09-30 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15535934#comment-15535934
 ] 

David Smiley commented on LUCENE-7470:
--

This is awesome; nice work Rob!  My pet peeve bug pattern is ignoring/masking 
root exceptions -- it happens all too often.  I wish javac had this built in.

If this is added to Lucene, I guess SpecialMethod.java, with its static list of 
known boxing methods etc., would need to load its list from perhaps a simple 
file on the classpath?

I wonder if this might go to some place where it would be used more broadly, 
e.g. FindBugs?  After all... why should only Lucene have all the fun?
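
A tiny example of the pattern such a tool would flag, next to the fix 
(illustrative only; doWork is a placeholder):

{code}
class Example {
  void doWork() throws Exception { /* placeholder */ }

  void bad() {
    try {
      doWork();
    } catch (Exception e) {
      // BAD: the root cause 'e' is swallowed -- exactly what the tool flags.
      throw new RuntimeException("work failed");
    }
  }

  void good() {
    try {
      doWork();
    } catch (Exception e) {
      // GOOD: the root cause travels along as the new exception's cause.
      throw new RuntimeException("work failed", e);
    }
  }
}
{code}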

> Ensure Lucene sources don't swallow root cause exceptions
> -
>
> Key: LUCENE-7470
> URL: https://issues.apache.org/jira/browse/LUCENE-7470
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>
> [~rcmuir] created a nice tool to look at the Java bytecode to determine 
> whether  e.g. a {{finally}} or {{catch}} clause may ignore the original root 
> cause exception, here: 
> https://github.com/rmuir/elasticsearch/tree/catchAnalyzer
> It's a fork of ES but I think maybe we can extract it and use it in Lucene.
> Unlike Python, Java unfortunately does not seem to have safeguards against 
> exception-handling code accidentally losing the original exception.






[jira] [Commented] (SOLR-8146) Allowing SolrJ CloudSolrClient to have preferred replica for query/read

2016-09-30 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15535876#comment-15535876
 ] 

Noble Paul commented on SOLR-8146:
--

[~susheel2...@gmail.com] you can take up my patch and continue with that

> Allowing SolrJ CloudSolrClient to have preferred replica for query/read
> ---
>
> Key: SOLR-8146
> URL: https://issues.apache.org/jira/browse/SOLR-8146
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java
>Affects Versions: 5.3
>Reporter: Arcadius Ahouansou
> Attachments: SOLR-8146.patch, SOLR-8146.patch, SOLR-8146.patch, 
> SOLR-8146.patch
>
>
> h2. Background
> Currently, the CloudSolrClient randomly picks a replica to query.
> This is done by shuffling the list of live URLs to query and then picking the 
> first item from the list.
> This ticket is to allow more flexibility and control, to some extent, over 
> which URLs will be picked for queries.
> Note that this is for queries only and would not affect update/delete/admin 
> operations.
> h2. Implementation
> The current patch uses a regex pattern and moves to the top of the list of 
> URLs only those matching the given regex specified by the system property 
> {code}solr.preferredQueryNodePattern{code}
> Initially, I thought it may be good to have Solr nodes tagged with a string 
> pattern (snitch?) and use that pattern for matching the URLs.
> Any comment, recommendation or feedback would be appreciated.
> h2. Use Cases
> There are many cases where the ability to choose the node where queries go 
> can be very handy:
> h3. Special node for manual user queries and analytics
> One may have a SolrCloud cluster where every node hosts the same set of 
> collections, with:  
> - multiple large SolrCloud nodes (L) used for production apps, and 
> - 1 small node (S) in the same cluster with less RAM/CPU, used only for 
> manual user queries, data export and other production issue investigation.
> This ticket would allow applications using SolrJ to be configured to query 
> only the (L) nodes.
> This use case is similar to the one described in SOLR-5501 raised by [~manuel 
> lenormand]
> h3. Minimizing network traffic
>  
> For simplicity, let's say that we have a SolrCloud cluster deployed on 2 (or 
> N) separate racks: rack1 and rack2.
> On each rack, we have a set of SolrCloud VMs as well as a couple of client 
> VMs querying solr using SolrJ.
> All solr nodes are identical and have the same number of collections.
> What we would like to achieve is:
> - clients on rack1 will by preference query only SolrCloud nodes on rack1, 
> and 
> - clients on rack2 will by preference query only SolrCloud nodes on rack2.
> - Cross-rack read will happen if and only if one of the racks has no 
> available Solr node to serve a request.
> In other words, we want read operations to be local to a rack whenever 
> possible.
> Note that write/update/delete/admin operations should not be affected.
> Note that in our use case, we have a cross-DC deployment, so replace 
> rack1/rack2 with DC1/DC2.
> Any comment would be very appreciated.
> Thanks.
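A minimal sketch of the reordering described under h2. Implementation above: matches of the {code}solr.preferredQueryNodePattern{code} system property are moved to the front of the live-URL list. The class and method names are illustrative, not taken from the patch:

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

public class PreferredNodeSorter {
  static List<String> preferMatching(List<String> liveUrls) {
    String regex = System.getProperty("solr.preferredQueryNodePattern");
    if (regex == null) {
      return liveUrls; // property unset: keep today's shuffled order
    }
    Pattern p = Pattern.compile(regex);
    List<String> preferred = new ArrayList<>();
    List<String> rest = new ArrayList<>();
    for (String url : liveUrls) {
      (p.matcher(url).find() ? preferred : rest).add(url);
    }
    preferred.addAll(rest); // matching URLs first, relative order otherwise preserved
    return preferred;
  }
}
{code}

A stable two-bucket split like this keeps the shuffled order within each group, so load balancing among the preferred nodes is preserved.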



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6688) There should be no error about a non-required file admin-extra.html

2016-09-30 Thread Gethin James (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15535856#comment-15535856
 ] 

Gethin James commented on SOLR-6688:


Are you, at least, going to remove the "admin-extra" div from the screen? It's 
rather misleading if it can't be used.

> There should be no error about a non-required file admin-extra.html
> ---
>
> Key: SOLR-6688
> URL: https://issues.apache.org/jira/browse/SOLR-6688
> Project: Solr
>  Issue Type: Bug
>Reporter: Arkadiusz Robiński
>Priority: Minor
>
> I am using Solr 4.10.1. I have a simple configuration with 2 cores. Every 
> time I open the Solr admin interface, I get the following errors:
> {noformat}
> Can not find: admin-extra.html
> Can not find: admin-extra.menu-top.html
> Can not find: admin-extra.menu-bottom.html
> {noformat}
> As far as I know, the files are optional. Therefore I should not get any 
> error, not even a warning.
> I do not want to create the files because I do not need them. The error 
> should simply not be there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Windows (32bit/jdk1.8.0_102) - Build # 485 - Unstable!

2016-09-30 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/485/
Java: 32bit/jdk1.8.0_102 -server -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.ShardSplitTest.testSplitAfterFailedSplit

Error Message:
expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([A8A44DC887A6E43E:51E9DE67BBD3A9B4]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.ShardSplitTest.testSplitAfterFailedSplit(ShardSplitTest.java:283)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 588 - Still Failing

2016-09-30 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/588/

No tests ran.

Build Log:
[...truncated 40572 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (24.3 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-7.0.0-src.tgz...
   [smoker] 29.9 MB in 0.04 sec (793.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.tgz...
   [smoker] 64.4 MB in 0.17 sec (374.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.zip...
   [smoker] 75.0 MB in 0.09 sec (870.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-7.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6036 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6036 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 214 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (25.2 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-7.0.0-src.tgz...
   [smoker] 39.3 MB in 2.03 sec (19.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.0.0.tgz...
   [smoker] 143.0 MB in 5.99 sec (23.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.0.0.zip...
   [smoker] 152.0 MB in 6.57 sec (23.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-7.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-7.0.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8
   [smoker] Creating Solr home directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8/example/techproducts/solr
   [smoker] 
   [smoker] Starting up Solr on port 8983 using command:
   [smoker] bin/solr start -p 8983 -s "example/techproducts/solr"
   [smoker] 
   [smoker] Waiting up to 30 seconds to see Solr running on port 8983 [|]  
 [/]   [-]   [\]   [|]   [/]   [-]   
[\]   [|]   [/]  
   [smoker] Started 

[jira] [Commented] (SOLR-9577) SolrConfig edit operations should not need to reload core

2016-09-30 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15535596#comment-15535596
 ] 

Noble Paul commented on SOLR-9577:
--

It's always better not to hold references to other components. Ideally, it 
should all be looked up just in time.
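A generic sketch of the "looked up just in time" idea; the Registry and Component types below are hypothetical stand-ins, not Solr classes. The handler stores only the component's name and resolves it per request, so swapping the registered instance takes effect without any reload:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

interface Component {
  void handle(String request);
}

class Registry {
  private final Map<String, Component> components = new ConcurrentHashMap<>();
  void register(String name, Component c) { components.put(name, c); }
  Component lookup(String name) { return components.get(name); }
}

class Handler {
  private final Registry registry;
  private final String componentName; // hold the name, not the instance

  Handler(Registry registry, String componentName) {
    this.registry = registry;
    this.componentName = componentName;
  }

  void handleRequest(String request) {
    // Resolved just in time: a swapped component is picked up on the next request.
    registry.lookup(componentName).handle(request);
  }
}

public class JitLookupDemo {
  public static void main(String[] args) {
    Registry registry = new Registry();
    registry.register("query", req -> System.out.println("v1 handled " + req));
    Handler handler = new Handler(registry, "query");
    handler.handleRequest("q=*:*");  // v1 handled q=*:*
    // A "config change" swaps the component; no reload of Handler is needed.
    registry.register("query", req -> System.out.println("v2 handled " + req));
    handler.handleRequest("q=*:*");  // v2 handled q=*:*
  }
}
{code}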

> SolrConfig edit operations should not need to reload core
> -
>
> Key: SOLR-9577
> URL: https://issues.apache.org/jira/browse/SOLR-9577
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>
> Every single change done to solrconfig.xml/configoverlay.json leads to a core 
> reload. This is very bad for performance. 
> Ideally, if I update/add/delete a component, only that one component needs to 
> get reloaded.
> How to do this?
> Every component in Solr should be able to implement an interface: 
> {code:java}
> interface Reloadable {
> /** When the configuration of this component is changed the core invokes this 
> method, with the new configuration
> */
> void reload(PluginInfo info);
> /** After a reload() is called on any component in that core , this is invoked
> */
> default void postConfigChange(SolrCore core){}
> }
> {code}
> If the component implements this interface, any change to its configuration 
> will result in a callback to this method.
> If the component does not implement this interface, we should unload the 
> component and call any close hooks registered from the inform() method. To 
> make this work, we will have to disable registering close hooks from anywhere 
> else. After unloading the component, a new one is created with the new 
> configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9205) Parse schema in LukeResponse

2016-09-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15535589#comment-15535589
 ] 

ASF subversion and git services commented on SOLR-9205:
---

Commit f13b727213e1e843c1a8d0dc2f3930d80f23b11f in lucene-solr's branch 
refs/heads/master from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f13b727 ]

SOLR-9205: Parse schema in LukeResponse


> Parse schema in LukeResponse
> 
>
> Key: SOLR-9205
> URL: https://issues.apache.org/jira/browse/SOLR-9205
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Reporter: Fengtan
>Priority: Minor
> Fix For: 6.3
>
> Attachments: SOLR-9205.patch, SOLR-9205.patch, SOLR-9205.patch
>
>
> LukeRequestHandler (/admin/luke) lists schema flags using two fields named 
> "schema" and "flags".
> For instance on my local machine 
> http://localhost:8983/solr/collection1/admin/luke returns something like this:
> {code:xml}
> <lst name="id">
>   <str name="type">string</str>
>   <str name="schema">I-S-OF-l</str>
> </lst>
> {code}
> And http://localhost:8983/solr/collection1/admin/luke?show=schema returns 
> something like this:
> {code:xml}
> <lst name="id">
>   <str name="type">string</str>
>   <str name="flags">I-S-OF-l</str>
> </lst>
> {code}
> However, when processing a LukeRequest in SolrJ, only the "flags" field is 
> parsed into a Set of FieldFlag objects. The "schema" field is left as a 
> String, and as a result is hard to process by client applications that do not 
> know how to parse "I-S-OF-l".
> Here is an example that illustrates the problem:
> {code}
> public class MyClass {
>   public static void main(String[] args) throws Exception {
> SolrClient client = new 
> HttpSolrClient("http://localhost:8983/solr/collection1");
> LukeRequest request = new LukeRequest();
> LukeResponse response = request.process(client);
> for (LukeResponse.FieldInfo field:response.getFieldInfo().values()) {
>   System.out.println(field.getSchema());
>   // field.getSchema() returns "I-S-OF--" (i.e. a String) which 
> is not very meaningful for SolrJ applications
>   // Ideally field.getSchema() should return something like "[INDEXED, 
> STORED, OMIT_NORMS, OMIT_TF]" (i.e. an EnumSet<FieldFlag>) which is meaningful 
> for SolrJ applications
> }
>   }
> }
> {code}
> It is probably fine to parse both fields the same way in SolrJ since 
> LukeRequestHandler populates them [the same 
> way|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/handler/admin/LukeRequestHandler.java#L288].
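A self-contained sketch of what expanding the schema string could look like. The enum and its abbreviation table below are illustrative; the real abbreviations are defined by org.apache.solr.common.luke.FieldFlag:

{code:java}
import java.util.EnumSet;

public class FlagParser {
  // Illustrative subset of flags; '-' in the flag string marks an unset position.
  enum Flag {
    INDEXED('I'), STORED('S'), OMIT_NORMS('O'), OMIT_TF('F'), LAZY('l');
    final char abbrev;
    Flag(char abbrev) { this.abbrev = abbrev; }
  }

  static EnumSet<Flag> parse(String flagStr) {
    EnumSet<Flag> result = EnumSet.noneOf(Flag.class);
    for (Flag f : Flag.values()) {
      if (flagStr.indexOf(f.abbrev) >= 0) {
        result.add(f);
      }
    }
    return result;
  }

  public static void main(String[] args) {
    // Prints [INDEXED, STORED, OMIT_NORMS, OMIT_TF, LAZY]
    System.out.println(parse("I-S-OF-l"));
  }
}
{code}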



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-9205) Parse schema in LukeResponse

2016-09-30 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward resolved SOLR-9205.
-
   Resolution: Fixed
 Assignee: Alan Woodward
Fix Version/s: 6.3

Thanks Fengtan!

> Parse schema in LukeResponse
> 
>
> Key: SOLR-9205
> URL: https://issues.apache.org/jira/browse/SOLR-9205
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Reporter: Fengtan
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: 6.3
>
> Attachments: SOLR-9205.patch, SOLR-9205.patch, SOLR-9205.patch
>
>
> LukeRequestHandler (/admin/luke) lists schema flags using two fields named 
> "schema" and "flags".
> For instance on my local machine 
> http://localhost:8983/solr/collection1/admin/luke returns something like this:
> {code:xml}
> <lst name="id">
>   <str name="type">string</str>
>   <str name="schema">I-S-OF-l</str>
> </lst>
> {code}
> And http://localhost:8983/solr/collection1/admin/luke?show=schema returns 
> something like this:
> {code:xml}
> <lst name="id">
>   <str name="type">string</str>
>   <str name="flags">I-S-OF-l</str>
> </lst>
> {code}
> However, when processing a LukeRequest in SolrJ, only the "flags" field is 
> parsed into a Set of FieldFlag objects. The "schema" field is left as a 
> String, and as a result is hard to process by client applications that do not 
> know how to parse "I-S-OF-l".
> Here is an example that illustrates the problem:
> {code}
> public class MyClass {
>   public static void main(String[] args) throws Exception {
> SolrClient client = new 
> HttpSolrClient("http://localhost:8983/solr/collection1");
> LukeRequest request = new LukeRequest();
> LukeResponse response = request.process(client);
> for (LukeResponse.FieldInfo field:response.getFieldInfo().values()) {
>   System.out.println(field.getSchema());
>   // field.getSchema() returns "I-S-OF--" (i.e. a String) which 
> is not very meaningful for SolrJ applications
>   // Ideally field.getSchema() should return something like "[INDEXED, 
> STORED, OMIT_NORMS, OMIT_TF]" (i.e. an EnumSet<FieldFlag>) which is meaningful 
> for SolrJ applications
> }
>   }
> }
> {code}
> It is probably fine to parse both fields the same way in SolrJ since 
> LukeRequestHandler populates them [the same 
> way|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/handler/admin/LukeRequestHandler.java#L288].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9205) Parse schema in LukeResponse

2016-09-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15535588#comment-15535588
 ] 

ASF subversion and git services commented on SOLR-9205:
---

Commit 45e2e25233e8a83aaacfb696c2576cd2bf2eb28f in lucene-solr's branch 
refs/heads/branch_6x from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=45e2e25 ]

SOLR-9205: Parse schema in LukeResponse


> Parse schema in LukeResponse
> 
>
> Key: SOLR-9205
> URL: https://issues.apache.org/jira/browse/SOLR-9205
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Reporter: Fengtan
>Priority: Minor
> Fix For: 6.3
>
> Attachments: SOLR-9205.patch, SOLR-9205.patch, SOLR-9205.patch
>
>
> LukeRequestHandler (/admin/luke) lists schema flags using two fields named 
> "schema" and "flags".
> For instance on my local machine 
> http://localhost:8983/solr/collection1/admin/luke returns something like this:
> {code:xml}
> <lst name="id">
>   <str name="type">string</str>
>   <str name="schema">I-S-OF-l</str>
> </lst>
> {code}
> And http://localhost:8983/solr/collection1/admin/luke?show=schema returns 
> something like this:
> {code:xml}
> <lst name="id">
>   <str name="type">string</str>
>   <str name="flags">I-S-OF-l</str>
> </lst>
> {code}
> However, when processing a LukeRequest in SolrJ, only the "flags" field is 
> parsed into a Set of FieldFlag objects. The "schema" field is left as a 
> String, and as a result is hard to process by client applications that do not 
> know how to parse "I-S-OF-l".
> Here is an example that illustrates the problem:
> {code}
> public class MyClass {
>   public static void main(String[] args) throws Exception {
> SolrClient client = new 
> HttpSolrClient("http://localhost:8983/solr/collection1");
> LukeRequest request = new LukeRequest();
> LukeResponse response = request.process(client);
> for (LukeResponse.FieldInfo field:response.getFieldInfo().values()) {
>   System.out.println(field.getSchema());
>   // field.getSchema() returns "I-S-OF--" (i.e. a String) which 
> is not very meaningful for SolrJ applications
>   // Ideally field.getSchema() should return something like "[INDEXED, 
> STORED, OMIT_NORMS, OMIT_TF]" (i.e. an EnumSet<FieldFlag>) which is meaningful 
> for SolrJ applications
> }
>   }
> }
> {code}
> It is probably fine to parse both fields the same way in SolrJ since 
> LukeRequestHandler populates them [the same 
> way|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/handler/admin/LukeRequestHandler.java#L288].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9577) SolrConfig edit operations should not need to reload core

2016-09-30 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15535584#comment-15535584
 ] 

Shalin Shekhar Mangar commented on SOLR-9577:
-

bq. For example, a SearchHandler instance refers to SearchComponents.

Are there other such examples? Should SearchHandler be changed to deal with 
this better? E.g. currently it keeps a list of SearchComponent objects around; 
instead, the actual SearchComponent instance could be looked up from the SolrCore.

{code}
/** After a reload() is called on any component in that core , this is invoked
*/
default void postConfigChange(SolrCore core){}
{code}

Isn't this too coarse? If we do this, maybe we can listen for changes on 
specific named components, or by plugin class name?

> SolrConfig edit operations should not need to reload core
> -
>
> Key: SOLR-9577
> URL: https://issues.apache.org/jira/browse/SOLR-9577
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>
> Every single change done to solrconfig.xml/configoverlay.json leads to a core 
> reload. This is very bad for performance. 
> Ideally, if I update/add/delete a component, only that one component needs to 
> get reloaded.
> How to do this?
> Every component in Solr should be able to implement an interface: 
> {code:java}
> interface Reloadable {
> /** When the configuration of this component is changed the core invokes this 
> method, with the new configuration
> */
> void reload(PluginInfo info);
> /** After a reload() is called on any component in that core , this is invoked
> */
> default void postConfigChange(SolrCore core){}
> }
> {code}
> If the component implements this interface, any change to its configuration 
> will result in a callback to this method.
> If the component does not implement this interface, we should unload the 
> component and call any close hooks registered from the inform() method. To 
> make this work, we will have to disable registering close hooks from anywhere 
> else. After unloading the component, a new one is created with the new 
> configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_102) - Build # 6148 - Unstable!

2016-09-30 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6148/
Java: 64bit/jdk1.8.0_102 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenCancelFail

Error Message:
expected:<200> but was:<404>

Stack Trace:
java.lang.AssertionError: expected:<200> but was:<404>
at 
__randomizedtesting.SeedInfo.seed([8954809E5D3BC698:E1EBB5B48DA1D474]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.cancelDelegationToken(TestSolrCloudWithDelegationTokens.java:137)
at 
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenCancelFail(TestSolrCloudWithDelegationTokens.java:282)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (SOLR-9205) Parse schema in LukeResponse

2016-09-30 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated SOLR-9205:

Attachment: SOLR-9205.patch

Slightly amended patch - I decided not to deprecate the old method, as it's 
still useful, and I folded the test into an existing LukeResponse test.  Will 
commit shortly.

> Parse schema in LukeResponse
> 
>
> Key: SOLR-9205
> URL: https://issues.apache.org/jira/browse/SOLR-9205
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Reporter: Fengtan
>Priority: Minor
> Attachments: SOLR-9205.patch, SOLR-9205.patch, SOLR-9205.patch
>
>
> LukeRequestHandler (/admin/luke) lists schema flags using two fields named 
> "schema" and "flags".
> For instance on my local machine 
> http://localhost:8983/solr/collection1/admin/luke returns something like this:
> {code:xml}
> <lst name="id">
>   <str name="type">string</str>
>   <str name="schema">I-S-OF-l</str>
> </lst>
> {code}
> And http://localhost:8983/solr/collection1/admin/luke?show=schema returns 
> something like this:
> {code:xml}
> <lst name="id">
>   <str name="type">string</str>
>   <str name="flags">I-S-OF-l</str>
> </lst>
> {code}
> However, when processing a LukeRequest in SolrJ, only the "flags" field is 
> parsed into a Set of FieldFlag objects. The "schema" field is left as a 
> String, and as a result is hard to process by client applications that do not 
> know how to parse "I-S-OF-l".
> Here is an example that illustrates the problem:
> {code}
> public class MyClass {
>   public static void main(String[] args) throws Exception {
> SolrClient client = new 
> HttpSolrClient("http://localhost:8983/solr/collection1");
> LukeRequest request = new LukeRequest();
> LukeResponse response = request.process(client);
> for (LukeResponse.FieldInfo field:response.getFieldInfo().values()) {
>   System.out.println(field.getSchema());
>   // field.getSchema() returns "I-S-OF--" (i.e. a String) which 
> is not very meaningful for SolrJ applications
>   // Ideally field.getSchema() should return something like "[INDEXED, 
> STORED, OMIT_NORMS, OMIT_TF]" (i.e. an EnumSet<FieldFlag>) which is meaningful 
> for SolrJ applications
> }
>   }
> }
> {code}
> It is probably fine to parse both fields the same way in SolrJ since 
> LukeRequestHandler populates them [the same 
> way|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/handler/admin/LukeRequestHandler.java#L288].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7470) Ensure Lucene sources don't swallow root cause exceptions

2016-09-30 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15535567#comment-15535567
 ] 

Uwe Schindler commented on LUCENE-7470:
---

+1

Maybe just reuse SuppressForbidden for this :-) Otherwise add a new one!

> Ensure Lucene sources don't swallow root cause exceptions
> -
>
> Key: LUCENE-7470
> URL: https://issues.apache.org/jira/browse/LUCENE-7470
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>
> [~rcmuir] created a nice tool to look at the Java bytecode to determine 
> whether  e.g. a {{finally}} or {{catch}} clause may ignore the original root 
> cause exception, here: 
> https://github.com/rmuir/elasticsearch/tree/catchAnalyzer
> It's a fork of ES but I think maybe we can extract it and use it in Lucene.
> Unlike Python, Java unfortunately does not seem to have safeguards against 
> exception-handling code accidentally losing the original exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-3714) SolrCloud fails to query documents if the primary key field is of type "lowercase"

2016-09-30 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-3714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl closed SOLR-3714.
-
Resolution: Not A Bug

Closing as not a bug, since we document that the unique ID must not be analyzed.

In the schema file itself, it is clearly stated that *"Do NOT change the type 
and apply index-time analysis to the <uniqueKey> as it will likely make routing 
in SolrCloud and document replacement in general fail."*
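For reference, the safe shape of the configuration is a uniqueKey field bound to a non-analyzed type such as "string" (field and type names below are the usual defaults; adjust to your schema). Presumably routing is computed on the raw value while matching happens against the indexed term, so an analyzed type lets the two disagree:

{code:xml}
<field name="id" type="string" indexed="true" stored="true" required="true"/>
<uniqueKey>id</uniqueKey>
{code}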

> SolrCloud fails to query documents if the primary key field is of type 
> "lowercase"
> --
>
> Key: SOLR-3714
> URL: https://issues.apache.org/jira/browse/SOLR-3714
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.0-ALPHA
>Reporter: Daniel Collins
>
> Running the default SolrCloud tutorial, if you change the id field to type 
> "lowercase" instead of "string" and re-run the tests (indexing all the 
> documents in exampledocs), queries find strange results.
> Querying for *:* and rows=10 returns numFound as 26 docs, setting rows=20 
> returns 23 docs, and setting rows=50 returns 12 docs!
> Querying for specific ids seems hit and miss as well: the purely lowercase 
> ids ("?q=id:belkin") work OK, but anything with uppercase or mixed-case ids 
> fails to be found.
> The index is clearly correct, as booting the server without ZooKeeper (just 
> removing -DzkRun from the command line) returns all the expected docs, but 
> somehow ZooKeeper is interfering with the queries?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7470) Ensure Lucene sources don't swallow root cause exceptions

2016-09-30 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15535508#comment-15535508
 ] 

Dawid Weiss commented on LUCENE-7470:
-

Pretty cool tool, Robert.

> Ensure Lucene sources don't swallow root cause exceptions
> -
>
> Key: LUCENE-7470
> URL: https://issues.apache.org/jira/browse/LUCENE-7470
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>
> [~rcmuir] created a nice tool to look at the Java bytecode to determine 
> whether  e.g. a {{finally}} or {{catch}} clause may ignore the original root 
> cause exception, here: 
> https://github.com/rmuir/elasticsearch/tree/catchAnalyzer
> It's a fork of ES but I think maybe we can extract it and use it in Lucene.
> Unlike Python, Java unfortunately does not seem to have safeguards against 
> exception-handling code accidentally losing the original exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9577) SolrConfig edit operations should not need to reload core

2016-09-30 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-9577:
-
Description: 
Every single change done to solrconfig.xml/configoverlay.json leads to a core 
reload. This is very bad for performance. 

Ideally, if I update/add/delete a component, only that one component needs to 
get reloaded.

How to do this?

Every component in Solr should be able to implement an interface: 
{code:java}
interface Reloadable {
/** When the configuration of this component is changed the core invokes this 
method, with the new configuration
*/
void reload(PluginInfo info);

/** After a reload() is called on any component in that core , this is invoked
*/
default void postConfigChange(SolrCore core){}
}
{code}

If the component implements this interface, any change to its configuration 
will result in a callback to this method.

If the component does not implement this interface, we should unload the 
component and call any close hooks registered from the inform() method. To 
make this work, we will have to disable registering close hooks from anywhere 
else. After unloading the component, a new one is created with the new 
configuration.

  was:
Every single change done to solrconfig.xml/configoverlay.json leads to a core 
reload. This is very bad for performance. 

Ideally, if I update/add/delete a component, only that one component needs to 
get reloaded.

How to do this?

Every component in Solr should be able to implement an interface: 
{code:java}
interface Reloadable {
void reload(PluginInfo info);
}
{code}

If the component implements this interface, any change to its configuration 
will result in a callback to this method.

If the component does not implement this interface, we should unload the 
component and call any close hooks registered from the inform() method. To 
make this work, we will have to disable registering close hooks from anywhere 
else. After unloading the component, a new one is created with the new 
configuration.


> SolrConfig edit operations should not need to reload core
> -
>
> Key: SOLR-9577
> URL: https://issues.apache.org/jira/browse/SOLR-9577
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>
> Every single change done to solrconfig.xml/configoverlay.json leads to a core 
> reload. This is very bad for performance. 
> Ideally, if I update/add/delete a component, only that one component needs to 
> get reloaded.
> How to do this?
> Every component in Solr should be able to implement an interface: 
> {code:java}
> interface Reloadable {
> /** When the configuration of this component is changed the core invokes this 
> method, with the new configuration
> */
> void reload(PluginInfo info);
> /** After a reload() is called on any component in that core , this is invoked
> */
> default void postConfigChange(SolrCore core){}
> }
> {code}
> If the component implements this interface, any change to its configuration 
> will result in a callback to this method.
> If the component does not implement this interface, we should unload the 
> component and call any close hooks registered from the inform() method. To 
> make this work, we will have to disable registering close hooks from anywhere 
> else. After unloading the component, a new one is created with the new 
> configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9577) SolrConfig edit operations should not need to reload core

2016-09-30 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15535425#comment-15535425
 ] 

Noble Paul commented on SOLR-9577:
--

There is another issue with reload: component 1 may rely on components 3 and 4. 
So, component 1 should be notified of changes that happened in other components.

For example, a SearchHandler instance refers to SearchComponents. If the config 
API modifies a SearchComponent, every {{Reloadable}} component must be 
notified after the reload is complete. I'm adding another method to the 
interface.

> SolrConfig edit operations should not need to reload core
> -
>
> Key: SOLR-9577
> URL: https://issues.apache.org/jira/browse/SOLR-9577
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>
> Every single change done to solrconfig.xml/configoverlay.json leads to a core 
> reload. This is very bad for performance. 
> Ideally, if I update/add/delete a component, only that one component needs to 
> get reloaded.
> How to do this?
> Every component in Solr should be able to implement an interface: 
> {code:java}
> interface Reloadable {
> void reload(PluginInfo info);
> }
> {code}
> If the component implements this interface, any change to its configuration 
> will result in a callback to this method.
> If the component does not implement this interface, we should unload the 
> component and call any close hooks registered from the inform() method. To 
> make this work, we will have to disable registering close hooks from anywhere 
> else. After unloading the component, a new one is created with the new 
> configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-8699) Components in solrconfig.xml must be reloadable

2016-09-30 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-8699.
--
Resolution: Duplicate

> Components in solrconfig.xml must be reloadable
> ---
>
> Key: SOLR-8699
> URL: https://issues.apache.org/jira/browse/SOLR-8699
> Project: Solr
>  Issue Type: Improvement
>  Components: config-api
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> If we update a component using the config API, the entire core is reloaded. 
> This is undesirable in a large cluster. If a component is updated, it should 
> be possible to update just that plugin without reloading the core. To achieve 
> this, we should version the data of each component and call 
> {{reload()}} on that component only.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9583) When the same <uniqueKey> exists across multiple collections that are searched with an alias, the document returned in the results list is indeterminate

2016-09-30 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15535383#comment-15535383
 ] 

Noble Paul commented on SOLR-9583:
--

How do you dedupe?

> When the same <uniqueKey> exists across multiple collections that are 
> searched with an alias, the document returned in the results list is 
> indeterminate
> 
>
> Key: SOLR-9583
> URL: https://issues.apache.org/jira/browse/SOLR-9583
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>
> Not quite sure whether to call this a bug or improvement...
> Currently, if I have two collections C1 and C2 and an alias that points to 
> both _and_ I have a document in both collections with the _same_ <uniqueKey>, 
> the returned list sometimes has the doc from C1 and sometimes from C2.
> If I add shards.info=true I see the document found in each collection, but 
> only one in the document list. Which one changes if I re-submit the identical 
> query.
> This seems incorrect, perhaps a side effect of piggy-backing the collection 
> aliasing on searching multiple shards? (Thanks Shalin for that bit of 
> background).
> I can see both use-cases: 
> 1> aliasing multiple collections validly assumes that <uniqueKey>s should be 
> unique across them all and only one doc should be returned. Even in this case, 
> which doc is returned should be deterministic.
> 2> these are arbitrary collections without any a-priori relationship, and 
> identical <uniqueKey>s do NOT identify the "same" document, so both should be 
> returned.
> So I propose we do two things:
> a> provide a param for the CREATEALIAS command that controls whether docs 
> with the same <uniqueKey> from different collections should both be returned. 
> If they both should, there's still the question of in what order.
> b> provide a deterministic way that dups from different collections are resolved. 
> What that algorithm is I'm not quite sure. The order the collections were 
> specified in the CREATEALIAS command? Some field in the documents? Other??? 
> What happens if this option is not specified on the CREATEALIAS command?
> Implicit in the above is my assumption that it's perfectly valid to have 
> different aliases in the same cluster behave differently if so specified.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


