[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+136) - Build # 1875 - Unstable!

2016-10-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1875/
Java: 32bit/jdk-9-ea+136 -client -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.rule.RulesTest.doIntegrationTest

Error Message:
Error from server at https://127.0.0.1:35953/solr: Could not identify nodes 
matching the rules [{"cores":"<4"}, {   "replica":"<2",   "node":"*"}, 
{"freedisk":">0"}]  tag values{   "127.0.0.1:35953_solr":{ 
"node":"127.0.0.1:35953_solr", "cores":3, "freedisk":85},   
"127.0.0.1:37539_solr":{ "node":"127.0.0.1:37539_solr", "cores":2, 
"freedisk":85},   "127.0.0.1:42946_solr":{ "node":"127.0.0.1:42946_solr",   
  "cores":2, "freedisk":85},   "127.0.0.1:37158_solr":{ 
"node":"127.0.0.1:37158_solr", "cores":1, "freedisk":85},   
"127.0.0.1:40317_solr":{ "node":"127.0.0.1:40317_solr", "cores":2, 
"freedisk":85}} Initial state for the coll : {   "shard1":{ 
"127.0.0.1:37539_solr":1, "127.0.0.1:37158_solr":1},   "shard2":{ 
"127.0.0.1:40317_solr":1, "127.0.0.1:42946_solr":1}}
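To make the failure easier to reason about, the rule/tag snapshot in the error message can be checked mechanically. The sketch below is a hypothetical re-implementation, not Solr's actual rule engine; it assumes `cores` is compared against the node's core count after placing one new core, and `replica` against the replicas of the target shard already on the node.

```python
# Toy evaluation of the three placement rules against the tag values in the
# error message above. Illustrative only -- NOT Solr's rule engine.
nodes = {
    "127.0.0.1:35953_solr": {"cores": 3, "freedisk": 85},
    "127.0.0.1:37539_solr": {"cores": 2, "freedisk": 85},
    "127.0.0.1:42946_solr": {"cores": 2, "freedisk": 85},
    "127.0.0.1:37158_solr": {"cores": 1, "freedisk": 85},
    "127.0.0.1:40317_solr": {"cores": 2, "freedisk": 85},
}
shard1 = {"127.0.0.1:37539_solr", "127.0.0.1:37158_solr"}  # initial state

def satisfies(node, tags, shard_nodes):
    return (tags["cores"] + 1 < 4                          # {"cores": "<4"}
            and (node in shard_nodes) + 1 < 2              # {"replica": "<2", "node": "*"}
            and tags["freedisk"] > 0)                      # {"freedisk": ">0"}

candidates = [n for n, t in nodes.items() if satisfies(n, t, shard1)]
```

Under these naive assumptions two nodes would still qualify for one more shard1 replica, which suggests the real failure comes from placing several replicas at once or from stricter rule semantics; the sketch only shows how each tag value is compared against its rule.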

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:35953/solr: Could not identify nodes matching 
the rules [{"cores":"<4"}, {
  "replica":"<2",
  "node":"*"}, {"freedisk":">0"}]
 tag values{
  "127.0.0.1:35953_solr":{
"node":"127.0.0.1:35953_solr",
"cores":3,
"freedisk":85},
  "127.0.0.1:37539_solr":{
"node":"127.0.0.1:37539_solr",
"cores":2,
"freedisk":85},
  "127.0.0.1:42946_solr":{
"node":"127.0.0.1:42946_solr",
"cores":2,
"freedisk":85},
  "127.0.0.1:37158_solr":{
"node":"127.0.0.1:37158_solr",
"cores":1,
"freedisk":85},
  "127.0.0.1:40317_solr":{
"node":"127.0.0.1:40317_solr",
"cores":2,
"freedisk":85}}
Initial state for the coll : {
  "shard1":{
"127.0.0.1:37539_solr":1,
"127.0.0.1:37158_solr":1},
  "shard2":{
"127.0.0.1:40317_solr":1,
"127.0.0.1:42946_solr":1}}
at 
__randomizedtesting.SeedInfo.seed([F4A6DB518527F200:11959CD099530002]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:590)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:259)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:435)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:387)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1292)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1062)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1004)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at 
org.apache.solr.cloud.rule.RulesTest.doIntegrationTest(RulesTest.java:81)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:535)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 

[jira] [Commented] (SOLR-1544) Python script to post multiple files to solr using a queue and worker threads

2016-10-04 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547729#comment-15547729
 ] 

Alexandre Rafalovitch commented on SOLR-1544:
-

Solr now has bin/post, which is a lot more robust. If that's not enough, the next 
step up is probably a custom SolrJ implementation.

I believe this issue can be closed as no longer relevant.

> Python script to post multiple files to solr using a queue and worker threads
> -
>
> Key: SOLR-1544
> URL: https://issues.apache.org/jira/browse/SOLR-1544
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Affects Versions: 1.5
> Environment: Python 2.6 and above
>Reporter: Dennis Kubes
>Priority: Minor
> Fix For: 4.9, 6.0
>
> Attachments: postqueue.py
>
>
> This is a simple python script that uses a blocking queue and multiple worker 
> threads to post updates (files) to solr. It works when calling post.sh won't 
> because of too many files, or when
> you want to throttle the speed at which you are updating solr. Tested with 
> runs as high as 30K files.
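For reference, the pattern the attachment describes can be sketched in a few lines of modern Python. This is a from-scratch illustration of the blocking-queue/worker-thread approach, not the attached postqueue.py, and `post_file` is a placeholder for whatever HTTP call posts a file to Solr's update handler.

```python
import queue
import threading

def run_posters(paths, post_file, workers=4):
    """Post many files with a bounded number of worker threads.

    `post_file` is a placeholder callable (e.g. an HTTP POST to /update);
    the bounded queue blocks the producer when full, which is what
    throttles the update rate.
    """
    q = queue.Queue(maxsize=workers * 2)

    def worker():
        while True:
            path = q.get()
            if path is None:          # sentinel: shut this worker down
                q.task_done()
                return
            try:
                post_file(path)
            finally:
                q.task_done()

    threads = [threading.Thread(target=worker, daemon=True) for _ in range(workers)]
    for t in threads:
        t.start()
    for p in paths:
        q.put(p)                      # blocks when the queue is full
    for _ in threads:
        q.put(None)                   # one sentinel per worker
    q.join()                          # wait until every item is processed
```

With `post_file` doing the actual HTTP POST, this behaves like post.sh but survives very large file sets and lets you tune concurrency.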



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-1906) Posibility to store actual geohash values in the geohash field type

2016-10-04 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547725#comment-15547725
 ] 

Alexandre Rafalovitch commented on SOLR-1906:
-

[~dsmiley] Is this issue from 5 versions ago still relevant?

> Posibility to store actual geohash values in the geohash field type
> ---
>
> Key: SOLR-1906
> URL: https://issues.apache.org/jira/browse/SOLR-1906
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 1.5
> Environment: NA
>Reporter: Stian Berger
>Priority: Trivial
>  Labels: geohash, spatialsearch
>
> Tried to index some data, containing already encoded geohashes, into a 
> geohash field type.
> To my surprise I could not make it work...
> A sneak peek at the source revealed to me that this field type takes a 
> lat/lng pair as its value, not a geohash...
> Could this be fixed, so the field type can also take actual geohashes?
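For context on what "actual geohash values" are: a geohash is just the interleaved-bit, base-32 encoding of a lat/lng pair. A minimal encoder (illustration only, unrelated to Solr's internals):

```python
# Minimal geohash encoder: alternately bisect the longitude and latitude
# ranges, emitting one bit per bisection, 5 bits per base-32 character.
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash_encode(lat, lon, precision=11):
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    out, bit, ch, even = [], 0, 0, True   # even bits encode longitude
    while len(out) < precision:
        if even:
            mid = (lon_lo + lon_hi) / 2
            if lon >= mid:
                ch, lon_lo = ch * 2 + 1, mid
            else:
                ch, lon_hi = ch * 2, mid
        else:
            mid = (lat_lo + lat_hi) / 2
            if lat >= mid:
                ch, lat_lo = ch * 2 + 1, mid
            else:
                ch, lat_hi = ch * 2, mid
        even = not even
        bit += 1
        if bit == 5:                      # 5 bits -> one base-32 character
            out.append(BASE32[ch])
            bit, ch = 0, 0
    return "".join(out)
```

Accepting such strings directly would amount to decoding them back to the lat/lng midpoint of the final cell, which is presumably what the reporter wanted the field type to do.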






[jira] [Closed] (SOLR-993) VariableResolverImpl addNamespace overwrites entire namespace instead of adding

2016-10-04 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-993.
--
   Resolution: Won't Fix
Fix Version/s: (was: 6.0)
   (was: 4.9)

A very old issue about a then-new component. The implementation has changed 
several times since. If a similar problem happens again, let's open a new issue 
with updated details.

> VariableResolverImpl addNamespace overwrites entire namespace instead of 
> adding
> ---
>
> Key: SOLR-993
> URL: https://issues.apache.org/jira/browse/SOLR-993
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 1.4
>Reporter: Jared Flatow
>Assignee: Noble Paul
> Attachments: SOLR-993.patch, SOLR-993.patch, SOLR-993b.patch, 
> SOLR-993c.patch, SOLR-993c.patch, SOLR-993c.patch
>
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> The addNamespace method in VariableResolverImpl does not so much add the 
> namespace as overwrite it. 
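The reported behavior can be illustrated with plain maps (a hedged sketch of the semantics, not DIH code; the variable names are made up): an "add" that replaces the whole per-namespace map silently drops previously registered entries, whereas a merge keeps them.

```python
def add_namespace_overwriting(resolver, ns, values):
    # What the report describes: the whole namespace map is replaced.
    resolver[ns] = dict(values)

def add_namespace_merging(resolver, ns, values):
    # What the method name suggests: new entries are merged into the namespace.
    resolver.setdefault(ns, {}).update(values)

# Hypothetical example entries, for illustration only.
r1 = {"dataimporter": {"last_index_time": "2009-01-01"}}
r2 = {"dataimporter": {"last_index_time": "2009-01-01"}}
add_namespace_overwriting(r1, "dataimporter", {"request": {"command": "full-import"}})
add_namespace_merging(r2, "dataimporter", {"request": {"command": "full-import"}})
# r1 has lost "last_index_time"; r2 keeps both entries.
```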






[jira] [Commented] (SOLR-993) VariableResolverImpl addNamespace overwrites entire namespace instead of adding

2016-10-04 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547686#comment-15547686
 ] 

Noble Paul commented on SOLR-993:
-

let's close this?

> VariableResolverImpl addNamespace overwrites entire namespace instead of 
> adding
> ---
>
> Key: SOLR-993
> URL: https://issues.apache.org/jira/browse/SOLR-993
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 1.4
>Reporter: Jared Flatow
>Assignee: Noble Paul
> Fix For: 4.9, 6.0
>
> Attachments: SOLR-993.patch, SOLR-993.patch, SOLR-993b.patch, 
> SOLR-993c.patch, SOLR-993c.patch, SOLR-993c.patch
>
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> The addNamespace method in VariableResolverImpl does not so much add the 
> namespace as overwrite it. 






[jira] [Closed] (SOLR-3423) HttpShardHandlerFactory does not shutdown its threadpool

2016-10-04 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson closed SOLR-3423.

Resolution: Cannot Reproduce

As per Greg's comments.

> HttpShardHandlerFactory does not shutdown its threadpool
> 
>
> Key: SOLR-3423
> URL: https://issues.apache.org/jira/browse/SOLR-3423
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 3.6
>Reporter: Greg Bowyer
>Assignee: Greg Bowyer
>  Labels: distributed, shard
> Attachments: 
> SOLR-3423-HttpShardHandlerFactory_ThreadPool_Shutdown_lucene_3x.diff, 
> SOLR-3423-HttpShardHandlerFactory_ThreadPool_Shutdown_lucene_3x.diff
>
>
> The HttpShardHandlerFactory is not getting a chance to shut down its 
> threadpool; this means that in situations like a core reload / core swap it's 
> possible for the handler to leak threads.
> (This may also be the case if the webapp is loaded / unloaded in the 
> container.)
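The general shape of the leak and its fix, sketched in Python rather than the actual HttpShardHandlerFactory code (class and method names here are illustrative): any component that owns a pool must shut it down in its close/destroy hook, otherwise each reload discards the component but leaves the old pool's threads alive.

```python
from concurrent.futures import ThreadPoolExecutor

class ShardHandlerFactory:
    """Sketch of a factory that owns a thread pool, as the real
    HttpShardHandlerFactory owns one for distributed requests."""
    def __init__(self, workers=4):
        self._pool = ThreadPoolExecutor(max_workers=workers)

    def submit(self, fn, *args):
        return self._pool.submit(fn, *args)

    def close(self):
        # The missing step in the report: without this, a core reload/swap
        # drops the factory but its worker threads keep running.
        self._pool.shutdown(wait=True)

f = ShardHandlerFactory()
result = f.submit(lambda a, b: a + b, 2, 3).result()
f.close()  # pool threads exit; further submits are rejected
```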






[jira] [Commented] (SOLR-3423) HttpShardHandlerFactory does not shutdown its threadpool

2016-10-04 Thread Greg Bowyer (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547654#comment-15547654
 ] 

Greg Bowyer commented on SOLR-3423:
---

Safe to close



> HttpShardHandlerFactory does not shutdown its threadpool
> 
>
> Key: SOLR-3423
> URL: https://issues.apache.org/jira/browse/SOLR-3423
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 3.6
>Reporter: Greg Bowyer
>Assignee: Greg Bowyer
>  Labels: distributed, shard
> Fix For: 3.6.3
>
> Attachments: 
> SOLR-3423-HttpShardHandlerFactory_ThreadPool_Shutdown_lucene_3x.diff, 
> SOLR-3423-HttpShardHandlerFactory_ThreadPool_Shutdown_lucene_3x.diff
>
>
> The HttpShardHandlerFactory is not getting a chance to shut down its 
> threadpool; this means that in situations like a core reload / core swap it's 
> possible for the handler to leak threads.
> (This may also be the case if the webapp is loaded / unloaded in the 
> container.)






[jira] [Closed] (SOLR-4762) Deploying on weblogic: java.lang.NoSuchMethodError: replaceEach

2016-10-04 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-4762.
---
   Resolution: Won't Fix
Fix Version/s: (was: 6.0)
   (was: 4.9)

We no longer support deploying WAR files to WebLogic or other containers.

> Deploying on weblogic: java.lang.NoSuchMethodError: replaceEach
> ---
>
> Key: SOLR-4762
> URL: https://issues.apache.org/jira/browse/SOLR-4762
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.2
>Reporter: Shawn Heisey
>Assignee: Shawn Heisey
> Attachments: SOLR-4762.patch
>
>
> When a user tried to deploy on weblogic 10.3, they got this exception:
> {noformat}
> Error 500--Internal Server Error
> java.lang.NoSuchMethodError: replaceEach
> at 
> org.apache.solr.servlet.LoadAdminUiServlet.doGet(LoadAdminUiServlet.java:70)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:821)
> at 
> weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
> at 
> weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
> at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
> at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:27)
> at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:43)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:142)
> at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:43)
> at 
> weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3496)
> at 
> weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
> at weblogic.security.service.SecurityManager.runAs(Unknown Source)
> at 
> weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2180)
> at 
> weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2086)
> at 
> weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1406)
> at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
> at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
> {noformat}
> The solution to this problem appears to be adding the following to 
> weblogic.xml in WEB-INF:
> {noformat}
> <container-descriptor>
>   <prefer-web-inf-classes>true</prefer-web-inf-classes>
> </container-descriptor>
> {noformat}
> Since Solr's WEB-INF directory already contains this file and it already has 
> the container-descriptor tag, I'm hoping this is a benign change.






[jira] [Commented] (SOLR-993) VariableResolverImpl addNamespace overwrites entire namespace instead of adding

2016-10-04 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547570#comment-15547570
 ] 

Alexandre Rafalovitch commented on SOLR-993:


This seems to be a very, very old discussion that has no next action.

Is there something still pending from this or can this be closed?

> VariableResolverImpl addNamespace overwrites entire namespace instead of 
> adding
> ---
>
> Key: SOLR-993
> URL: https://issues.apache.org/jira/browse/SOLR-993
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
>Affects Versions: 1.4
>Reporter: Jared Flatow
>Assignee: Noble Paul
> Fix For: 4.9, 6.0
>
> Attachments: SOLR-993.patch, SOLR-993.patch, SOLR-993b.patch, 
> SOLR-993c.patch, SOLR-993c.patch, SOLR-993c.patch
>
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> The addNamespace method in VariableResolverImpl does not so much add the 
> namespace as overwrite it. 






[jira] [Commented] (SOLR-3423) HttpShardHandlerFactory does not shutdown its threadpool

2016-10-04 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547552#comment-15547552
 ] 

Alexandre Rafalovitch commented on SOLR-3423:
-

This seems to be a proposed fix for a 4-year-old version of the product that was 
already fixed in later versions.

Safe to close?

> HttpShardHandlerFactory does not shutdown its threadpool
> 
>
> Key: SOLR-3423
> URL: https://issues.apache.org/jira/browse/SOLR-3423
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 3.6
>Reporter: Greg Bowyer
>Assignee: Greg Bowyer
>  Labels: distributed, shard
> Fix For: 3.6.3
>
> Attachments: 
> SOLR-3423-HttpShardHandlerFactory_ThreadPool_Shutdown_lucene_3x.diff, 
> SOLR-3423-HttpShardHandlerFactory_ThreadPool_Shutdown_lucene_3x.diff
>
>
> The HttpShardHandlerFactory is not getting a chance to shut down its 
> threadpool; this means that in situations like a core reload / core swap it's 
> possible for the handler to leak threads.
> (This may also be the case if the webapp is loaded / unloaded in the 
> container.)






[jira] [Closed] (SOLR-2803) NPE in FacetComponent

2016-10-04 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-2803.
---
   Resolution: Cannot Reproduce
Fix Version/s: (was: 3.4)

Ancient issue that could not be reproduced at the time. If a similar problem 
happens with a recent version of Solr, a new issue can be created with updated 
details/stack trace.

> NPE in FacetComponent
> -
>
> Key: SOLR-2803
> URL: https://issues.apache.org/jira/browse/SOLR-2803
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 3.3, 3.4
>Reporter: Fi
>  Labels: patch
> Attachments: FacetComponent.patch
>
>   Original Estimate: 10m
>  Remaining Estimate: 10m
>
> On a call to my multicore setup (with 'activity' being one of my cores):
> /solr/activity/select/?q=*:*&fq=bucket:1000&fq=dma:%22Albuquerque%22&version=2.2&start=0&rows=0&facet=on&facet.date=time&facet.date.start=2011-02-01T04:00:00Z&facet.date.end=2011-06-11T00:00:00Z&facet.date.gap=%2B1HOUR&wt=json&qt=grid
> I get a NPE in the FacetComponent patch.
> SEVERE: java.lang.NullPointerException
> at 
> org.apache.solr.handler.component.FacetComponent.countFacets(FacetComponent.java:347)
> at 
> org.apache.solr.handler.component.FacetComponent.handleResponses(FacetComponent.java:257)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:289)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1368)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:356)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:252)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
> at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:224)
> at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
> at 
> org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:462)
> at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:164)
> at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:100)
> at 
> org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:851)
> at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
> at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:405)
> at 
> org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:278)
> at 
> org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:515)
> at 
> org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:300)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:619)
> With a 500 error response.
> Here is the qt=grid RequestHandler definition in the solrconfig.xml
>   <requestHandler name="grid" class="solr.SearchHandler">
>     <lst name="defaults">
>       <str name="echoParams">explicit</str>
>       <int name="rows">10</int>
>       <str 
> name="shards">core-dev-01.example.com:8080/jiwire/activity,core-dev-02.example.com:8080/jiwire/activity,core-dev-03.example.com:8080/jiwire/activity,core-dev-01.example.com:8080/jiwire/activity,core-dev-01.example.com:8080/jiwire/activity2,core-dev-02.example.com:8080/jiwire/activity2,core-dev-03.example.com:8080/jiwire/activity2,core-dev-01.example.com:8080/jiwire/activity2,core-dev-01.example.com:8080/jiwire/activity3,core-dev-02.example.com:8080/jiwire/activity3,core-dev-03.example.com:8080/jiwire/activity3,core-dev-01.example.com:8080/jiwire/activity3,core-dev-01.example.com:8080/jiwire/activity4,core-dev-02.example.com:8080/jiwire/activity4,core-dev-03.example.com:8080/jiwire/activity4,core-dev-01.example.com:8080/jiwire/activity4</str>
>     </lst>
>   </requestHandler>






[jira] [Comment Edited] (SOLR-9599) Facet performance regression using fieldcache and new DV iterator API

2016-10-04 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547253#comment-15547253
 ] 

Yonik Seeley edited comment on SOLR-9599 at 10/5/16 4:02 AM:
-

A quick test of the same fields in the same index shows hits to sorting and 
function queries as well.

With a quick manual test, this was 51% faster previously (before LUCENE-7407) 
for fieldcache fields, and 37% faster for docvalue fields:
{code}
http://localhost:8983/solr/collection1/query?q=*:*%20mydate_dt:NOW&fl=id&sort=s10_s%20desc,%20s100_s%20desc,%20s1000_s%20desc
{code}

And this was 78% faster for fieldcache, and 29% faster for docvalues:
{code}
http://localhost:8983/solr/collection1/query?q=*:*%20mydate_dt:NOW%20{!func%20v=$vv}&fl=id&vv=add(exists(s10_s),exists(s100_s),exists(s1000_s))
{code}

Integer field function queries were 75% faster, and docvalues were 50% faster:
{code}
http://localhost:8983/solr/collection1/query?q=mydate_dt:NOW%20{!func%20v=$vv}&fl=id&vv=add(s10_i,s100_i,s1000_i,s1_i)
{code}




was (Author: ysee...@gmail.com):
A quick test of the same fields in the same index shows hits to sorting and 
function queries as well.

With a quick manual test, this was 51% slower for fieldcache fields, and 37% 
slower for docvalue fields:
{code}
http://localhost:8983/solr/collection1/query?q=*:*%20mydate_dt:NOW&fl=id&sort=s10_s%20desc,%20s100_s%20desc,%20s1000_s%20desc
{code}

And this was 78% slower for fieldcache, and 29% slower for docvalues:
{code}
http://localhost:8983/solr/collection1/query?q=*:*%20mydate_dt:NOW%20{!func%20v=$vv}&fl=id&vv=add(exists(s10_s),exists(s100_s),exists(s1000_s))
{code}

Integer field function queries were 75% slower, and docvalues were 50% slower:
{code}
http://localhost:8983/solr/collection1/query?q=mydate_dt:NOW%20{!func%20v=$vv}&fl=id&vv=add(s10_i,s100_i,s1000_i,s1_i)
{code}

> Facet performance regression using fieldcache and new DV iterator API
> -
>
> Key: SOLR-9599
> URL: https://issues.apache.org/jira/browse/SOLR-9599
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (7.0)
>Reporter: Yonik Seeley
> Fix For: master (7.0)
>
>
> I did a quick performance comparison of faceting indexed fields (i.e. 
> docvalues are not stored) using method=dv before and after the new docvalues 
> iterator went in (LUCENE-7407).
> 5M document index, 21 segments, single valued string fields w/ no missing 
> values.
> || field cardinality || new_time / old_time ||
> |10|2.01|
> |1000|2.02|
> |1|1.85|
> |10|1.56|
> |100|1.31|
> So unfortunately, often twice as slow.
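The shape of the regression can be modeled in a few lines (a toy model, not Lucene code): LUCENE-7407 replaced random-access docvalues with forward-only iterators, so a consumer that used to call get(docid) in any order must now either advance strictly in docid order or copy values into its own random-access structure first.

```python
class ForwardOnlyDocValues:
    """Toy stand-in for the post-LUCENE-7407 iterator API: values can only
    be read in increasing docid order."""
    def __init__(self, values):
        self._values = values
        self._doc = -1

    def advance_exact(self, docid):
        if docid <= self._doc:
            raise ValueError("iterator can only move forward")
        self._doc = docid
        return self._values[docid]

def materialize(dv, max_doc):
    # What a random-access consumer must now do: one forward pass that
    # copies everything into an array it can then index in any order.
    return [dv.advance_exact(d) for d in range(max_doc)]

dv = ForwardOnlyDocValues(["a", "b", "c", "d"])
cache = materialize(dv, 4)   # random access is now on the copy, not the iterator
```

The extra pass (or the per-advance bookkeeping when no copy is made) is a plausible source of the 1.3x-2x slowdowns in the table above.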






[JENKINS] Lucene-Solr-Tests-6.x - Build # 466 - Still Unstable

2016-10-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/466/

1 tests failed.
FAILED:  org.apache.solr.update.AutoCommitTest.testMaxDocs

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([F658D47AD6052C29:4FD902A5FAEF28A3]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:813)
at 
org.apache.solr.update.AutoCommitTest.testMaxDocs(AutoCommitTest.java:225)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=1]
xml response was: 

00


request was:q=id:14&qt=standard&start=0&rows=20&version=2.2
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:806)
... 40 more
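For readers unfamiliar with what testMaxDocs exercises: maxDocs autocommit triggers a hard commit once a configured number of documents have accumulated since the last commit. A minimal model of that policy (illustrative only; Solr's real CommitTracker is asynchronous and also time-based, which is exactly what makes this test flaky):

```python
class MaxDocsCommitTracker:
    """Toy model of maxDocs autocommit: fire a commit callback once
    `max_docs` adds have accumulated since the last commit."""
    def __init__(self, max_docs, on_commit):
        self.max_docs = max_docs
        self.on_commit = on_commit
        self._pending = 0

    def add_doc(self):
        self._pending += 1
        if self._pending >= self.max_docs:
            self.on_commit()       # in Solr this also opens a new searcher
            self._pending = 0

commits = []
tracker = MaxDocsCommitTracker(14, lambda: commits.append("commit"))
for _ in range(30):
    tracker.add_doc()              # commits fire after docs 14 and 28
```

The failure above is likely the usual flavor of flakiness here: the assertion ran before the post-commit searcher was open, so the query for the newly added document still saw numFound=0.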




Build Log:
[...truncated 11546 lines...]
   [junit4] Suite: org.apache.solr.update.AutoCommitTest
   [junit4]   2> Creating dataDir: 

[jira] [Comment Edited] (SOLR-9599) Facet performance regression using fieldcache and new DV iterator API

2016-10-04 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547253#comment-15547253
 ] 

Yonik Seeley edited comment on SOLR-9599 at 10/5/16 3:59 AM:
-

A quick test of the same fields in the same index shows hits to sorting and 
function queries as well.

With a quick manual test, this was 51% slower for fieldcache fields, and 37% 
slower for docvalue fields:
{code}
http://localhost:8983/solr/collection1/query?q=*:*%20mydate_dt:NOW&fl=id&sort=s10_s%20desc,%20s100_s%20desc,%20s1000_s%20desc
{code}

And this was 78% slower for fieldcache, and 29% slower for docvalues:
{code}
http://localhost:8983/solr/collection1/query?q=*:*%20mydate_dt:NOW%20{!func%20v=$vv}&fl=id&vv=add(exists(s10_s),exists(s100_s),exists(s1000_s))
{code}

Integer field function queries were 75% slower, and docvalues were 50% slower:
{code}
http://localhost:8983/solr/collection1/query?q=mydate_dt:NOW%20{!func%20v=$vv}&fl=id&vv=add(s10_i,s100_i,s1000_i,s1_i)
{code}


was (Author: ysee...@gmail.com):
A quick test of the same fields in the same index shows hits to sorting and 
function queries as well.

With a quick manual test, this was 51% slower:
{code}
http://localhost:8983/solr/collection1/query?q=*:*%20mydate_dt:NOW&fl=id&sort=s10_s%20desc,%20s100_s%20desc,%20s1000_s%20desc
{code}

And this was 78% slower:
{code}
http://localhost:8983/solr/collection1/query?q=*:*%20mydate_dt:NOW%20{!func%20v=$vv}&fl=id&vv=add(exists(s10_s),exists(s100_s),exists(s1000_s))
{code}

> Facet performance regression using fieldcache and new DV iterator API
> -
>
> Key: SOLR-9599
> URL: https://issues.apache.org/jira/browse/SOLR-9599
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (7.0)
>Reporter: Yonik Seeley
> Fix For: master (7.0)
>
>
> I did a quick performance comparison of faceting indexed fields (i.e. 
> docvalues are not stored) using method=dv before and after the new docvalues 
> iterator went in (LUCENE-7407).
> 5M document index, 21 segments, single valued string fields w/ no missing 
> values.
> || field cardinality || new_time / old_time ||
> |10|2.01|
> |1000|2.02|
> |1|1.85|
> |10|1.56|
> |100|1.31|
> So unfortunately, often twice as slow.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9599) Facet performance regression using fieldcache and new DV iterator API

2016-10-04 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547522#comment-15547522
 ] 

Yonik Seeley commented on SOLR-9599:


I tried some docValue fields this time instead of the fieldcache:

|| field cardinality || new_time / old_time ||
|10|1.29|
|1000|1.23|
|1|1.24|
|10|1.34|
|100|1.09|



> Facet performance regression using fieldcache and new DV iterator API
> -
>
> Key: SOLR-9599
> URL: https://issues.apache.org/jira/browse/SOLR-9599
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (7.0)
>Reporter: Yonik Seeley
> Fix For: master (7.0)
>
>
> I did a quick performance comparison of faceting indexed fields (i.e. 
> docvalues are not stored) using method=dv before and after the new docvalues 
> iterator went in (LUCENE-7407).
> 5M document index, 21 segments, single valued string fields w/ no missing 
> values.
> || field cardinality || new_time / old_time ||
> |10|2.01|
> |1000|2.02|
> |1|1.85|
> |10|1.56|
> |100|1.31|
> So unfortunately, often twice as slow.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7438) UnifiedHighlighter

2016-10-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547461#comment-15547461
 ] 

ASF subversion and git services commented on LUCENE-7438:
-

Commit 4b6794368df373df1f68ccf27f7556914efeb95e in lucene-solr's branch 
refs/heads/branch_6x from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4b67943 ]

LUCENE-7438: New UnifiedHighlighter

(cherry picked from commit 722e827)


> UnifiedHighlighter
> --
>
> Key: LUCENE-7438
> URL: https://issues.apache.org/jira/browse/LUCENE-7438
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/highlighter
>Affects Versions: 6.2
>Reporter: Timothy M. Rodriguez
>Assignee: David Smiley
> Attachments: LUCENE-7438.patch, LUCENE_7438_UH_benchmark.patch, 
> LUCENE_7438_UH_small_changes.patch
>
>
> The UnifiedHighlighter is an evolution of the PostingsHighlighter that is 
> able to highlight using offsets in either postings, term vectors, or from 
> analysis (a TokenStream). Lucene’s existing highlighters are mostly 
> demarcated along offset source lines, whereas here it is unified -- hence 
> this proposed name. In this highlighter, the offset source strategy is 
> separated from the core highlighting functionality. The UnifiedHighlighter 
> further improves on the PostingsHighlighter’s design by supporting accurate 
> phrase highlighting using an approach similar to the standard highlighter’s 
> WeightedSpanTermExtractor. The next major improvement is a hybrid offset 
> source strategy that utilizes postings and “light” term vectors (i.e. just the 
> terms) for highlighting multi-term queries (wildcards) without resorting to 
> analysis. Phrase highlighting and wildcard highlighting can both be disabled 
> if you’d rather highlight a little faster albeit not as accurately reflecting 
> the query.
> We’ve benchmarked an earlier version of this highlighter comparing it to the 
> other highlighters and the results were exciting! It’s tempting to share 
> those results but it’s definitely due for another benchmark, so we’ll work on 
> that. Performance was the main motivator for creating the UnifiedHighlighter, 
> as the standard Highlighter (the only one meeting Bloomberg Law’s accuracy 
> requirements) wasn’t fast enough, even with term vectors along with several 
> improvements we contributed back, and even after we forked it to highlight in 
> multiple threads.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9592) decorateDocValues cause serious performance issue because of using slowCompositeReaderWrapper

2016-10-04 Thread Takahiro Ishikawa (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547449#comment-15547449
 ] 

Takahiro Ishikawa commented on SOLR-9592:
-

bq. I supposed weakly agree
Thank you. I'll keep it (getLeafReader) renamed :)

{quote}
Right. Sometimes you need both a global view and a segment view to do it right. 
See something like FacetFieldProcessorByArrayDV, where we use both top level 
and segment level.
{quote}
Yes, I see what you are saying. If I have time, I'll look into the 
MultiDocValues problem in detail.

Now all my work is done. Are there any other comments?

> decorateDocValues cause serious performance issue because of using 
> slowCompositeReaderWrapper
> -
>
> Key: SOLR-9592
> URL: https://issues.apache.org/jira/browse/SOLR-9592
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Response Writers, search
>Affects Versions: 6.0, 6.1, 6.2
>Reporter: Takahiro Ishikawa
>  Labels: performance
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9592.patch, SOLR-9592.patch, SOLR-9592_6x.patch
>
>
> I have a serious performance issue using AtomicUpdate (and RealtimeGet) with 
> non-stored docValues, because decorateDocValues tries to merge each LeafReader 
> on the fly via SlowCompositeReaderWrapper, which is extremely slow (> 10sec).
> Simply accessing docValues via the leaf (non-composite) readers could resolve 
> this issue (see attached patch).
> AtomicUpdate performance(or RealtimeGet performance)
> * Environment
> ** solr version : 6.0.0
> ** schema ~ 100 fields(90% docValues, some of those are multi valued)
> ** index : 5,000,000
> * Performance
> ** original :  > 10sec per query
> ** patched : at least 100msec per query
> This patch will also enhance search performance, because DocStreamer also 
> fetches docValues via decorateDocValues.
> Though it depends on the environment, I saw about a 20% search performance 
> gain.
> (This patch originally written for solr 6.0.0, and now rewritten for master)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-9600) RulesTest.doIntegrationTest() failures

2016-10-04 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul reassigned SOLR-9600:


Assignee: Noble Paul

> RulesTest.doIntegrationTest() failures
> --
>
> Key: SOLR-9600
> URL: https://issues.apache.org/jira/browse/SOLR-9600
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Noble Paul
>
> My Jenkins has seen this test fail about 8 times today, mostly on branch_6x 
> but also on master, e.g. 
> [http://jenkins.sarowe.net/job/Lucene-Solr-tests-6.x/3049/], 
> [http://jenkins.sarowe.net/job/Lucene-Solr-tests-master/8833/].  This is new 
> - previous failure on my Jenkins was from August.  The failures aren't 100% 
> reproducible.
> From Policeman Jenkins 
> [https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6158]:
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=RulesTest 
> -Dtests.method=doIntegrationTest -Dtests.seed=D12AC7FA27544B42 
> -Dtests.slow=true -Dtests.locale=de-DE -Dtests.timezone=America/New_York 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
>[junit4] ERROR   14.1s J0 | RulesTest.doIntegrationTest <<<
>[junit4]> Throwable #1: 
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://127.0.0.1:51451/solr: Could not identify nodes matching 
> the rules [{"cores":"<4"}, {
>[junit4]>   "replica":"<2",
>[junit4]>   "node":"*"}, {"freedisk":">1"}]
>[junit4]>  tag values{
>[junit4]>   "127.0.0.1:51451_solr":{
>[junit4]> "node":"127.0.0.1:51451_solr",
>[junit4]> "cores":3,
>[junit4]> "freedisk":31},
>[junit4]>   "127.0.0.1:51444_solr":{
>[junit4]> "node":"127.0.0.1:51444_solr",
>[junit4]> "cores":1,
>[junit4]> "freedisk":31},
>[junit4]>   "127.0.0.1:51461_solr":{
>[junit4]> "node":"127.0.0.1:51461_solr",
>[junit4]> "cores":2,
>[junit4]> "freedisk":31},
>[junit4]>   "127.0.0.1:51441_solr":{
>[junit4]> "node":"127.0.0.1:51441_solr",
>[junit4]> "cores":2,
>[junit4]> "freedisk":31},
>[junit4]>   "127.0.0.1:51454_solr":{
>[junit4]> "node":"127.0.0.1:51454_solr",
>[junit4]> "cores":2,
>[junit4]> "freedisk":31}}
>[junit4]> Initial state for the coll : {
>[junit4]>   "shard1":{
>[junit4]> "127.0.0.1:51454_solr":1,
>[junit4]> "127.0.0.1:51444_solr":1},
>[junit4]>   "shard2":{
>[junit4]> "127.0.0.1:51461_solr":1,
>[junit4]> "127.0.0.1:51441_solr":1}}
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([D12AC7FA27544B42:3419807B3B20B940]:0)
>[junit4]>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:606)
>[junit4]>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:259)
>[junit4]>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248)
>[junit4]>  at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:439)
>[junit4]>  at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:391)
>[junit4]>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1288)
>[junit4]>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1058)
>[junit4]>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1000)
>[junit4]>  at 
> org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
>[junit4]>  at 
> org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
>[junit4]>  at 
> org.apache.solr.cloud.rule.RulesTest.doIntegrationTest(RulesTest.java:81)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Beasting current master with Miller's beasting script resulted in 6 failures 
> out of 50 iterations.
> I'm running {{git bisect}} in combination with beasting to see if I can find 
> the commit where this started happening.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-1223) Query Filter fq with OR operator

2016-10-04 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547435#comment-15547435
 ] 

Alexandre Rafalovitch commented on SOLR-1223:
-

This is now implemented as a query syntax in SOLR-7219

Is there anything else left to do here?
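The query syntax mentioned above (the filter() clause) lets individually 
cached filters be OR'd within a single query. A hedged sketch with made-up 
field names (each filter() clause is cached and reused independently):

```
q=features:shiny OR filter(inStock:true) OR filter(popularity:[5 TO *])
```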

> Query Filter fq with OR operator
> 
>
> Key: SOLR-1223
> URL: https://issues.apache.org/jira/browse/SOLR-1223
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Reporter: Brian Pearson
>
> See this 
> [issue|http://lucene.472066.n3.nabble.com/Query-Filter-fq-with-OR-operator-td499172.html]
>  for some background. Today, all of the Query filters specified with the fq 
> parameter are AND'd together.
> This issue is about allowing a set of filters to be OR'd together (in 
> addition to having another set of filters that are AND'd). The OR'd filters 
> would of course be applied before any scoring is done.
> The advantage of this feature is that you will be able to break up complex 
> filters into simple, more cacheable filters, which should improve 
> performance. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9524) SolrIndexSearcher.getIndexFingerprint uses dubious synchronization

2016-10-04 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547393#comment-15547393
 ] 

Yonik Seeley commented on SOLR-9524:


I wonder if we could use the same type of logic for 
UnInvertedField.getUnInvertedField() perhaps by adding an additional method 
to SolrCache that takes a creator?
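A minimal sketch of what such a creator-taking cache method could look like 
(CreatorCache and the field-name key are hypothetical, not Solr API; 
ConcurrentHashMap.computeIfAbsent supplies the create-at-most-once semantics):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

// Hypothetical sketch: a cache lookup that takes a creator function and
// guarantees the creator runs at most once per key, with no explicit locking.
public class CreatorCache<K, V> {
    private final ConcurrentHashMap<K, V> map = new ConcurrentHashMap<>();

    public V get(K key, Function<K, V> creator) {
        // computeIfAbsent is atomic: concurrent callers for the same key
        // block until the single creator invocation completes.
        return map.computeIfAbsent(key, creator);
    }

    public static void main(String[] args) {
        CreatorCache<String, String> cache = new CreatorCache<>();
        AtomicInteger calls = new AtomicInteger();
        Function<String, String> creator =
            field -> { calls.incrementAndGet(); return "uninverted:" + field; };
        cache.get("author_s", creator);
        cache.get("author_s", creator);      // second lookup hits the cache
        System.out.println(calls.get());     // prints 1
    }
}
```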

> SolrIndexSearcher.getIndexFingerprint uses dubious synchronization
> --
>
> Key: SOLR-9524
> URL: https://issues.apache.org/jira/browse/SOLR-9524
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.3
>Reporter: Mike Drob
>Assignee: Noble Paul
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9524.patch
>
>
> In SOLR-9310 we added more code that does some fingerprint caching in 
> SolrIndexSearcher. However, the synchronization looks like it could be made 
> more efficient and may have issues with correctness.
> https://github.com/apache/lucene-solr/blob/branch_6x/solr/core/src/java/org/apache/solr/search/SolrIndexSearcher.java#L2371-L2385
> Some of the issues:
> * Double-checked locking needs volatile variables to ensure proper 
> memory semantics.
> * sync on a ConcurrentHashMap is usually a code smell
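For context, a minimal sketch of the double-checked locking idiom done 
correctly (FingerprintHolder and computeFingerprint are made-up names, not the 
Solr code): the cached field must be volatile so a fully constructed value is 
safely published to other threads.

```java
// Double-checked locking, done correctly: one volatile read on the fast path,
// a re-check under the lock, and a volatile write to publish the value.
public class FingerprintHolder {
    private static volatile String fingerprint;   // volatile is the key part

    public static String getFingerprint() {
        String local = fingerprint;               // single volatile read
        if (local == null) {
            synchronized (FingerprintHolder.class) {
                local = fingerprint;              // re-check under the lock
                if (local == null) {
                    local = computeFingerprint();
                    fingerprint = local;          // volatile write publishes safely
                }
            }
        }
        return local;
    }

    private static String computeFingerprint() {
        return "maxVersion=42";                   // stand-in for the real work
    }

    public static void main(String[] args) {
        System.out.println(getFingerprint());     // prints maxVersion=42
    }
}
```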



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-9602) Support Bucket Filters in Facet Functions

2016-10-04 Thread jefferyyuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jefferyyuan closed SOLR-9602.
-
Resolution: Duplicate

Yonik Seeley already created https://issues.apache.org/jira/browse/SOLR-9603.  

> Support Bucket Filters in Facet Functions
> -
>
> Key: SOLR-9602
> URL: https://issues.apache.org/jira/browse/SOLR-9602
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module, faceting
>Reporter: jefferyyuan
>  Labels: facet, faceted-search, faceting, function
> Fix For: 5.5.4, 6.3, 6.x, 6.2.2
>
>
> Original link: 
> http://lucene.472066.n3.nabble.com/Facet-Stats-MinCount-How-to-use-mincount-filter-when-use-facet-stats-td4299367.html
> we need bucket filters in general (beyond mincount).  - Yonik Seeley
> We store some events data such as accountId, startTime, endTime, timeSpent 
> and some other searchable fields.
> We want to get all accountIds that spend more than x hours between startTime 
> and endTime, and some other criteria which are not important here.
> We use the Solr facet function like below.
> It's very powerful. The only missing part is that it doesn't support minValue 
> and maxValue filters. 
> http://localhost:8983/solr/events/select?q=*:*&json.facet={ 
>categories:{ 
>  type : terms, 
>  field : accountId, 
>  numBuckets: true, 
>  facet:{ 
>sum : "sum(timeSpent)" 
>// it would be great if we support minValue, maxValue to do filter 
> here 
>  } 
>} 
>  }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9603) Facet bucket filters

2016-10-04 Thread jefferyyuan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547369#comment-15547369
 ] 

jefferyyuan commented on SOLR-9603:
---

Original link: 
http://lucene.472066.n3.nabble.com/Facet-Stats-MinCount-How-to-use-mincount-filter-when-use-facet-stats-td4299367.html
https://issues.apache.org/jira/browse/SOLR-9602

> Facet bucket filters
> 
>
> Key: SOLR-9603
> URL: https://issues.apache.org/jira/browse/SOLR-9603
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Yonik Seeley
>
> "filter" may be a bit of an overloaded term, but it would be nice to be able 
> to filter facet buckets by additional things, like the metrics that are 
> calculated per bucket.
> This is like the HAVING clause in SQL.
> Example of a facet that would group by author, find the average review rating 
> for that author, and filter out authors (buckets) with less than a 3.5 
> average.
>  
> {code}
> reviews : {
>   type : terms,
>   field: author,
>   sort: "x desc",
>   having: "x >= 3.5",
>   facet : {
> x : avg(rating)
>   }
> }
> {code}
>  
> This functionality would also be useful for "pushing down" more calculations 
> to the endpoints for streaming expressions / SQL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-4809) OpenOffice document body is not indexed by SolrCell

2016-10-04 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-4809.
---
Resolution: Implemented

This was a Tika issue, not Solr. And it was already implemented in Tika 1.5.

> OpenOffice document body is not indexed by SolrCell
> ---
>
> Key: SOLR-4809
> URL: https://issues.apache.org/jira/browse/SOLR-4809
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - Solr Cell (Tika extraction)
>Affects Versions: 3.6.1, 4.3
>Reporter: Jack Krupansky
> Attachments: HelloWorld.docx, HelloWorld.odp, HelloWorld.odt, 
> HelloWorld.txt, SOLR-4809.patch
>
>
> As reported on the solr user mailing list, SolrCell is not indexing document 
> body content for OpenOffice documents.
> I tested with Apache Open Office 3.4.1 on Solr 4.3 and 3.6.1, for both 
> OpenWriter (.ODT) and Impress (.ODS).
> The extractOnly option does return the document body text, but Solr does not 
> index the document body text. In my test cases (.ODS and .ODT), all I see for 
> the "content" attribute in Solr are a few spaces.
> Using the example schema, I indexed HelloWorld.odt using:
> {code}
>  curl 
> "http://localhost:8983/solr/update/extract?literal.id=doc-1&uprefix=attr_&commit=true"
>  -F "myfile=@HelloWorld.odt"
> {code}
> It queries as:
> {code}
> [query response omitted: the mail archive stripped the XML element tags, 
> leaving only values; the indexed "content" field contains only a few spaces]
> {code}
> Command to extract as text:
> {code}
> curl 
> "http://localhost:8983/solr/update/extract?literal.id=doc-1=true=true=text=true;
>  -F "myfile=@HelloWorld.odt"
> {code}
> The response:
> {code}
> Hello World, from OpenOffice!
> Third line.
> Fourth line.
> The end.
> [remaining metadata omitted: the mail archive stripped the XML element tags]
> {code}

[jira] [Commented] (SOLR-8709) Add checksum to the TopicStream to ensure delivery of all documents within a Topic

2016-10-04 Thread Shishir Choudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547325#comment-15547325
 ] 

Shishir Choudhary commented on SOLR-8709:
-

This feature is very useful. Is it going to be fixed soon?

> Add checksum to the TopicStream to ensure delivery of all documents within a 
> Topic
> --
>
> Key: SOLR-8709
> URL: https://issues.apache.org/jira/browse/SOLR-8709
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>
> Currently the TopicStream can miss documents if version numbers are received 
> out-of-order. The TopicStream sorts on version number so it will only miss 
> out-of-order versions that span commit boundaries. *Stress testing was not 
> able to create a missed document scenario* (see comment below), but code 
> review points to the possibility of this happening.
> In order to resolve this issue we can adopt an approach that keeps a checksum 
> of the version numbers for a sliding time window. This checksum can be 
> checked on each run, and if the checksums don't match, the documents from the 
> time window can be resent. As long as the time window is longer than the 
> softCommit interval, this will guarantee delivery of all documents for the 
> Topic. This won't guarantee *one time delivery* but should provide a 
> reasonable expectation of it.
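A rough sketch of the proposed sliding-window checksum (VersionWindow is a 
hypothetical name, not Solr code; an order-independent XOR is used here so 
out-of-order versions still produce matching checksums):

```java
import java.util.ArrayDeque;

// Sketch: keep (timestamp, version) pairs inside a sliding time window and
// expose an order-independent checksum; a mismatch between the client's and
// server's checksums means the window's documents must be resent.
public class VersionWindow {
    private final long windowMillis;
    private final ArrayDeque<long[]> entries = new ArrayDeque<>(); // {timestamp, version}
    private long checksum;                                          // XOR of versions in window

    public VersionWindow(long windowMillis) {
        this.windowMillis = windowMillis;
    }

    public void add(long now, long version) {
        entries.addLast(new long[]{now, version});
        checksum ^= version;       // XOR is commutative: arrival order is irrelevant
        expire(now);
    }

    public long checksum(long now) {
        expire(now);
        return checksum;
    }

    private void expire(long now) {
        // Drop entries that fell out of the sliding window, un-XORing them.
        while (!entries.isEmpty() && entries.peekFirst()[0] < now - windowMillis) {
            checksum ^= entries.removeFirst()[1];
        }
    }

    public static void main(String[] args) {
        VersionWindow a = new VersionWindow(1000);
        a.add(0, 101); a.add(10, 102);
        VersionWindow b = new VersionWindow(1000);
        b.add(0, 102); b.add(10, 101);                 // same versions, other order
        System.out.println(a.checksum(20) == b.checksum(20)); // prints true
    }
}
```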



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9579) Reuse lucene FieldType in createField flow during ingestion

2016-10-04 Thread John Call (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Call updated SOLR-9579:

Attachment: SOLR-9579.patch

Made SchemaField implement IndexableFieldType and updated lucene.document.Field 
to depend on IndexableFieldType instead of FieldType. Not sure if I should 
rename some of the existing methods in SchemaField to conform to 
IndexableFieldType instead of having duplicate methods, as this will touch a 
large number of files (e.g. tokenized/isTokenized, 
storeTermVector/storeTermVectors, etc.).

> Reuse lucene FieldType in createField flow during ingestion
> ---
>
> Key: SOLR-9579
> URL: https://issues.apache.org/jira/browse/SOLR-9579
> Project: Solr
>  Issue Type: Improvement
>  Components: Schema and Analysis
>Affects Versions: 6.x, master (7.0)
> Environment: This has been primarily tested on Windows 8 and Windows 
> Server 2012 R2
>Reporter: John Call
>Priority: Minor
>  Labels: gc, memory, reuse
> Fix For: 6.x, master (7.0)
>
> Attachments: SOLR-9579.patch, SOLR-9579.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> During ingestion createField in FieldType is being called for each field on 
> each document. For the subclasses of FieldType without their own 
> implementation of createField the lucene version of FieldType is created to 
> be stored along with the value. However the lucene FieldType object is 
> identical when created from the same SchemaField. In testing ingestion of one 
> million rows with 22 fields each, we were creating 22 million lucene FieldType 
> objects when only 22 are needed. Solr should lazily initialize a lucene 
> FieldType for each SchemaField and reuse it for future ingestion. Not only 
> does this reduce memory usage, it also relieves significant pressure on the 
> GC.
> There are also subclasses of Solr FieldType which create separate Lucene 
> FieldType for stored fields instead of reusing the static in StoredField.
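A hedged sketch of the proposed reuse (class and field names are illustrative, 
not the actual patch): since a frozen Lucene FieldType is effectively 
immutable, a racy single-check lazy initializer is enough to build it at most 
a handful of times and then hand out one cached instance per schema field.

```java
// Illustrative sketch: build the per-field Lucene-style FieldType lazily and
// reuse it, instead of allocating one per ingested value.
public class SchemaFieldSketch {
    // Stand-in for org.apache.lucene.document.FieldType (immutable once frozen).
    static final class LuceneFieldType {
        final boolean stored, tokenized;
        LuceneFieldType(boolean stored, boolean tokenized) {
            this.stored = stored;
            this.tokenized = tokenized;
        }
    }

    private final boolean stored, tokenized;
    private volatile LuceneFieldType cached;   // lazily built, then reused

    SchemaFieldSketch(boolean stored, boolean tokenized) {
        this.stored = stored;
        this.tokenized = tokenized;
    }

    LuceneFieldType fieldType() {
        LuceneFieldType local = cached;
        if (local == null) {
            // Benign race: the value is immutable, so losing the race just
            // builds an identical instance; no lock is needed.
            local = new LuceneFieldType(stored, tokenized);
            cached = local;
        }
        return local;
    }

    public static void main(String[] args) {
        SchemaFieldSketch field = new SchemaFieldSketch(true, false);
        // 22 fields x 1M docs previously meant 22M allocations; now ~1 per field:
        System.out.println(field.fieldType() == field.fieldType()); // prints true
    }
}
```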



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+136) - Build # 17972 - Still unstable!

2016-10-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17972/
Java: 32bit/jdk-9-ea+136 -server -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.cloud.ShardSplitTest.testSplitAfterFailedSplit

Error Message:
expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([590AE1AC0694E8DA:A04772033AE1A550]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.ShardSplitTest.testSplitAfterFailedSplit(ShardSplitTest.java:278)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:535)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[jira] [Created] (SOLR-9603) Facet bucket filters

2016-10-04 Thread Yonik Seeley (JIRA)
Yonik Seeley created SOLR-9603:
--

 Summary: Facet bucket filters
 Key: SOLR-9603
 URL: https://issues.apache.org/jira/browse/SOLR-9603
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Facet Module
Reporter: Yonik Seeley


"filter" may be a bit of an overloaded term, but it would be nice to be able to 
filter facet buckets by additional things, like the metrics that are calculated 
per bucket.

This is like the HAVING clause in SQL.

Example of a facet that would group by author, find the average review rating 
for that author, and filter out authors (buckets) with less than a 3.5 average.
 
{code}
reviews : {
  type : terms,
  field: author,
  sort: "x desc",
  having: "x >= 3.5",
  facet : {
x : avg(rating)
  }
}
{code}
 
This functionality would also be useful for "pushing down" more calculations to 
the endpoints for streaming expressions / SQL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9602) Support Bucket Filters in Facet Functions

2016-10-04 Thread jefferyyuan (JIRA)
jefferyyuan created SOLR-9602:
-

 Summary: Support Bucket Filters in Facet Functions
 Key: SOLR-9602
 URL: https://issues.apache.org/jira/browse/SOLR-9602
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Facet Module, faceting
Reporter: jefferyyuan
 Fix For: 5.5.4, 6.3, 6.x, 6.2.2


Original link: 
http://lucene.472066.n3.nabble.com/Facet-Stats-MinCount-How-to-use-mincount-filter-when-use-facet-stats-td4299367.html

we need bucket filters in general (beyond mincount).  - Yonik Seeley

We store some event data, such as accountId, startTime, endTime, timeSpent, and 
some other searchable fields.

We want to get all accountIds that spent more than x hours between startTime and 
endTime, plus some other criteria that are not important here.

We use the Solr facet function like below.
It's very powerful. The only missing part is that it doesn't support minValue 
and maxValue filters. 
http://localhost:8983/solr/events/select?q=*:*={ 
   categories:{ 
 type : terms, 
 field : accountId, 
 numBuckets: true, 
 facet:{ 
   sum : "sum(timeSpent)" 
   // it would be great if we support minValue, maxValue to do filter here 
 } 
   } 
 }
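Until bucket filters are supported server-side, one workaround is to fetch all buckets and filter on the metric client-side. Below is a minimal Python sketch of that approach; the response dict is a simplified stand-in for a JSON Facet API response, and all field names and values are hypothetical.

```python
# Client-side workaround sketch: filter facet buckets on a computed
# metric (here, sum of timeSpent) after Solr returns them. The
# response shape below is a simplified, hypothetical stand-in.
response = {
    "facets": {
        "categories": {
            "buckets": [
                {"val": "acct-1", "count": 3, "sum": 12.5},
                {"val": "acct-2", "count": 2, "sum": 1.0},
                {"val": "acct-3", "count": 5, "sum": 7.0},
            ]
        }
    }
}

def filter_buckets(resp, facet_name, metric, min_value=None, max_value=None):
    # Keep only buckets whose metric falls within [min_value, max_value].
    kept = []
    for b in resp["facets"][facet_name]["buckets"]:
        if min_value is not None and b[metric] < min_value:
            continue
        if max_value is not None and b[metric] > max_value:
            continue
        kept.append(b)
    return kept

# Accounts that spent at least 5 hours:
print([b["val"] for b in filter_buckets(response, "categories", "sum", min_value=5)])
# → ['acct-1', 'acct-3']
```

The obvious downside, and the motivation for this issue, is that the client must pull back every bucket before filtering, which does not scale for high-cardinality fields.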







[jira] [Created] (SOLR-9601) DIH: Radically simplify Tika example to only show relevant configuration

2016-10-04 Thread Alexandre Rafalovitch (JIRA)
Alexandre Rafalovitch created SOLR-9601:
---

 Summary: DIH: Radically simplify Tika example to only show 
relevant configuration
 Key: SOLR-9601
 URL: https://issues.apache.org/jira/browse/SOLR-9601
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: contrib - DataImportHandler, contrib - Solr Cell (Tika 
extraction)
Affects Versions: 6.x, master (7.0)
Reporter: Alexandre Rafalovitch
Assignee: Alexandre Rafalovitch


Solr's DIH examples are legacy examples that show how DIH works. However, they 
include full configurations that may obscure the teaching points. This is no longer 
needed, as we have 3 full-blown examples in the configsets. 

Specifically for Tika, the field type definitions were at some point 
simplified to require fewer support files in the configuration directory. This, 
however, means that we now have field definitions with the same names as other 
examples but different definitions. 

Importantly, Tika does not use most (any?) of those modified definitions. They 
are there just for completeness. Similarly, the solrconfig.xml includes the extract 
handler even though we are demonstrating a different path of using Tika. 
Somebody grepping through the config files may get confused about which 
configuration aspects contribute to which experience.

I am planning to significantly simplify the configuration and schema of the Tika 
example to **only** show the DIH Tika extraction path. It will end up as a very 
short and focused example.






[jira] [Closed] (SOLR-9424) Deleting is not happening in solr 5.4.1 with Manifold CF For Sharepoint

2016-10-04 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-9424.
---
Resolution: Information Provided

The last response provided has a basic XML error (mismatched tags). The mailing 
list is the better place to resolve these kinds of issues, as already 
recommended.
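For reference, a well-formed XML delete command posted to the update handler must have matched tags. A minimal sketch, using a hypothetical document ID and the stdlib parser to check well-formedness:

```python
# Sketch: a well-formed Solr XML delete command (hypothetical doc id),
# checked for matched tags with Python's stdlib parser. A body like
# this would be POSTed to /update (with commit=true or a later commit).
import xml.etree.ElementTree as ET

delete_cmd = "<delete><id>doc-30</id></delete>"

# fromstring raises ParseError if any tag is mismatched
root = ET.fromstring(delete_cmd)
print(root.tag, root.find("id").text)
# → delete doc-30
```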

> Deleting is not happening in solr 5.4.1 with Manifold CF For Sharepoint
> ---
>
> Key: SOLR-9424
> URL: https://issues.apache.org/jira/browse/SOLR-9424
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: soundarya g
>
> I'm trying to crawl the SharePoint list using ManifoldCF with Solr 5.4.1. When 
> a particular item gets deleted, ManifoldCF is able to send the query to Solr, but 
> Solr is not updating the deleted documents in the index.
> Following are Solr logs:
> 2016-08-19 13:16:28.361 INFO  (qtp1450821318-15) [   x:tika] 
> o.a.s.u.p.LogUpdateProcessorFactory [tika] webapp=/solr path=/update 
> params={wt=xml=2.2} 
> {delete=[http://az0165d:2525/sites/ASLC/Lists/DemoList/30_.000 
> (-1543097641453223936)]} 0 11
> 2016-08-19 13:16:28.391 INFO  (commitScheduler-15-thread-1) [   x:tika] 
> o.a.s.u.DirectUpdateHandler2 start 
> commit{,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
> 2016-08-19 13:16:28.422 INFO  (commitScheduler-15-thread-1) [   x:tika] 
> o.a.s.c.SolrDeletionPolicy SolrDeletionPolicy.onCommit: commits: num=2
>   
> commit{dir=NRTCachingDirectory(MMapDirectory@E:\solenewtry\solr-5.4.1\solr-5.4.1\server\solr\tika\data\index
>  lockFactory=org.apache.lucene.store.NativeFSLockFactory@38f651f7; 
> maxCacheMB=48.0 maxMergeSizeMB=4.0),segFN=segments_c9,generation=441}
>   
> commit{dir=NRTCachingDirectory(MMapDirectory@E:\solenewtry\solr-5.4.1\solr-5.4.1\server\solr\tika\data\index
>  lockFactory=org.apache.lucene.store.NativeFSLockFactory@38f651f7; 
> maxCacheMB=48.0 maxMergeSizeMB=4.0),segFN=segments_ca,generation=442}
> 2016-08-19 13:16:28.422 INFO  (commitScheduler-15-thread-1) [   x:tika] 
> o.a.s.c.SolrDeletionPolicy newest commit generation = 442
> 2016-08-19 13:16:28.422 INFO  (commitScheduler-15-thread-1) [   x:tika] 
> o.a.s.s.SolrIndexSearcher Opening Searcher@5021dfc7[tika] main
> 2016-08-19 13:16:28.422 INFO  (searcherExecutor-7-thread-1-processing-x:tika) 
> [   x:tika] o.a.s.c.QuerySenderListener QuerySenderListener sending requests 
> to Searcher@5021dfc7[tika] 
> main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_ei(5.4.1):C1)))}
> 2016-08-19 13:16:28.422 INFO  (searcherExecutor-7-thread-1-processing-x:tika) 
> [   x:tika] o.a.s.c.QuerySenderListener QuerySenderListener done.
> 2016-08-19 13:16:28.422 INFO  (searcherExecutor-7-thread-1-processing-x:tika) 
> [   x:tika] o.a.s.c.SolrCore [tika] Registered new searcher 
> Searcher@5021dfc7[tika] 
> main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_ei(5.4.1):C1)))}
> 2016-08-19 13:16:28.438 INFO  (commitScheduler-15-thread-1) [   x:tika] 
> o.a.s.u.DirectUpdateHandler2 end_commit_flush
> 2016-08-19 13:16:30.489 INFO  (qtp1450821318-16) [   x:tika] 
> o.a.s.u.DirectUpdateHandler2 start 
> commit{,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
> 2016-08-19 13:16:30.489 INFO  (qtp1450821318-16) [   x:tika] 
> o.a.s.u.DirectUpdateHandler2 No uncommitted changes. Skipping IW.commit.
> 2016-08-19 13:16:30.489 INFO  (qtp1450821318-16) [   x:tika] o.a.s.c.SolrCore 
> SolrIndexSearcher has not changed - not re-opening: 
> org.apache.solr.search.SolrIndexSearcher
> 2016-08-19 13:16:30.489 INFO  (qtp1450821318-16) [   x:tika] 
> o.a.s.u.DirectUpdateHandler2 end_commit_flush
> 2016-08-19 13:16:30.489 INFO  (qtp1450821318-16) [   x:tika] 
> o.a.s.u.p.LogUpdateProcessorFactory [tika] webapp=/solr path=/update/extract 
> params={commit=true=xml=2.2} {commit=} 0 3
> 2016-08-19 13:17:28.801 INFO  (qtp1450821318-14) [   x:tika] 
> o.a.s.c.S.Request [tika] webapp=/solr path=/select 
> params={q=*:*=true=json&_=1471612648791} hits=1 status=0 QTime=0 
> --
> I have committed manually in the browser with a query like the following:
> http://localhost:8981/solr/tika/update?commit=true
> but the deletion is still not happening :(






[jira] [Assigned] (SOLR-4645) Missing Adobe XMP library can abort DataImportHandler process

2016-10-04 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch reassigned SOLR-4645:
---

Assignee: Alexandre Rafalovitch

> Missing Adobe XMP library can abort DataImportHandler process
> -
>
> Key: SOLR-4645
> URL: https://issues.apache.org/jira/browse/SOLR-4645
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler, contrib - Solr Cell (Tika 
> extraction)
>Affects Versions: 4.2
>Reporter: Alexandre Rafalovitch
>Assignee: Alexandre Rafalovitch
>Priority: Minor
> Fix For: 6.0
>
>
> Solr distribution is missing Adobe XMP library ( 
> http://www.adobe.com/devnet/xmp.html ). In particular code path, DIH+Tika 
> tries to load an XMPException and fails with ClassNotFound. The library is 
> present in Tika's distribution.






[jira] [Closed] (SOLR-4826) TikaException Parsing PPTX file

2016-10-04 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-4826.
---
Resolution: Information Provided

An old bug report against Tika (not solvable in Solr directly).

> TikaException Parsing PPTX file
> ---
>
> Key: SOLR-4826
> URL: https://issues.apache.org/jira/browse/SOLR-4826
> Project: Solr
>  Issue Type: Bug
>Reporter: Thomas Weidman
>
> Error parsing PPTX file:
> org.apache.tika.exception.TikaException: TIKA-198: Illegal IOException from 
> org.apache.tika.parser.microsoft.ooxml.OOXMLParser@33d839d1
> org.apache.solr.common.SolrException: 
> org.apache.tika.exception.TikaException: TIKA-198: Illegal IOException from 
> org.apache.tika.parser.microsoft.ooxml.OOXMLParser@33d839d1
>   at 
> org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(ExtractingDocumentLoader.java:225)
>   at 
> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
>   at 
> org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.handleRequest(RequestHandlers.java:240)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1699)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:455)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:276)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
>   at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
>   at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
>   at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
>   at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
>   at 
> org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:555)
>   at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
>   at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
>   at 
> org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
>   at 
> org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
>   at 
> org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
>   at java.lang.Thread.run(Thread.java:619)
> Caused by: org.apache.tika.exception.TikaException: TIKA-198: Illegal 
> IOException from org.apache.tika.parser.microsoft.ooxml.OOXMLParser@33d839d1
>   at 
> org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:248)
>   at 
> org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:242)
>   at 
> org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:120)
>   at 
> org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(ExtractingDocumentLoader.java:219)
>   ... 19 more
> Caused by: java.io.IOException: Unable to read entire header; 0 bytes read; 
> expected 512 bytes
>   at 
> org.apache.poi.poifs.storage.HeaderBlock.alertShortRead(HeaderBlock.java:226)
>   at 
> org.apache.poi.poifs.storage.HeaderBlock.readFirst512(HeaderBlock.java:207)
>   at 
> org.apache.poi.poifs.storage.HeaderBlock.init(HeaderBlock.java:104)
>   at 
> org.apache.poi.poifs.filesystem.POIFSFileSystem.init(POIFSFileSystem.java:138)
>   at 
> org.apache.tika.parser.microsoft.ooxml.AbstractOOXMLExtractor.handleEmbeddedOLE(AbstractOOXMLExtractor.java:149)
>   at 
> org.apache.tika.parser.microsoft.ooxml.AbstractOOXMLExtractor.handleEmbeddedParts(AbstractOOXMLExtractor.java:129)
>   at 
> org.apache.tika.parser.microsoft.ooxml.AbstractOOXMLExtractor.getXHTML(AbstractOOXMLExtractor.java:107)
>   at 
> org.apache.tika.parser.microsoft.ooxml.OOXMLExtractorFactory.parse(OOXMLExtractorFactory.java:112)
>   at 
> org.apache.tika.parser.microsoft.ooxml.OOXMLParser.parse(OOXMLParser.java:82)
>   at 
> org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:242)
>   ... 22 more






[jira] [Comment Edited] (SOLR-9599) Facet performance regression using fieldcache and new DV iterator API

2016-10-04 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547253#comment-15547253
 ] 

Yonik Seeley edited comment on SOLR-9599 at 10/5/16 1:01 AM:
-

A quick test of the same fields in the same index shows hits to sorting and 
function queries as well.

With a quick manual test, this was 51% slower:
{code}
http://localhost:8983/solr/collection1/query?q=*:*%20mydate_dt:NOW=id=s10_s%20desc,%20s100_s%20desc,%20s1000_s%20desc
{code}

And this was 78% slower:
{code}
http://localhost:8983/solr/collection1/query?q=*:*%20mydate_dt:NOW%20{!func%20v=$vv}=id=add(exists(s10_s),exists(s100_s),exists(s1000_s))
{code}


was (Author: ysee...@gmail.com):
A quick test of the same fields in the same index shows hits to sorting and 
function queries as well.

With a quick manual test, this was 51% slower:
http://localhost:8983/solr/collection1/query?q=*:*%20mydate_dt:NOW=id=s10_s%20desc,%20s100_s%20desc,%20s1000_s%20desc

And this was 78% slower:
http://localhost:8983/solr/collection1/query?q=*:*%20mydate_dt:NOW%20{!func%20v=$vv}=id=add(exists(s10_s),exists(s100_s),exists(s1000_s))


> Facet performance regression using fieldcache and new DV iterator API
> -
>
> Key: SOLR-9599
> URL: https://issues.apache.org/jira/browse/SOLR-9599
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (7.0)
>Reporter: Yonik Seeley
> Fix For: master (7.0)
>
>
> I did a quick performance comparison of faceting indexed fields (i.e. 
> docvalues are not stored) using method=dv before and after the new docvalues 
> iterator went in (LUCENE-7407).
> 5M document index, 21 segments, single valued string fields w/ no missing 
> values.
> || field cardinality || new_time / old_time ||
> |10|2.01|
> |1000|2.02|
> |1|1.85|
> |10|1.56|
> |100|1.31|
> So unfortunately, often twice as slow.






[jira] [Commented] (SOLR-9599) Facet performance regression using fieldcache and new DV iterator API

2016-10-04 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547253#comment-15547253
 ] 

Yonik Seeley commented on SOLR-9599:


A quick test of the same fields in the same index shows hits to sorting and 
function queries as well.

With a quick manual test, this was 51% slower:
http://localhost:8983/solr/collection1/query?q=*:*%20mydate_dt:NOW=id=s10_s%20desc,%20s100_s%20desc,%20s1000_s%20desc

And this was 78% slower:
http://localhost:8983/solr/collection1/query?q=*:*%20mydate_dt:NOW%20{!func%20v=$vv}=id=add(exists(s10_s),exists(s100_s),exists(s1000_s))


> Facet performance regression using fieldcache and new DV iterator API
> -
>
> Key: SOLR-9599
> URL: https://issues.apache.org/jira/browse/SOLR-9599
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (7.0)
>Reporter: Yonik Seeley
> Fix For: master (7.0)
>
>
> I did a quick performance comparison of faceting indexed fields (i.e. 
> docvalues are not stored) using method=dv before and after the new docvalues 
> iterator went in (LUCENE-7407).
> 5M document index, 21 segments, single valued string fields w/ no missing 
> values.
> || field cardinality || new_time / old_time ||
> |10|2.01|
> |1000|2.02|
> |1|1.85|
> |10|1.56|
> |100|1.31|
> So unfortunately, often twice as slow.






[jira] [Commented] (SOLR-9600) RulesTest.doIntegrationTest() failures

2016-10-04 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547211#comment-15547211
 ] 

Steve Rowe commented on SOLR-9600:
--

Pinging [~noble.paul] to see if he knows what's happening here.

> RulesTest.doIntegrationTest() failures
> --
>
> Key: SOLR-9600
> URL: https://issues.apache.org/jira/browse/SOLR-9600
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>
> My Jenkins has seen this test fail about 8 times today, mostly on branch_6x 
> but also on master, e.g. 
> [http://jenkins.sarowe.net/job/Lucene-Solr-tests-6.x/3049/], 
> [http://jenkins.sarowe.net/job/Lucene-Solr-tests-master/8833/].  This is new 
> - previous failure on my Jenkins was from August.  The failures aren't 100% 
> reproducible.
> From Policeman Jenkins 
> [https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6158]:
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=RulesTest 
> -Dtests.method=doIntegrationTest -Dtests.seed=D12AC7FA27544B42 
> -Dtests.slow=true -Dtests.locale=de-DE -Dtests.timezone=America/New_York 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
>[junit4] ERROR   14.1s J0 | RulesTest.doIntegrationTest <<<
>[junit4]> Throwable #1: 
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://127.0.0.1:51451/solr: Could not identify nodes matching 
> the rules [{"cores":"<4"}, {
>[junit4]>   "replica":"<2",
>[junit4]>   "node":"*"}, {"freedisk":">1"}]
>[junit4]>  tag values{
>[junit4]>   "127.0.0.1:51451_solr":{
>[junit4]> "node":"127.0.0.1:51451_solr",
>[junit4]> "cores":3,
>[junit4]> "freedisk":31},
>[junit4]>   "127.0.0.1:51444_solr":{
>[junit4]> "node":"127.0.0.1:51444_solr",
>[junit4]> "cores":1,
>[junit4]> "freedisk":31},
>[junit4]>   "127.0.0.1:51461_solr":{
>[junit4]> "node":"127.0.0.1:51461_solr",
>[junit4]> "cores":2,
>[junit4]> "freedisk":31},
>[junit4]>   "127.0.0.1:51441_solr":{
>[junit4]> "node":"127.0.0.1:51441_solr",
>[junit4]> "cores":2,
>[junit4]> "freedisk":31},
>[junit4]>   "127.0.0.1:51454_solr":{
>[junit4]> "node":"127.0.0.1:51454_solr",
>[junit4]> "cores":2,
>[junit4]> "freedisk":31}}
>[junit4]> Initial state for the coll : {
>[junit4]>   "shard1":{
>[junit4]> "127.0.0.1:51454_solr":1,
>[junit4]> "127.0.0.1:51444_solr":1},
>[junit4]>   "shard2":{
>[junit4]> "127.0.0.1:51461_solr":1,
>[junit4]> "127.0.0.1:51441_solr":1}}
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([D12AC7FA27544B42:3419807B3B20B940]:0)
>[junit4]>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:606)
>[junit4]>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:259)
>[junit4]>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248)
>[junit4]>  at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:439)
>[junit4]>  at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:391)
>[junit4]>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1288)
>[junit4]>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1058)
>[junit4]>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1000)
>[junit4]>  at 
> org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
>[junit4]>  at 
> org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
>[junit4]>  at 
> org.apache.solr.cloud.rule.RulesTest.doIntegrationTest(RulesTest.java:81)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Beasting current master with Miller's beasting script resulted in 6 failures 
> out of 50 iterations.
> I'm running {{git bisect}} in combination with beasting to see if I can find 
> the commit where this started happening.






[jira] [Created] (SOLR-9600) RulesTest.doIntegrationTest() failures

2016-10-04 Thread Steve Rowe (JIRA)
Steve Rowe created SOLR-9600:


 Summary: RulesTest.doIntegrationTest() failures
 Key: SOLR-9600
 URL: https://issues.apache.org/jira/browse/SOLR-9600
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Steve Rowe


My Jenkins has seen this test fail about 8 times today, mostly on branch_6x but 
also on master, e.g. 
[http://jenkins.sarowe.net/job/Lucene-Solr-tests-6.x/3049/], 
[http://jenkins.sarowe.net/job/Lucene-Solr-tests-master/8833/].  This is new - 
previous failure on my Jenkins was from August.  The failures aren't 100% 
reproducible.

From Policeman Jenkins 
[https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6158]:
{noformat}
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=RulesTest 
-Dtests.method=doIntegrationTest -Dtests.seed=D12AC7FA27544B42 
-Dtests.slow=true -Dtests.locale=de-DE -Dtests.timezone=America/New_York 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII
   [junit4] ERROR   14.1s J0 | RulesTest.doIntegrationTest <<<
   [junit4]> Throwable #1: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:51451/solr: Could not identify nodes matching 
the rules [{"cores":"<4"}, {
   [junit4]>   "replica":"<2",
   [junit4]>   "node":"*"}, {"freedisk":">1"}]
   [junit4]>  tag values{
   [junit4]>   "127.0.0.1:51451_solr":{
   [junit4]> "node":"127.0.0.1:51451_solr",
   [junit4]> "cores":3,
   [junit4]> "freedisk":31},
   [junit4]>   "127.0.0.1:51444_solr":{
   [junit4]> "node":"127.0.0.1:51444_solr",
   [junit4]> "cores":1,
   [junit4]> "freedisk":31},
   [junit4]>   "127.0.0.1:51461_solr":{
   [junit4]> "node":"127.0.0.1:51461_solr",
   [junit4]> "cores":2,
   [junit4]> "freedisk":31},
   [junit4]>   "127.0.0.1:51441_solr":{
   [junit4]> "node":"127.0.0.1:51441_solr",
   [junit4]> "cores":2,
   [junit4]> "freedisk":31},
   [junit4]>   "127.0.0.1:51454_solr":{
   [junit4]> "node":"127.0.0.1:51454_solr",
   [junit4]> "cores":2,
   [junit4]> "freedisk":31}}
   [junit4]> Initial state for the coll : {
   [junit4]>   "shard1":{
   [junit4]> "127.0.0.1:51454_solr":1,
   [junit4]> "127.0.0.1:51444_solr":1},
   [junit4]>   "shard2":{
   [junit4]> "127.0.0.1:51461_solr":1,
   [junit4]> "127.0.0.1:51441_solr":1}}
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([D12AC7FA27544B42:3419807B3B20B940]:0)
   [junit4]>at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:606)
   [junit4]>at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:259)
   [junit4]>at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248)
   [junit4]>at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:439)
   [junit4]>at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:391)
   [junit4]>at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1288)
   [junit4]>at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1058)
   [junit4]>at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1000)
   [junit4]>at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
   [junit4]>at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
   [junit4]>at 
org.apache.solr.cloud.rule.RulesTest.doIntegrationTest(RulesTest.java:81)
   [junit4]>at java.lang.Thread.run(Thread.java:745)
{noformat}

Beasting current master with Miller's beasting script resulted in 6 failures 
out of 50 iterations.

I'm running {{git bisect}} in combination with beasting to see if I can find 
the commit where this started happening.






[jira] [Commented] (SOLR-9599) Facet performance regression using fieldcache and new DV iterator API

2016-10-04 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547177#comment-15547177
 ] 

Yonik Seeley commented on SOLR-9599:


Here's the form of the request used (tests were run w/ logging at WARN level of 
course):
{code}
2016-10-05 00:25:24.042 INFO  (qtp110456297-16) [   x:collection1] 
o.a.s.c.S.Request [collection1]  webapp=/solr path=/select 
params={q=*:*={x:{method:dv,+type:terms,field:s10_s,limit:5}}=0=javabin}
 hits=4993847 status=0 QTime=174
{code}

> Facet performance regression using fieldcache and new DV iterator API
> -
>
> Key: SOLR-9599
> URL: https://issues.apache.org/jira/browse/SOLR-9599
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (7.0)
>Reporter: Yonik Seeley
> Fix For: master (7.0)
>
>
> I did a quick performance comparison of faceting indexed fields (i.e. 
> docvalues are not stored) using method=dv before and after the new docvalues 
> iterator went in (LUCENE-7407).
> 5M document index, 21 segments, single valued string fields w/ no missing 
> values.
> || field cardinality || new_time / old_time ||
> |10|2.01|
> |1000|2.02|
> |1|1.85|
> |10|1.56|
> |100|1.31|
> So unfortunately, often twice as slow.






[jira] [Created] (SOLR-9599) Facet performance regression using fieldcache and new DV iterator API

2016-10-04 Thread Yonik Seeley (JIRA)
Yonik Seeley created SOLR-9599:
--

 Summary: Facet performance regression using fieldcache and new DV 
iterator API
 Key: SOLR-9599
 URL: https://issues.apache.org/jira/browse/SOLR-9599
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: master (7.0)
Reporter: Yonik Seeley
 Fix For: master (7.0)


I did a quick performance comparison of faceting indexed fields (i.e. docvalues 
are not stored) using method=dv before and after the new docvalues iterator 
went in (LUCENE-7407).

5M document index, 21 segments, single valued string fields w/ no missing 
values.

|| field cardinality || new_time / old_time ||
|10|2.01|
|1000|2.02|
|1|1.85|
|10|1.56|
|100|1.31|

So unfortunately, often twice as slow.
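Conceptually (this is a simplified Python sketch, not Lucene code), the API change behind LUCENE-7407 replaced random-access docvalues lookups with forward-only iterators; a consumer that previously did an O(1) ordinal lookup per document now pays an advance() call per document, which plausibly explains per-document overhead like the above:

```python
# Conceptual sketch only: old-style random-access docvalues vs. the
# new forward-only iterator style. Names are illustrative, not Lucene's.

class RandomAccessDV:
    def __init__(self, ords):
        self.ords = ords
    def get(self, doc):
        return self.ords[doc]          # O(1) lookup by docID, any order

class IteratorDV:
    def __init__(self, ords):
        self.ords = ords
        self.doc = -1
    def advance(self, target):
        # forward-only: may only move to a doc past the current position
        assert target > self.doc, "iterator cannot go backwards"
        self.doc = target
        return self.ords[target]

ords = [3, 1, 4, 1, 5]                  # per-doc term ordinals
it = IteratorDV(ords)
counts = [0] * 6
for doc in range(len(ords)):            # faceting walks docs in order,
    counts[it.advance(doc)] += 1        # so the iterator contract holds
print(counts)
# → [0, 2, 0, 1, 1, 1]
```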






[jira] [Closed] (SOLR-3561) Error during deletion of shard/core

2016-10-04 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch closed SOLR-3561.
---
   Resolution: Implemented
Fix Version/s: (was: 6.0)
   (was: 4.9)

An old issue that may have been resolved by the other issues mentioned. If a similar 
problem shows up later, a new issue can be opened with more specific details.

> Error during deletion of shard/core
> ---
>
> Key: SOLR-3561
> URL: https://issues.apache.org/jira/browse/SOLR-3561
> Project: Solr
>  Issue Type: Bug
>  Components: multicore, replication (java), SolrCloud
>Affects Versions: 4.0-ALPHA
> Environment: Solr trunk (4.0-SNAPSHOT) from 29/2-2012
>Reporter: Per Steffensen
>Assignee: Mark Miller
>
> Running several Solr servers in Cloud-cluster (zkHost set on the Solr 
> servers).
> Several collections with several slices and one replica for each slice (each 
> slice has two shards)
> Basically we want to let our system delete an entire collection. We do this by 
> deleting each and every shard under the collection. Each shard is 
> deleted one by one, by sending CoreAdmin UNLOAD requests against the relevant 
> Solr server:
> {code}
> CoreAdminRequest request = new CoreAdminRequest();
> request.setAction(CoreAdminAction.UNLOAD);
> request.setCoreName(shardName);
> CoreAdminResponse resp = request.process(new CommonsHttpSolrServer(solrUrl));
> {code}
> The delete/unload succeeds, but in like 10% of the cases we get errors on 
> involved Solr servers, right around the time where shard/cores are deleted, 
> and we end up in a situation where ZK still claims (forever) that the deleted 
> shard is still present and active.
> From here the issue is more easily explained by a concrete example:
> - 7 Solr servers involved
> - Several collections, among others one called "collection_2012_04", consisting of 28 
> slices, 56 shards (remember 1 replica for each slice) named 
> "collection_2012_04_sliceX_shardY" for all pairs in {X:1..28}x{Y:1,2}
> - Each Solr server running 8 shards, e.g Solr server #1 is running shard 
> "collection_2012_04_slice1_shard1" and Solr server #7 is running shard 
> "collection_2012_04_slice1_shard2" belonging to the same slice "slice1".
> When we decide to delete the collection "collection_2012_04" we go through 
> all 56 shards and delete/unload them one-by-one - including 
> "collection_2012_04_slice1_shard1" and "collection_2012_04_slice1_shard2". At 
> some point during or shortly after all this deletion we see the following 
> exceptions in solr.log on Solr server #7
> {code}
> Aug 1, 2012 12:02:50 AM org.apache.solr.common.SolrException log
> SEVERE: Error while trying to recover:org.apache.solr.common.SolrException: 
> core not found:collection_2012_04_slice1_shard1
> request: 
> http://solr_server_1:8983/solr/admin/cores?action=PREPRECOVERY=collection_2012_04_slice1_shard1=solr_server_7%3A8983_solr=solr_server_7%3A8983_solr_collection_2012_04_slice1_shard2=recovering=true=6000=javabin=2
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> at 
> org.apache.solr.common.SolrExceptionPropagationHelper.decodeFromMsg(SolrExceptionPropagationHelper.java:29)
> at 
> org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:445)
> at 
> org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:264)
> at 
> org.apache.solr.cloud.RecoveryStrategy.sendPrepRecoveryCmd(RecoveryStrategy.java:188)
> at 
> org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:285)
> at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:206)
> Aug 1, 2012 12:02:50 AM org.apache.solr.common.SolrException log
> SEVERE: Recovery failed - trying again...
> Aug 1, 2012 12:02:51 AM org.apache.solr.cloud.LeaderElector$1 process
> WARNING:
> java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
> at java.util.ArrayList.RangeCheck(ArrayList.java:547)
> at java.util.ArrayList.get(ArrayList.java:322)
> at org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:96)
> at org.apache.solr.cloud.LeaderElector.access$000(LeaderElector.java:57)
> at org.apache.solr.cloud.LeaderElector$1.process(LeaderElector.java:121)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:531)
> at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:507)
> Aug 1, 2012 12:02:51 AM org.apache.solr.cloud.LeaderElector$1 process
> {code}
> I'm not sure exactly how to interpret this, but it 
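The IndexOutOfBoundsException in the log above comes from LeaderElector.checkIfIamLeader calling get(0) on an election-node list that can be empty once the cores have been unloaded. A minimal, library-free sketch of the missing guard (class, method, and variable names here are illustrative, not the actual LeaderElector code):

```java
import java.util.Collections;
import java.util.List;

public class ElectionGuard {
    /**
     * Returns the lowest-sequence election node, or null when the election
     * path has already been emptied (e.g. the collection was deleted while a
     * watcher was still registered). Calling get(0) on the sorted list
     * unguarded is exactly what throws "Index: 0, Size: 0".
     */
    public static String leaderCandidate(List<String> electionNodes) {
        if (electionNodes == null || electionNodes.isEmpty()) {
            return null; // no live election nodes left; caller should abort
        }
        return Collections.min(electionNodes); // lexicographically lowest sequence
    }
}
```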

[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_102) - Build # 6158 - Still Unstable!

2016-10-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6158/
Java: 32bit/jdk1.8.0_102 -server -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenCancelFail

Error Message:
expected:<200> but was:<404>

Stack Trace:
java.lang.AssertionError: expected:<200> but was:<404>
at 
__randomizedtesting.SeedInfo.seed([D12AC7FA27544B42:B995F2D0F7CE59AE]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.cancelDelegationToken(TestSolrCloudWithDelegationTokens.java:140)
at 
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenCancelFail(TestSolrCloudWithDelegationTokens.java:294)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (SOLR-9597) Add setReadOnly(String ...) to ConnectionImpl

2016-10-04 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-9597:
---
Affects Version/s: master (7.0)

> Add setReadOnly(String ...) to ConnectionImpl
> -
>
> Key: SOLR-9597
> URL: https://issues.apache.org/jira/browse/SOLR-9597
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: 6.2.1, master (7.0)
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Minor
> Attachments: SOLR-9597.patch
>
>
> When using OpenLink ODBC-JDBC bridge on Windows, it tries to run the method 
> ConnectionImpl.setReadOnly(String ...). The spec says that 
> setReadOnly(boolean ...) is required. This causes the bridge to fail on 
> Windows.
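One possible shape for the workaround (a sketch under assumptions, not the attached SOLR-9597.patch; ConnectionImpl here is a stand-in class, not Solr's real one) is a String varargs overload that delegates to the spec-mandated boolean method:

```java
public class ConnectionImpl {
    private boolean readOnly;

    /** The method required by the java.sql.Connection spec. */
    public void setReadOnly(boolean readOnly) {
        this.readOnly = readOnly;
    }

    /**
     * Non-standard overload for bridges (e.g. the OpenLink ODBC-JDBC bridge)
     * that invoke setReadOnly with String arguments; delegates to the boolean
     * version, treating a missing argument as read-write.
     */
    public void setReadOnly(String... args) {
        setReadOnly(args != null && args.length > 0 && Boolean.parseBoolean(args[0]));
    }

    public boolean isReadOnly() {
        return readOnly;
    }
}
```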



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9597) Add setReadOnly(String ...) to ConnectionImpl

2016-10-04 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-9597:
---
Attachment: SOLR-9597.patch

> Add setReadOnly(String ...) to ConnectionImpl
> -
>
> Key: SOLR-9597
> URL: https://issues.apache.org/jira/browse/SOLR-9597
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: 6.2.1, master (7.0)
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Minor
> Attachments: SOLR-9597.patch
>
>
> When using OpenLink ODBC-JDBC bridge on Windows, it tries to run the method 
> ConnectionImpl.setReadOnly(String ...). The spec says that 
> setReadOnly(boolean ...) is required. This causes the bridge to fail on 
> Windows.






[JENKINS] Lucene-Solr-Tests-6.x - Build # 465 - Still unstable

2016-10-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/465/

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.response.transform.TestSubQueryTransformerDistrib

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.response.transform.TestSubQueryTransformerDistrib: 1) 
Thread[id=1369, 
name=OverseerHdfsCoreFailoverThread-96706052996988937-127.0.0.1:38475_solr-n_01,
 state=TIMED_WAITING, group=Overseer Hdfs SolrCore Failover Thread.] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:137)
 at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.response.transform.TestSubQueryTransformerDistrib: 
   1) Thread[id=1369, 
name=OverseerHdfsCoreFailoverThread-96706052996988937-127.0.0.1:38475_solr-n_01,
 state=TIMED_WAITING, group=Overseer Hdfs SolrCore Failover Thread.]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:137)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([B6C73D6CCF57C41A]:0)




Build Log:
[...truncated 10791 lines...]
   [junit4] Suite: 
org.apache.solr.response.transform.TestSubQueryTransformerDistrib
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/build/solr-core/test/J2/temp/solr.response.transform.TestSubQueryTransformerDistrib_B6C73D6CCF57C41A-001/init-core-data-001
   [junit4]   2> 347909 INFO  
(SUITE-TestSubQueryTransformerDistrib-seed#[B6C73D6CCF57C41A]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: 
@org.apache.solr.SolrTestCaseJ4$SuppressSSL(bugUrl=None)
   [junit4]   2> 347910 INFO  
(SUITE-TestSubQueryTransformerDistrib-seed#[B6C73D6CCF57C41A]-worker) [] 
o.a.s.c.MiniSolrCloudCluster Starting cluster of 5 servers in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/build/solr-core/test/J2/temp/solr.response.transform.TestSubQueryTransformerDistrib_B6C73D6CCF57C41A-001/tempDir-001
   [junit4]   2> 347910 INFO  
(SUITE-TestSubQueryTransformerDistrib-seed#[B6C73D6CCF57C41A]-worker) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 347913 INFO  (Thread-392) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 347914 INFO  (Thread-392) [] o.a.s.c.ZkTestServer Starting 
server
   [junit4]   2> 348017 INFO  
(SUITE-TestSubQueryTransformerDistrib-seed#[B6C73D6CCF57C41A]-worker) [] 
o.a.s.c.ZkTestServer start zk server on port:51643
   [junit4]   2> 348033 INFO  (jetty-launcher-151-thread-1) [] 
o.e.j.s.Server jetty-9.3.8.v20160314
   [junit4]   2> 348038 INFO  (jetty-launcher-151-thread-5) [] 
o.e.j.s.Server jetty-9.3.8.v20160314
   [junit4]   2> 348039 INFO  (jetty-launcher-151-thread-4) [] 
o.e.j.s.Server jetty-9.3.8.v20160314
   [junit4]   2> 348047 INFO  (jetty-launcher-151-thread-2) [] 
o.e.j.s.Server jetty-9.3.8.v20160314
   [junit4]   2> 348098 INFO  (jetty-launcher-151-thread-5) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@2ad1{/solr,null,AVAILABLE}
   [junit4]   2> 348099 INFO  (jetty-launcher-151-thread-5) [] 
o.e.j.s.ServerConnector Started 
ServerConnector@4d016fb0{HTTP/1.1,[http/1.1]}{127.0.0.1:39449}
   [junit4]   2> 348099 INFO  (jetty-launcher-151-thread-5) [] 
o.e.j.s.Server Started @356564ms
   [junit4]   2> 348099 INFO  (jetty-launcher-151-thread-5) [] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
hostPort=39449}
   [junit4]   2> 348099 INFO  (jetty-launcher-151-thread-5) [] 
o.a.s.s.SolrDispatchFilter  ___  _   Welcome to Apache Solr™ version 
6.3.0
   [junit4]   2> 348099 INFO  (jetty-launcher-151-thread-5) [] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 348099 INFO  (jetty-launcher-151-thread-5) [] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 348099 INFO  (jetty-launcher-151-thread-5) [] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|Start time: 
2016-10-04T21:41:03.932Z
   [junit4]   2> 348222 INFO  (jetty-launcher-151-thread-1) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@2e50d386{/solr,null,AVAILABLE}
   [junit4]   2> 348222 INFO  (jetty-launcher-151-thread-1) [] 
o.e.j.s.ServerConnector Started 
ServerConnector@74d6604d{HTTP/1.1,[http/1.1]}{127.0.0.1:38475}
   [junit4]   2> 348222 INFO  (jetty-launcher-151-thread-1) [] 
o.e.j.s.Server Started @356688ms
   [junit4]   2> 348222 INFO  (jetty-launcher-151-thread-1) [] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
hostPort=38475}
   [junit4]   2> 348223 INFO  

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_102) - Build # 17971 - Failure!

2016-10-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17971/
Java: 64bit/jdk1.8.0_102 -XX:+UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 13264 lines...]
   [junit4] JVM J0: stdout was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-solrj/test/temp/junit4-J0-20161004_222131_721.sysout
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] java.lang.OutOfMemoryError: GC overhead limit exceeded
   [junit4] Dumping heap to 
/home/jenkins/workspace/Lucene-Solr-master-Linux/heapdumps/java_pid9076.hprof 
...
   [junit4] Heap dump file created [554944563 bytes in 1.441 secs]
   [junit4] <<< JVM J0: EOF 

[...truncated 10376 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:763: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:715: Some of the 
tests produced a heap dump, but did not fail. Maybe a suppressed 
OutOfMemoryError? Dumps created:
* java_pid9076.hprof

Total time: 56 minutes 14 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[jira] [Commented] (SOLR-9182) Test OOMs when ssl + clientAuth

2016-10-04 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546859#comment-15546859
 ] 

Hoss Man commented on SOLR-9182:


bq.  ...it's questionable when to evict that context from cache...

If you now have a reproducible test that verifies if/when connections are 
getting re-used by inspecting the PoolStats from the 
PoolingHttpClientConnectionManager, then perhaps we don't need a (solrj 
coded/managed) cache of HttpClientContexts at all? ... why not revisit alan's 
earlier patch of setting some simple singleton token so that the 
ConnectionManager knows *every* request it gets can re-use the same connections 
.. and then let the test verify that the ConnectionManager actually does that 
for us.

or am i misunderstanding the root cause?
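The singleton-token idea can be caricatured without any HttpClient types: key the cached per-request state by one shared token, so every request resolves to the same entry and the underlying connections get reused. (A plain Object stands in for an HttpClientContext here; this is a sketch of the pattern, not SolrJ code.)

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ContextReuse {
    /** Single shared token: with it, every request maps to one cached context. */
    public static final Object SINGLETON_TOKEN = new Object();

    private final Map<Object, Object> contexts = new ConcurrentHashMap<>();
    private int created = 0;

    /** Returns the context for the token, creating it at most once per token. */
    public Object contextFor(Object token) {
        return contexts.computeIfAbsent(token, t -> {
            created++;                 // count how many distinct contexts exist
            return new Object();       // stand-in for an HttpClientContext
        });
    }

    public int contextsCreated() {
        return created;
    }
}
```

With the singleton token, repeated requests never allocate a second context, which is the property a test could then verify against the connection manager's pool statistics.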

> Test OOMs when ssl + clientAuth
> ---
>
> Key: SOLR-9182
> URL: https://issues.apache.org/jira/browse/SOLR-9182
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
> Attachments: DistributedFacetPivotLongTailTest-heapintro.png, 
> SOLR-9182.patch, SOLR-9182.patch, SOLR-9182.patch
>
>
> the combination of SOLR-9028 fixing SSLTestConfig to actually pay attention 
> to clientAuth setting, and SOLR-9107 increasing the odds of ssl+clientAuth 
> being tested has helped surface some more tests that seem to fairly 
> consistently trigger OOM when running with SSL+clientAuth.
> I'm not sure if there is some underlying memory leak somewhere in the SSL 
> code we're using, or if this is just a factor of increased request/response 
> size when using (double) encrypted requests, but for now I'm just focusing on 
> opening a tracking issue for them and suppressing SSL in these cases with a 
> link here to clarify *why* we're suppressing SSL.






[jira] [Commented] (SOLR-9182) Test OOMs when ssl + clientAuth

2016-10-04 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546846#comment-15546846
 ] 

Hoss Man commented on SOLR-9182:


FWIW: You guys should probably consider creating a new jira with a more 
specific, on-point summary & description regarding the underlying bug (ie: 
"connections not being reused by client when SSL clientAuth enabled"), and then 
mark this issue as being blocked by the new one.

that way the nature of the underlying issue you're working to fix is more 
obvious to people skimming jira subjects/searches.

> Test OOMs when ssl + clientAuth
> ---
>
> Key: SOLR-9182
> URL: https://issues.apache.org/jira/browse/SOLR-9182
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
> Attachments: DistributedFacetPivotLongTailTest-heapintro.png, 
> SOLR-9182.patch, SOLR-9182.patch, SOLR-9182.patch
>
>
> the combination of SOLR-9028 fixing SSLTestConfig to actually pay attention 
> to clientAuth setting, and SOLR-9107 increasing the odds of ssl+clientAuth 
> being tested has helped surface some more tests that seem to fairly 
> consistently trigger OOM when running with SSL+clientAuth.
> I'm not sure if there is some underlying memory leak somewhere in the SSL 
> code we're using, or if this is just a factor of increased request/response 
> size when using (double) encrypted requests, but for now I'm just focusing on 
> opening a tracking issue for them and suppressing SSL in these cases with a 
> link here to clarify *why* we're suppressing SSL.






[jira] [Commented] (SOLR-9592) decorateDocValues cause serious performance issue because of using slowCompositeReaderWrapper

2016-10-04 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546847#comment-15546847
 ] 

Yonik Seeley commented on SOLR-9592:


bq. Do you disagree with renaming (-1) or weakly agree (+0)? 

I suppose weakly agree :-)

bq. Because, at first glance, getLeafReader 

Yeah, I really don't care for that name.

bq. If my understanding is correct, we should use MultiDocValues in cases where 
you essentially need a global view, and decorateDocValues usage is not such a 
case, right?

Right.  Sometimes you need both a global view and a segment view to do it 
right.  See something like FacetFieldProcessorByArrayDV, where we use both top 
level and segment level.

decorateDocValues would seem to only need segment level access.
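The global-vs-segment distinction above ultimately rests on offset arithmetic: a top-level (composite) docID is the leaf's docBase plus the segment-local docID. A library-free sketch of that mapping (leaf sizes are hypothetical; Lucene's LeafReaderContext carries the real docBase):

```java
public class DocBaseMap {
    private final int[] docBases; // starting top-level docID of each leaf

    public DocBaseMap(int[] leafSizes) {
        docBases = new int[leafSizes.length];
        int base = 0;
        for (int i = 0; i < leafSizes.length; i++) {
            docBases[i] = base;     // leaf i starts where the previous ones end
            base += leafSizes[i];
        }
    }

    /** Segment-local docID -> top-level docID, as a composite reader sees it. */
    public int toGlobal(int leaf, int localDocId) {
        return docBases[leaf] + localDocId;
    }
}
```

A top-level wrapper has to do this translation (plus ordinal merging, for sorted doc values) on every access, which is where the slowCompositeReaderWrapper cost comes from; per-segment access skips it entirely.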

> decorateDocValues cause serious performance issue because of using 
> slowCompositeReaderWrapper
> -
>
> Key: SOLR-9592
> URL: https://issues.apache.org/jira/browse/SOLR-9592
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Response Writers, search
>Affects Versions: 6.0, 6.1, 6.2
>Reporter: Takahiro Ishikawa
>  Labels: performance
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9592.patch, SOLR-9592.patch, SOLR-9592_6x.patch
>
>
> I have a serious performance issue using AtomicUpdate (and RealtimeGet) with 
> non-stored docValues.
> Because decorateDocValues tries to merge each leafReader on the fly via 
> slowCompositeReaderWrapper, it's extremely slow (> 10sec).
> Simply accessing docValues via a non-composite reader could resolve this 
> issue (patch). 
> AtomicUpdate performance(or RealtimeGet performance)
> * Environment
> ** solr version : 6.0.0
> ** schema ~ 100 fields(90% docValues, some of those are multi valued)
> ** index : 5,000,000
> * Performance
> ** original :  > 10sec per query
> ** patched : at least 100msec per query
> This patch will also enhance search performance, because DocStreamer also 
> fetches docValues via decorateDocValues.
> Though it depends on the environment, I saw about a 20% search performance 
> gain.
> (This patch originally written for solr 6.0.0, and now rewritten for master)






[jira] [Commented] (SOLR-9598) Solr RESTORE api doesn't wait for the restored collection to be fully ready for usage

2016-10-04 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546807#comment-15546807
 ] 

Hrishikesh Gadre commented on SOLR-9598:


[~varunthacker] [~dsmiley] Please let me know if you have any concerns with 
this proposal.

> Solr RESTORE api doesn't wait for the restored collection to be fully ready 
> for usage
> -
>
> Key: SOLR-9598
> URL: https://issues.apache.org/jira/browse/SOLR-9598
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.2
>Reporter: Hrishikesh Gadre
>
> As part of the RESTORE operation, Solr creates a new collection and adds 
> necessary number of replicas to each shard. The problem is that this 
> operation doesn't wait for this new collection to be fully ready for usage 
> (e.g. querying and indexing). This requires extra checks on the client side 
> to make sure that the recovery is complete and reflected in cluster status 
> stored in Zookeeper. e.g. refer to the backup/restore unit test for this 
> check,
> https://github.com/apache/lucene-solr/blob/722e82712435ecf46c9868137d885484152f749b/solr/core/src/test/org/apache/solr/cloud/AbstractCloudBackupRestoreTestCase.java#L234
> Ideally this check should be implemented in the RESTORE operation itself. 






[jira] [Created] (SOLR-9598) Solr RESTORE api doesn't wait for the restored collection to be fully ready for usage

2016-10-04 Thread Hrishikesh Gadre (JIRA)
Hrishikesh Gadre created SOLR-9598:
--

 Summary: Solr RESTORE api doesn't wait for the restored collection 
to be fully ready for usage
 Key: SOLR-9598
 URL: https://issues.apache.org/jira/browse/SOLR-9598
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 6.2
Reporter: Hrishikesh Gadre


As part of the RESTORE operation, Solr creates a new collection and adds 
necessary number of replicas to each shard. The problem is that this operation 
doesn't wait for this new collection to be fully ready for usage (e.g. querying 
and indexing). This requires extra checks on the client side to make sure that 
the recovery is complete and reflected in cluster status stored in Zookeeper. 
e.g. refer to the backup/restore unit test for this check,

https://github.com/apache/lucene-solr/blob/722e82712435ecf46c9868137d885484152f749b/solr/core/src/test/org/apache/solr/cloud/AbstractCloudBackupRestoreTestCase.java#L234

Ideally this check should be implemented in the RESTORE operation itself. 
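The client-side wait that AbstractCloudBackupRestoreTestCase performs can be sketched generically: poll a readiness predicate until it holds or a timeout expires. The predicate below is a stand-in for the real ZooKeeper cluster-state check (all replicas of the restored collection active); names and timings are illustrative only.

```java
import java.util.function.BooleanSupplier;

public class WaitFor {
    /**
     * Polls the predicate every intervalMs until it returns true or timeoutMs
     * elapses. Returns true if the condition was met within the deadline.
     */
    public static boolean waitUntil(BooleanSupplier ready, long timeoutMs, long intervalMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (ready.getAsBoolean()) {
                return true;
            }
            Thread.sleep(intervalMs);
        }
        return ready.getAsBoolean(); // one last check at the deadline
    }
}
```

Folding a loop like this into the RESTORE operation itself (with the predicate reading the collection's state from ZooKeeper) would spare every client from reimplementing the check.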






[jira] [Commented] (LUCENE-7465) Add a PatternTokenizer that uses Lucene's RegExp implementation

2016-10-04 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546776#comment-15546776
 ] 

Michael McCandless commented on LUCENE-7465:


Thank you for the example [~dweiss].  Indeed that's a hard regexp to 
determinize.  It's interesting because the determinization requires many 
states, yet it minimizes to an apparently contained number of states (though 
many transitions).

E.g. at 30 clauses, the determinized form produced 7652 states and 136898 
transitions, but after minimization that drops to 150 states and 2960 transitions.  
I tried to run {{dot}} on this FSA but it struggles :)

Net/net the DFA approach is not usable in some cases (like this one); such 
users must use the JDK implementation.  Maybe we should explore an {{re2j}} 
version too.

bq. Btw. if you're looking into this again, piggyback a change to 
Operations.determinize and replace LinkedList with an ArrayDeque, it certainly 
won't hurt.

Excellent, I'll fold that in!
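The suggested swap concerns the determinizer's worklist of pending DFA states: the access pattern is pure FIFO (add at the tail, remove from the head), for which ArrayDeque avoids LinkedList's per-node allocation and pointer chasing. A toy rendering of the pattern (the successor rule is made up; the real worklist holds state sets, not ints):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class Worklist {
    /** Drains a FIFO worklist breadth-first, recording processing order. */
    public static List<Integer> process(int first) {
        Deque<Integer> worklist = new ArrayDeque<>(); // was: new LinkedList<>()
        List<Integer> order = new ArrayList<>();
        worklist.add(first);
        while (!worklist.isEmpty()) {
            int state = worklist.removeFirst();
            order.add(state);
            // toy successor rule: each state < 3 enqueues two children
            if (state < 3) {
                worklist.add(state * 2 + 1);
                worklist.add(state * 2 + 2);
            }
        }
        return order;
    }
}
```

Since Deque is the declared type, the change is a one-line constructor swap with identical FIFO semantics.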

> Add a PatternTokenizer that uses Lucene's RegExp implementation
> ---
>
> Key: LUCENE-7465
> URL: https://issues.apache.org/jira/browse/LUCENE-7465
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.3
>
> Attachments: LUCENE-7465.patch, LUCENE-7465.patch
>
>
> I think there are some nice benefits to a version of PatternTokenizer that 
> uses Lucene's RegExp impl instead of the JDK's:
>   * Lucene's RegExp is compiled to a DFA up front, so if a "too hard" RegExp 
> is attempted the user discovers it up front instead of later on when a 
> "lucky" document arrives
>   * It processes the incoming characters as a stream, only pulling 128 
> characters at a time, vs the existing {{PatternTokenizer}} which currently 
> reads the entire string up front (this has caused heap problems in the past)
>   * It should be fast.
> I named it {{SimplePatternTokenizer}}, and it still needs a factory and 
> improved tests, but I think it's otherwise close.
> It currently does not take a {{group}} parameter because Lucene's RegExps 
> don't yet implement sub group capture.  I think we could add that at some 
> point, but it's a bit tricky.
> This doesn't even have group=-1 support (like String.split) ... I think if we 
> did that we should maybe name it differently 
> ({{SimplePatternSplitTokenizer}}?).






[jira] [Updated] (LUCENE-7398) Nested Span Queries are buggy

2016-10-04 Thread Paul Elschot (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Elschot updated LUCENE-7398:
-
Attachment: LUCENE-7398.patch

Patch of 4 Oct 2016.

This is the patch of 25 Sep 2016, but without the UNORDERED_STARTPOS case.

In a nutshell this:
- adds ORDERED_LOOKAHEAD, 
- is backward compatible,
- tries to document the limitations of the matching methods for SpanNearQuery.


> Nested Span Queries are buggy
> -
>
> Key: LUCENE-7398
> URL: https://issues.apache.org/jira/browse/LUCENE-7398
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 5.5, 6.x
>Reporter: Christoph Goller
>Assignee: Alan Woodward
>Priority: Critical
> Attachments: LUCENE-7398-20160814.patch, LUCENE-7398-20160924.patch, 
> LUCENE-7398-20160925.patch, LUCENE-7398.patch, LUCENE-7398.patch, 
> LUCENE-7398.patch, TestSpanCollection.java
>
>
> Example for a nested SpanQuery that is not working:
> Document: Human Genome Organization , HUGO , is trying to coordinate gene 
> mapping research worldwide.
> Query: spanNear([body:coordinate, spanOr([spanNear([body:gene, body:mapping], 
> 0, true), body:gene]), body:research], 0, true)
> The query should match "coordinate gene mapping research" as well as 
> "coordinate gene research". It does not match  "coordinate gene mapping 
> research" with Lucene 5.5 or 6.1, it did however match with Lucene 4.10.4. It 
> probably stopped working with the changes on SpanQueries in 5.3. I will 
> attach a unit test that shows the problem.
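The expected behavior can be modelled without Lucene: an in-order, zero-slop match where the middle slot accepts either "gene mapping" or just "gene". This is a toy rendering of the spanNear/spanOr query above, useful only to pin down which strings *should* match; it is not the SpanQuery implementation.

```java
import java.util.Arrays;
import java.util.List;

public class ToyNear {
    /**
     * True if the document contains "coordinate", then either "gene mapping"
     * or "gene", then "research", all adjacent and in order -- mimicking
     * spanNear([coordinate, spanOr([gene mapping, gene]), research], 0, true).
     */
    public static boolean matches(String doc) {
        List<String> tokens = Arrays.asList(doc.toLowerCase().split("\\W+"));
        for (int i = 0; i + 2 < tokens.size(); i++) {
            if (!tokens.get(i).equals("coordinate")) continue;
            // alternative 1: coordinate gene mapping research
            if (i + 3 < tokens.size()
                    && tokens.get(i + 1).equals("gene")
                    && tokens.get(i + 2).equals("mapping")
                    && tokens.get(i + 3).equals("research")) return true;
            // alternative 2: coordinate gene research
            if (tokens.get(i + 1).equals("gene")
                    && tokens.get(i + 2).equals("research")) return true;
        }
        return false;
    }
}
```

Under this model the example document matches via alternative 1, which is exactly the case the report says stopped matching in Lucene 5.5/6.1.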






[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 431 - Failure!

2016-10-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/431/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 53590 lines...]
-ecj-javadoc-lint-src:
[mkdir] Created dir: /var/tmp/ecj1427533562
 [ecj-lint] Compiling 232 source files to /var/tmp/ecj1427533562
 [ecj-lint] --
 [ecj-lint] 1. ERROR in 
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/lucene/queryparser/src/java/org/apache/lucene/queryparser/classic/MultiFieldQueryParser.java
 (at line 30)
 [ecj-lint] import org.apache.lucene.search.TermQuery;
 [ecj-lint]^^
 [ecj-lint] The import org.apache.lucene.search.TermQuery is never used
 [ecj-lint] --
 [ecj-lint] 1 problem (1 error)

BUILD FAILED
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/build.xml:763: The 
following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/build.xml:101: The 
following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/lucene/build.xml:204: 
The following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/lucene/common-build.xml:2177:
 The following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/lucene/common-build.xml:1992:
 The following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/lucene/common-build.xml:2031:
 Compile failed; see the compiler error output for details.

Total time: 81 minutes 3 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




Re: [DISCUSS] JIRA maintenance and response times

2016-10-04 Thread Jeff Wartes
I’m not a committer, but Solr is the first major open source project I’ve 
followed closely enough to get a feel for the actual community. The issues 
coming up in this thread have been rattling around in my head for at least the 
last year, and I’m absolutely thrilled to see this conversation. I nearly 
brought it up myself on a couple of occasions, but I wasn’t sure how much of this 
was common to all open source (volunteer-driven) projects.

I’ve filed Solr issues, some with patches, some of which have been accepted. 
Some went a different way than I expected, which is fine, but unfortunately 
some also just rot, with the chances of them being useful going down every day. 
I’d rather get a No.

There have been a few cases where I’ve worked on a patch after discussing and 
getting a favorable opinion from a committer, but even that doesn’t seem to 
offer any assurance that the work will ever be reviewed.

And frankly, yes, this affects how I deal with Solr issues. Among other things, 
it discourages me from contributing more work until I see attention to the 
stuff I’ve already provided. 

Anything to help call out issues that need attention would be greatly 
appreciated, I think. I have a suspicion that the Jira-notification-firehose is 
the most common notification mechanism, and people generally just look at 
whatever came out most recently there. Meaning, if something blew past 
unnoticed, it’s gone forever.



On 9/29/16, 11:32 AM, "Erick Erickson"  wrote:

Big +1 to this statement:

***
To me, the most urgent aspect of the problem is that Bugs are not
getting verified and fixed as soon as possible, and non-committers
(particularly) who take the time to create a patch for an improvement
are not seeing their efforts acknowledged, let alone reviewed or
committed


This hits the nail on the head IMO. I wonder how many potential
committers we've lost through inaction? Yonik's line about "you
get to be a committer by acting like a committer" comes to mind.
We have people "acting like committers" by submitting
patches and the like, and then we don't get back to them.

Of course we all have our day jobs, limited time and at least
some of us have these things called "lives".

I'm not sure how to resolve the issue either. It can take
significant time to even look at a patch and give any reasonable
feedback.

I'm glad for the conversation too, just wish I had a magic cure.

Erick


On Thu, Sep 29, 2016 at 10:35 AM, Cassandra Targett
 wrote:
> On Thu, Sep 29, 2016 at 7:01 AM, Stefan Matheis  wrote:
>
>> first idea about it: we could build a script or something that collects 
>> information about all new issues once a week and sends it to the dev list, so 
>> we get a quick overview of what happened last week w/o too much trouble?
>>
>
> +1 to this idea - awareness of the problem is the first step to being
> able to change it. And I agree it is a problem.
>
> It's enough of a problem that at Lucidworks we have added it to our
> priority list for the next year. Consequently, I've spent quite a bit
> of time looking at old issues in the past couple of months.
>
> To me, the most urgent aspect of the problem is that Bugs are not
> getting verified and fixed as soon as possible, and non-committers
> (particularly) who take the time to create a patch for an improvement
> are not seeing their efforts acknowledged, let alone reviewed or
> committed. I think this causes more bad impressions than someone's
> good idea for a new feature that doesn't get implemented. (BTW, Bugs
> alone make up 44% of all issues older than 6 months; Improvements are
> another 38% of old issues.)
>
> I fear a 7-day respond-or-close policy would frustrate people more.
> Users would see their issues now closed instead of just ignored, and
> if it gets a +1 from someone to stay open, it can still sit for the
> next 5 years the same way as today. We need to take that idea a step
> further.
>
> What would I suggest instead? Not sure. One very small suggestion is
> to add to Stefan's idea and send out a weekly mail about age of issues
> - # of issues over 6 months, % increase/decrease, # of bugs with no
> action in X days, # of improvements with patches that have no action
> in X days.
>
> Another idea is to have some kind of "parked" state in JIRA - like,
> not Closed but not Open either. I'm not convinced that won't add to
> the noise, but it might at least give us a better sense for ideas we
> just haven't gotten to and issues we haven't really looked at yet.
>
> Thanks for bringing this up, Jan. It's a necessary conversation to have.
>
> 
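
Stefan's script idea, combined with the age metrics Cassandra lists, could be
sketched roughly like this (a hedged sketch: the field names and thresholds are
illustrative, and in practice the issue data would come from JIRA's REST search
endpoint, e.g. /rest/api/2/search with a JQL filter on the project):

```python
from datetime import datetime, timedelta

def summarize_issue_ages(issues, now=None, stale_days=180):
    """Bucket issues by age and activity for a weekly digest mail.

    `issues` is a list of dicts with 'key', 'type', 'created' and
    'updated' ISO dates; in practice this data would be fetched from
    JIRA's REST search endpoint (field names here are illustrative).
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=stale_days)
    # Issues older than the cutoff (e.g. 6 months), per Cassandra's stats.
    old = [i for i in issues if datetime.fromisoformat(i["created"]) < cutoff]
    # Issues with no action since the cutoff ("# of bugs with no action in X days").
    stale = [i for i in issues if datetime.fromisoformat(i["updated"]) < cutoff]
    by_type = {}
    for i in old:
        by_type[i["type"]] = by_type.get(i["type"], 0) + 1
    return {
        "total": len(issues),
        "older_than_cutoff": len(old),
        "no_action_since_cutoff": len(stale),
        "old_by_type": by_type,
    }
```

The resulting dict could be formatted into the weekly dev-list mail, with
week-over-week percentage changes computed from the previous run's numbers.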

[jira] [Commented] (LUCENE-7472) MultiFieldQueryParser.getFieldQuery() drops queries that are neither BooleanQuery nor TermQuery

2016-10-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546589#comment-15546589
 ] 

ASF subversion and git services commented on LUCENE-7472:
-

Commit f9e915b3dac62b101ae7b4be343dbf918ccd0389 in lucene-solr's branch 
refs/heads/branch_6x from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f9e915b ]

LUCENE-7472: remove unused import


> MultiFieldQueryParser.getFieldQuery() drops queries that are neither 
> BooleanQuery nor TermQuery 
> 
>
> Key: LUCENE-7472
> URL: https://issues.apache.org/jira/browse/LUCENE-7472
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Steve Rowe
>Assignee: Steve Rowe
> Fix For: master (7.0), 6.3, 6.2.2
>
> Attachments: LUCENE-7472.patch
>
>
> From 
> [http://mail-archives.apache.org/mod_mbox/lucene-java-user/201609.mbox/%3c944985a6ac27425681bd27abe9d90...@ska-wn-e132.ptvag.ptv.de%3e],
>  Oliver Kaleske reports:
> {quote}
> Hi,
> in updating Lucene from 6.1.0 to 6.2.0 I came across the following:
> We have a subclass of MultiFieldQueryParser (MFQP) for creating a custom type 
> of Query, which calls getFieldQuery() on its base class (MFQP).
> For each of its search fields, this method has a Query created by calling 
> getFieldQuery() on QueryParserBase.
> Ultimately, we wind up in QueryBuilder's createFieldQuery() method, which 
> depending on the number of tokens (etc.) decides what type of Query to 
> return: a TermQuery, BooleanQuery, PhraseQuery, or MultiPhraseQuery.
> Back in MFQP.getFieldQuery(), a variable maxTerms is determined depending on 
> the type of Query returned: for a TermQuery or a BooleanQuery, its value will 
> in general be nonzero, clauses are created, and a non-null Query is returned.
> However, other Query subclasses result in maxTerms=0, an empty list of 
> clauses, and finally null is returned.
> To me, this seems like a bug, but I might as well be missing something. The 
> comment "// happens for stopwords" on the return null statement, however, 
> seems to suggest that Query types other than TermQuery and BooleanQuery were 
> not considered properly here.
> I should point out that our custom MFQP subclass so far does some rather 
> unsophisticated tokenization before calling getFieldQuery() on each token, so 
> characters like '*' may still slip through. So perhaps with proper 
> tokenization, it is guaranteed that only TermQuery and BooleanQuery can come 
> out of the chain of getFieldQuery() calls, and not handling 
> (Multi)PhraseQuery in MFQP.getFieldQuery() can never cause trouble?
> The code in MFQP.getFieldQuery dates back to
> LUCENE-2605: Add classic QueryParser option setSplitOnWhitespace() to control 
> whether to split on whitespace prior to text analysis.  Default behavior 
> remains unchanged: split-on-whitespace=true.
> (06 Jul 2016), when it was substantially expanded.
> Best regards,
> Oliver
> {quote}
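
Oliver's reading of the code can be illustrated with a toy model of the
dispatch he describes in MFQP.getFieldQuery() (Python stand-ins for the Java
query classes; this is a simplified sketch of the reported logic, not Lucene's
actual implementation):

```python
class TermQuery:
    pass

class BooleanQuery:
    def __init__(self, clauses):
        self.clauses = clauses

class PhraseQuery:
    def __init__(self, terms):
        self.terms = terms

def get_field_query(sub_query):
    """Toy model of the maxTerms dispatch: only TermQuery and
    BooleanQuery yield a nonzero maxTerms, so any other Query type
    falls through to maxTerms=0 and the query is silently dropped."""
    if isinstance(sub_query, TermQuery):
        max_terms = 1
    elif isinstance(sub_query, BooleanQuery):
        max_terms = len(sub_query.clauses)
    else:
        max_terms = 0  # PhraseQuery / MultiPhraseQuery land here
    if max_terms == 0:
        return None    # "// happens for stopwords" -- query is dropped
    return sub_query
```

Under this model a PhraseQuery produced for a multi-token field is returned as
null, which is exactly the behavior the report calls out as a likely bug.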



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Commented] (LUCENE-7472) MultiFieldQueryParser.getFieldQuery() drops queries that are neither BooleanQuery nor TermQuery

2016-10-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546588#comment-15546588
 ] 

ASF subversion and git services commented on LUCENE-7472:
-

Commit 4e7c6141a2afaff454cfc364dd02c8abb838c218 in lucene-solr's branch 
refs/heads/branch_6_2 from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4e7c614 ]

LUCENE-7472: remove unused import


> MultiFieldQueryParser.getFieldQuery() drops queries that are neither 
> BooleanQuery nor TermQuery 
> 
>
> Key: LUCENE-7472
> URL: https://issues.apache.org/jira/browse/LUCENE-7472
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Steve Rowe
>Assignee: Steve Rowe
> Fix For: master (7.0), 6.3, 6.2.2
>
> Attachments: LUCENE-7472.patch
>
>






[jira] [Updated] (SOLR-9182) Test OOMs when ssl + clientAuth

2016-10-04 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-9182:
---
Attachment: SOLR-9182.patch

[^SOLR-9182.patch] reproduces the "leak". 

java.lang.AssertionError: oh \[leased: 0; pending: 0; available: 5000; max: 
1] expected:<1> but was:<5000>
at ...
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.client.solrj.impl.HttpSolrClientSSLAuthConPoolTest.testPoolSize(HttpSolrClientSSLAuthConPoolTest.java:52)

It's worth adding more (probably concurrent) testing and hitting a few endpoints. 
It's also questionable when to evict that context from the cache; right now that 
happens on any exception, but there is no test coverage. Might "http routes" mess 
something up with those contexts and Principals?
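
A toy model of the pool accounting behind that assertion (illustrative Python,
not the Apache HttpClient pool the real test exercises) shows one way an
"available: 5000" figure can arise: if idle connections are keyed by their SSL
context and a fresh context is created per request, reuse never matches and
idle connections pile up.

```python
class ConnectionPool:
    """Toy pool that keys idle connections by (route, ssl_context)."""
    def __init__(self):
        self.available = []  # idle (route, ctx) connections

    def lease(self, route, ctx):
        key = (route, ctx)
        if key in self.available:
            self.available.remove(key)  # reuse an idle connection
        return key                      # otherwise: a brand-new connection

    def release(self, conn):
        self.available.append(conn)

def run_requests(pool, n, fresh_context_per_request):
    """Drive n request/release cycles; return idle-connection count."""
    for _ in range(n):
        ctx = object() if fresh_context_per_request else "shared-ctx"
        conn = pool.lease("https://127.0.0.1:8983", ctx)
        pool.release(conn)
    return len(pool.available)
```

With a shared context the pool settles at one idle connection per route; with a
per-request context it grows without bound, which is the shape of "leak" the
attached test asserts against.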

> Test OOMs when ssl + clientAuth
> ---
>
> Key: SOLR-9182
> URL: https://issues.apache.org/jira/browse/SOLR-9182
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
> Attachments: DistributedFacetPivotLongTailTest-heapintro.png, 
> SOLR-9182.patch, SOLR-9182.patch, SOLR-9182.patch
>
>
> the combination of SOLR-9028 fixing SSLTestConfig to actually pay attention 
> to clientAuth setting, and SOLR-9107 increasing the odds of ssl+clientAuth 
> being tested has helped surface some more tests that seem to fairly 
> consistently trigger OOM when running with SSL+clientAuth.
> I'm not sure if there is some underlying memory leak somewhere in the SSL 
> code we're using, or if this is just a factor of increased request/response 
> size when using (double) encrypted requests, but for now I'm just focusing on 
> opening a tracking issue for them and suppressing SSL in these cases with a 
> link here to clarify *why* we're suppressing SSL.






[jira] [Commented] (LUCENE-7472) MultiFieldQueryParser.getFieldQuery() drops queries that are neither BooleanQuery nor TermQuery

2016-10-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546562#comment-15546562
 ] 

ASF subversion and git services commented on LUCENE-7472:
-

Commit 09e03c47c2c1842cbbd2b35bb698248737ba330d in lucene-solr's branch 
refs/heads/branch_6x from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=09e03c4 ]

LUCENE-7472: remove unused import


> MultiFieldQueryParser.getFieldQuery() drops queries that are neither 
> BooleanQuery nor TermQuery 
> 
>
> Key: LUCENE-7472
> URL: https://issues.apache.org/jira/browse/LUCENE-7472
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Steve Rowe
>Assignee: Steve Rowe
> Fix For: master (7.0), 6.3, 6.2.2
>
> Attachments: LUCENE-7472.patch
>
>






[jira] [Commented] (LUCENE-7472) MultiFieldQueryParser.getFieldQuery() drops queries that are neither BooleanQuery nor TermQuery

2016-10-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546561#comment-15546561
 ] 

ASF subversion and git services commented on LUCENE-7472:
-

Commit 64ed2b6492f9d9218ab26550127c5c206f3e25b1 in lucene-solr's branch 
refs/heads/branch_6_2 from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=64ed2b6 ]

LUCENE-7472: remove unused import


> MultiFieldQueryParser.getFieldQuery() drops queries that are neither 
> BooleanQuery nor TermQuery 
> 
>
> Key: LUCENE-7472
> URL: https://issues.apache.org/jira/browse/LUCENE-7472
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Steve Rowe
>Assignee: Steve Rowe
> Fix For: master (7.0), 6.3, 6.2.2
>
> Attachments: LUCENE-7472.patch
>
>






[jira] [Commented] (LUCENE-7438) UnifiedHighlighter

2016-10-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546521#comment-15546521
 ] 

ASF subversion and git services commented on LUCENE-7438:
-

Commit 722e82712435ecf46c9868137d885484152f749b in lucene-solr's branch 
refs/heads/master from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=722e827 ]

LUCENE-7438: New UnifiedHighlighter


> UnifiedHighlighter
> --
>
> Key: LUCENE-7438
> URL: https://issues.apache.org/jira/browse/LUCENE-7438
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/highlighter
>Affects Versions: 6.2
>Reporter: Timothy M. Rodriguez
>Assignee: David Smiley
> Attachments: LUCENE-7438.patch, LUCENE_7438_UH_benchmark.patch, 
> LUCENE_7438_UH_small_changes.patch
>
>
> The UnifiedHighlighter is an evolution of the PostingsHighlighter that is 
> able to highlight using offsets in either postings, term vectors, or from 
> analysis (a TokenStream). Lucene’s existing highlighters are mostly 
> demarcated along offset source lines, whereas here it is unified -- hence 
> this proposed name. In this highlighter, the offset source strategy is 
> separated from the core highlighting functionality. The UnifiedHighlighter 
> further improves on the PostingsHighlighter’s design by supporting accurate 
> phrase highlighting using an approach similar to the standard highlighter’s 
> WeightedSpanTermExtractor. The next major improvement is a hybrid offset 
> source strategy that utilizes postings and “light” term vectors (i.e. just the 
> terms) for highlighting multi-term queries (wildcards) without resorting to 
> analysis. Phrase highlighting and wildcard highlighting can both be disabled 
> if you’d rather highlight a little faster albeit not as accurately reflecting 
> the query.
> We’ve benchmarked an earlier version of this highlighter comparing it to the 
> other highlighters and the results were exciting! It’s tempting to share 
> those results but it’s definitely due for another benchmark, so we’ll work on 
> that. Performance was the main motivator for creating the UnifiedHighlighter, 
> as the standard Highlighter (the only one meeting Bloomberg Law’s accuracy 
> requirements) wasn’t fast enough, even with term vectors along with several 
> improvements we contributed back, and even after we forked it to highlight in 
> multiple threads.






[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_102) - Build # 17970 - Unstable!

2016-10-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17970/
Java: 64bit/jdk1.8.0_102 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDynamicLoading

Error Message:
Could not get expected value  'X val' for path 'x' full output: {   
"responseHeader":{ "status":0, "QTime":0},   "params":{"wt":"json"},   
"context":{ "webapp":"/cn_j/d", "path":"/test1", 
"httpMethod":"GET"},   
"class":"org.apache.solr.core.BlobStoreTestRequestHandler",   "x":null},  from 
server:  null

Stack Trace:
java.lang.AssertionError: Could not get expected value  'X val' for path 'x' 
full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "params":{"wt":"json"},
  "context":{
"webapp":"/cn_j/d",
"path":"/test1",
"httpMethod":"GET"},
  "class":"org.apache.solr.core.BlobStoreTestRequestHandler",
  "x":null},  from server:  null
at 
__randomizedtesting.SeedInfo.seed([3CF848B880947342:E4B565EF7749D6E2]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:535)
at 
org.apache.solr.core.TestDynamicLoading.testDynamicLoading(TestDynamicLoading.java:232)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-7826) Permission issues when creating cores with bin/solr as root user

2016-10-04 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546431#comment-15546431
 ] 

Jan Høydahl commented on SOLR-7826:
---

I think I'll leave things as they are for now. Same-user policy for 7.0 sounds 
ok. Perhaps see where the breadcrumbs effort in SOLR-9590 leads us, but that 
will not help for manual installs anyway.

One idea could be to determine whether Solr has been started before, e.g. by 
looking for a file that is always created by the Solr process, such as 
$SOLR_LOGS_DIR/solr.log or $SOLR_HOME//data/index, and require 
*that* user. If Solr has not been started before, let the start command succeed 
as any user, but first check that the user has write access to both SOLR_HOME 
and SOLR_LOGS_DIR?
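
A minimal sketch of that pre-flight check (illustrative Python; the real
implementation would live in the bin/solr shell script, and the marker file is
the one named above):

```python
import os
import pwd

def may_start_solr(user, solr_home, solr_logs_dir):
    """Pre-flight check for `bin/solr start`, per the idea above:
    if Solr has run before (solr.log exists), require the user that
    owns that file; on a first start, allow any user who can write
    to both SOLR_HOME and SOLR_LOGS_DIR."""
    marker = os.path.join(solr_logs_dir, "solr.log")
    if os.path.exists(marker):
        owner = pwd.getpwuid(os.stat(marker).st_uid).pw_name
        return user == owner
    return all(os.access(d, os.W_OK) for d in (solr_home, solr_logs_dir))
```

This would fail fast in the reported scenario: root starts a fresh install fine,
but a second start by root is rejected once solr.log is owned by the service user.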

> Permission issues when creating cores with bin/solr as root user
> 
>
> Key: SOLR-7826
> URL: https://issues.apache.org/jira/browse/SOLR-7826
> Project: Solr
>  Issue Type: Improvement
>Reporter: Shawn Heisey
>Assignee: Jan Høydahl
>Priority: Minor
>  Labels: newdev
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-7826.patch, SOLR-7826.patch, 
> SOLR-7826_sameuser.patch
>
>
> Ran into an interesting situation on IRC today.
> Solr has been installed as a service using the shell script 
> install_solr_service.sh ... so it is running as an unprivileged user.
> User is running "bin/solr create" as root.  This causes permission problems, 
> because the script creates the core's instanceDir with root ownership, then 
> when Solr is instructed to actually create the core, it cannot create the 
> dataDir.
> Enhancement idea:  When the install script is used, leave breadcrumbs 
> somewhere so that the "create core" section of the main script can find it 
> and su to the user specified during install.






[GitHub] lucene-solr pull request #89: Branch 6 2

2016-10-04 Thread selvarajy
Github user selvarajy closed the pull request at:

https://github.com/apache/lucene-solr/pull/89


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




[GitHub] lucene-solr pull request #89: Branch 6 2

2016-10-04 Thread selvarajy
GitHub user selvarajy opened a pull request:

https://github.com/apache/lucene-solr/pull/89

Branch 6 2



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/lucene-solr branch_6_2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/89.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #89









Re: [JENKINS] Lucene-Solr-SmokeRelease-master - Build # 591 - Failure

2016-10-04 Thread Michael McCandless
Ugh, thanks Adrien.

I had fixed this after "git cherry-pick" but forgot to "git add" my
fixes before pushing!!  Sigh.

Mike McCandless

http://blog.mikemccandless.com


On Tue, Oct 4, 2016 at 1:15 PM, Adrien Grand  wrote:
> I pushed a fix, there were issues with unused imports due to the
> queryparsing changes and doc values tests that were using the old
> Iterable-based API.
>
> Le mar. 4 oct. 2016 à 18:03, Apache Jenkins Server
>  a écrit :
>>
>> Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/591/
>>
>> No tests ran.
>>
>> Build Log:
>> [...truncated 264 lines...]
>> [javac] Compiling 441 source files to
>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/core/classes/test
>> [javac]
>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/core/src/test/org/apache/lucene/codecs/lucene54/TestLucene54DocValuesFormat.java:474:
>> warning: [cast] redundant cast to int
>> [javac] final int target = TestUtil.nextInt(random(), 0, (int)
>> maxDoc);
>> [javac]  ^
>> [javac]
>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/core/src/test/org/apache/lucene/codecs/lucene70/TestLucene70DocValuesFormat.java:475:
>> warning: [cast] redundant cast to int
>> [javac] final int target = TestUtil.nextInt(random(), 0, (int)
>> maxDoc);
>> [javac]  ^
>> [javac]
>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/core/src/test/org/apache/lucene/codecs/perfield/TestPerFieldDocValuesFormat.java:205:
>> error: <anonymous org.apache.lucene.codecs.perfield.TestPerFieldDocValuesFormat$MergeRecordingDocValueFormatWrapper$1>
>> is not abstract and does not override abstract method
>> addSortedSetField(FieldInfo,DocValuesProducer) in DocValuesConsumer
>> [javac]   return new DocValuesConsumer() {
>> [javac]  ^
>> [javac]
>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/core/src/test/org/apache/lucene/codecs/perfield/TestPerFieldDocValuesFormat.java:206:
>> error: method does not override or implement a method from a supertype
>> [javac] @Override
>> [javac] ^
>> [javac]
>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/core/src/test/org/apache/lucene/codecs/perfield/TestPerFieldDocValuesFormat.java:208:
>> error: incompatible types: Iterable cannot be converted to
>> DocValuesProducer
>> [javac]   consumer.addNumericField(field, values);
>> [javac]   ^
>> [javac]
>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/core/src/test/org/apache/lucene/codecs/perfield/TestPerFieldDocValuesFormat.java:211:
>> error: method does not override or implement a method from a supertype
>> [javac] @Override
>> [javac] ^
>> [javac]
>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/core/src/test/org/apache/lucene/codecs/perfield/TestPerFieldDocValuesFormat.java:213:
>> error: incompatible types: Iterable cannot be converted to
>> DocValuesProducer
>> [javac]   consumer.addBinaryField(field, values);
>> [javac]  ^
>> [javac]
>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/core/src/test/org/apache/lucene/codecs/perfield/TestPerFieldDocValuesFormat.java:216:
>> error: method does not override or implement a method from a supertype
>> [javac] @Override
>> [javac] ^
>> [javac]
>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/core/src/test/org/apache/lucene/codecs/perfield/TestPerFieldDocValuesFormat.java:218:
>> error: method addSortedField in class DocValuesConsumer cannot be applied to
>> given types;
>> [javac]   consumer.addSortedField(field, values, docToOrd);
>> [javac]   ^
>> [javac]   required: FieldInfo,DocValuesProducer
>> [javac]   found: FieldInfo,Iterable,Iterable
>> [javac]   reason: actual and formal argument lists differ in length
>> [javac]
>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/core/src/test/org/apache/lucene/codecs/perfield/TestPerFieldDocValuesFormat.java:221:
>> error: method does not override or implement a method from a supertype
>> [javac] @Override
>> [javac] ^
>> [javac]
>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/core/src/test/org/apache/lucene/codecs/perfield/TestPerFieldDocValuesFormat.java:223:
>> error: method addSortedNumericField in class DocValuesConsumer cannot be
>> applied to given types;
>> [javac]   

[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 451 - Failure!

2016-10-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/451/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 53602 lines...]
-ecj-javadoc-lint-src:
[mkdir] Created dir: 
/var/folders/qg/h2dfw5s161s51l2bn79mrb7rgn/T/ecj2078272798
 [ecj-lint] Compiling 232 source files to 
/var/folders/qg/h2dfw5s161s51l2bn79mrb7rgn/T/ecj2078272798
 [ecj-lint] --
 [ecj-lint] 1. ERROR in 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/lucene/queryparser/src/java/org/apache/lucene/queryparser/classic/MultiFieldQueryParser.java
 (at line 30)
 [ecj-lint] import org.apache.lucene.search.TermQuery;
 [ecj-lint]^^
 [ecj-lint] The import org.apache.lucene.search.TermQuery is never used
 [ecj-lint] --
 [ecj-lint] 1 problem (1 error)

BUILD FAILED
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/build.xml:763: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/build.xml:101: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/lucene/build.xml:204: The 
following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/lucene/common-build.xml:2177: 
The following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/lucene/common-build.xml:1992: 
The following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/lucene/common-build.xml:2031: 
Compile failed; see the compiler error output for details.

Total time: 116 minutes 27 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Comment Edited] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-10-04 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513607#comment-15513607
 ] 

Kevin Risden edited comment on SOLR-8593 at 10/4/16 6:31 PM:
-

Adding some resources that may be helpful:
* http://www.slideshare.net/HadoopSummit/costbased-query-optimization
* 
https://medium.com/@mpathirage/query-planning-with-apache-calcite-part-1-fe957b011c36#.ywd9ouxmv
* http://www.slideshare.net/JordanHalterman/introduction-to-apache-calcite


was (Author: risdenk):
Adding some resources that may be helpful:
* http://www.slideshare.net/HadoopSummit/costbased-query-optimization
* 
https://medium.com/@mpathirage/query-planning-with-apache-calcite-part-1-fe957b011c36#.ywd9ouxmv

> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>
> The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle tested cost based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[JENKINS] Lucene-Solr-Tests-6.x - Build # 464 - Still Failing

2016-10-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/464/

1 tests failed.
FAILED:  
org.apache.solr.update.SoftAutoCommitTest.testSoftAndHardCommitMaxTimeRapidAdds

Error Message:
1: hard occurred too fast: 1053 < (1200 * 1)

Stack Trace:
java.lang.AssertionError: 1: hard occurred too fast: 1053 < (1200 * 1)
at 
__randomizedtesting.SeedInfo.seed([6AEA7D976DA6D6C7:36FFD3AE862497BF]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.update.SoftAutoCommitTest.testSoftAndHardCommitMaxTimeRapidAdds(SoftAutoCommitTest.java:344)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11814 lines...]
   [junit4] Suite: org.apache.solr.update.SoftAutoCommitTest
   [junit4]   2> Creating dataDir: 

[jira] [Commented] (SOLR-3561) Error during deletion of shard/core

2016-10-04 Thread Per Steffensen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15546190#comment-15546190
 ] 

Per Steffensen commented on SOLR-3561:
--

I originally created the ticket. I am not against closing it. I do not know if 
the problem still exists (in some shape), but a lot of things have changed 
since then, so someone will have to bring up the problem again if it is still a 
problem.

> Error during deletion of shard/core
> ---
>
> Key: SOLR-3561
> URL: https://issues.apache.org/jira/browse/SOLR-3561
> Project: Solr
>  Issue Type: Bug
>  Components: multicore, replication (java), SolrCloud
>Affects Versions: 4.0-ALPHA
> Environment: Solr trunk (4.0-SNAPSHOT) from 29/2-2012
>Reporter: Per Steffensen
>Assignee: Mark Miller
> Fix For: 4.9, 6.0
>
>
> Running several Solr servers in Cloud-cluster (zkHost set on the Solr 
> servers).
> Several collections with several slices and one replica for each slice (each 
> slice has two shards)
> Basically we want to let our system delete an entire collection. We do this by 
> trying to delete each and every shard under the collection. Each shard is 
> deleted one by one, by doing CoreAdmin-UNLOAD-requests against the relevant 
> Solr
> {code}
> CoreAdminRequest request = new CoreAdminRequest();
> request.setAction(CoreAdminAction.UNLOAD);
> request.setCoreName(shardName);
> CoreAdminResponse resp = request.process(new CommonsHttpSolrServer(solrUrl));
> {code}
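As an aside for readers on current SolrJ: CommonsHttpSolrServer was removed long ago. A roughly equivalent sketch with the modern client is below; the base URL and core name are placeholders, not values from this cluster.

```java
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.CoreAdminRequest;
import org.apache.solr.client.solrj.response.CoreAdminResponse;

public class UnloadShard {
  public static void main(String[] args) throws Exception {
    // Placeholder URL and core name; adjust to the target cluster.
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      // Unload(false) leaves the index directory on disk.
      CoreAdminRequest.Unload unload = new CoreAdminRequest.Unload(false);
      unload.setCoreName("collection_2012_04_slice1_shard1");
      CoreAdminResponse resp = unload.process(client);
      System.out.println("unload status: " + resp.getStatus());
    }
  }
}
```

This requires the SolrJ jars on the classpath and a running Solr to talk to.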
> The delete/unload succeeds, but in like 10% of the cases we get errors on 
> involved Solr servers, right around the time where shard/cores are deleted, 
> and we end up in a situation where ZK still claims (forever) that the deleted 
> shard is still present and active.
> From here the issue is more easily explained by a concrete example:
> - 7 Solr servers involved
> - Several collections, among others one called "collection_2012_04", consisting of 28 
> slices, 56 shards (remember 1 replica for each slice) named 
> "collection_2012_04_sliceX_shardY" for all pairs in {X:1..28}x{Y:1,2}
> - Each Solr server running 8 shards, e.g. Solr server #1 is running shard 
> "collection_2012_04_slice1_shard1" and Solr server #7 is running shard 
> "collection_2012_04_slice1_shard2" belonging to the same slice "slice1".
> When we decide to delete the collection "collection_2012_04" we go through 
> all 56 shards and delete/unload them one-by-one - including 
> "collection_2012_04_slice1_shard1" and "collection_2012_04_slice1_shard2". At 
> some point during or shortly after all this deletion we see the following 
> exceptions in solr.log on Solr server #7
> {code}
> Aug 1, 2012 12:02:50 AM org.apache.solr.common.SolrException log
> SEVERE: Error while trying to recover:org.apache.solr.common.SolrException: 
> core not found:collection_2012_04_slice1_shard1
> request: 
> http://solr_server_1:8983/solr/admin/cores?action=PREPRECOVERY=collection_2012_04_slice1_shard1=solr_server_7%3A8983_solr=solr_server_7%3A8983_solr_collection_2012_04_slice1_shard2=recovering=true=6000=javabin=2
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> at 
> org.apache.solr.common.SolrExceptionPropagationHelper.decodeFromMsg(SolrExceptionPropagationHelper.java:29)
> at 
> org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:445)
> at 
> org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:264)
> at 
> org.apache.solr.cloud.RecoveryStrategy.sendPrepRecoveryCmd(RecoveryStrategy.java:188)
> at 
> org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:285)
> at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:206)
> Aug 1, 2012 12:02:50 AM org.apache.solr.common.SolrException log
> SEVERE: Recovery failed - trying again...
> Aug 1, 2012 12:02:51 AM org.apache.solr.cloud.LeaderElector$1 process
> WARNING:
> java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
> at java.util.ArrayList.RangeCheck(ArrayList.java:547)
> at java.util.ArrayList.get(ArrayList.java:322)
> at org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:96)
> at org.apache.solr.cloud.LeaderElector.access$000(LeaderElector.java:57)
> at org.apache.solr.cloud.LeaderElector$1.process(LeaderElector.java:121)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:531)
> at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:507)
> Aug 1, 2012 12:02:51 AM org.apache.solr.cloud.LeaderElector$1 process
> {code}
> Im 

[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_102) - Build # 1871 - Still Failing!

2016-10-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1871/
Java: 64bit/jdk1.8.0_102 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.spelling.SpellCheckCollatorTest.testEstimatedHitCounts

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([BA25071DCD6D1B6F:8B9EB92868520BBF]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:813)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:780)
at 
org.apache.solr.spelling.SpellCheckCollatorTest.testEstimatedHitCounts(SpellCheckCollatorTest.java:562)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//lst[@name='spellcheck']/lst[@name='collations']/lst[@name='collation']/int[@name='hits'
 and 6 <= . and . <= 10]
xml response was: 


[jira] [Created] (SOLR-9597) Add setReadOnly(String ...) to ConnectionImpl

2016-10-04 Thread Kevin Risden (JIRA)
Kevin Risden created SOLR-9597:
--

 Summary: Add setReadOnly(String ...) to ConnectionImpl
 Key: SOLR-9597
 URL: https://issues.apache.org/jira/browse/SOLR-9597
 Project: Solr
  Issue Type: Sub-task
  Components: SolrJ
Affects Versions: 6.2.1
Reporter: Kevin Risden
Assignee: Kevin Risden
Priority: Minor


When using the OpenLink ODBC-JDBC bridge on Windows, the bridge tries to call 
ConnectionImpl.setReadOnly(String ...). The JDBC spec only requires 
setReadOnly(boolean ...), so this call causes the bridge to fail on Windows.
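The spec side of this can be checked from the JDK alone. A small sketch (no Solr dependency; the class name is made up for illustration) confirming that java.sql.Connection declares only the boolean variant, which is all a spec-compliant ConnectionImpl must implement:

```java
import java.lang.reflect.Method;
import java.sql.Connection;
import java.util.Arrays;

public class ReadOnlySignatureCheck {
  public static void main(String[] args) throws Exception {
    // The JDBC spec declares setReadOnly(boolean) on java.sql.Connection.
    Method specMethod = Connection.class.getMethod("setReadOnly", boolean.class);
    System.out.println("spec method: " + specMethod);

    // There is no setReadOnly(String...) in the interface, which is why a
    // bridge calling it only works if the driver adds such an overload itself.
    boolean hasStringVariant = Arrays.stream(Connection.class.getMethods())
        .filter(m -> m.getName().equals("setReadOnly"))
        .anyMatch(m -> m.getParameterCount() == 1
            && m.getParameterTypes()[0] == String[].class);
    System.out.println("String... variant declared by the spec: " + hasStringVariant); // false
  }
}
```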







[jira] [Resolved] (LUCENE-7474) Improve doc values writers

2016-10-04 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-7474.
--
   Resolution: Fixed
Fix Version/s: master (7.0)

> Improve doc values writers
> --
>
> Key: LUCENE-7474
> URL: https://issues.apache.org/jira/browse/LUCENE-7474
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: master (7.0)
>
> Attachments: LUCENE-7474.patch
>
>
> One of the goals of the new iterator-based API is to better handle sparse 
> data. However, the current doc values writers still use a dense 
> representation, and some of them perform naive linear scans in the nextDoc 
> implementation.







Re: [JENKINS] Lucene-Solr-SmokeRelease-master - Build # 591 - Failure

2016-10-04 Thread Adrien Grand
I pushed a fix, there were issues with unused imports due to the
queryparsing changes and doc values tests that were using the old
Iterable-based API.

On Tue, Oct 4, 2016 at 18:03, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/591/
>
> No tests ran.
>
> Build Log:
> [...truncated 264 lines...]
> [javac] Compiling 441 source files to
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/core/classes/test
> [javac]
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/core/src/test/org/apache/lucene/codecs/lucene54/TestLucene54DocValuesFormat.java:474:
> warning: [cast] redundant cast to int
> [javac] final int target = TestUtil.nextInt(random(), 0, (int)
> maxDoc);
> [javac]  ^
> [javac]
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/core/src/test/org/apache/lucene/codecs/lucene70/TestLucene70DocValuesFormat.java:475:
> warning: [cast] redundant cast to int
> [javac] final int target = TestUtil.nextInt(random(), 0, (int)
> maxDoc);
> [javac]  ^
> [javac]
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/core/src/test/org/apache/lucene/codecs/perfield/TestPerFieldDocValuesFormat.java:205:
> error: <anonymous org.apache.lucene.codecs.perfield.TestPerFieldDocValuesFormat$MergeRecordingDocValueFormatWrapper$1>
> is not abstract and does not override abstract method
> addSortedSetField(FieldInfo,DocValuesProducer) in DocValuesConsumer
> [javac]   return new DocValuesConsumer() {
> [javac]  ^
> [javac]
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/core/src/test/org/apache/lucene/codecs/perfield/TestPerFieldDocValuesFormat.java:206:
> error: method does not override or implement a method from a supertype
> [javac] @Override
> [javac] ^
> [javac]
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/core/src/test/org/apache/lucene/codecs/perfield/TestPerFieldDocValuesFormat.java:208:
> error: incompatible types: Iterable cannot be converted to
> DocValuesProducer
> [javac]   consumer.addNumericField(field, values);
> [javac]   ^
> [javac]
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/core/src/test/org/apache/lucene/codecs/perfield/TestPerFieldDocValuesFormat.java:211:
> error: method does not override or implement a method from a supertype
> [javac] @Override
> [javac] ^
> [javac]
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/core/src/test/org/apache/lucene/codecs/perfield/TestPerFieldDocValuesFormat.java:213:
> error: incompatible types: Iterable cannot be converted to
> DocValuesProducer
> [javac]   consumer.addBinaryField(field, values);
> [javac]  ^
> [javac]
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/core/src/test/org/apache/lucene/codecs/perfield/TestPerFieldDocValuesFormat.java:216:
> error: method does not override or implement a method from a supertype
> [javac] @Override
> [javac] ^
> [javac]
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/core/src/test/org/apache/lucene/codecs/perfield/TestPerFieldDocValuesFormat.java:218:
> error: method addSortedField in class DocValuesConsumer cannot be applied
> to given types;
> [javac]   consumer.addSortedField(field, values, docToOrd);
> [javac]   ^
> [javac]   required: FieldInfo,DocValuesProducer
> [javac]   found: FieldInfo,Iterable,Iterable
> [javac]   reason: actual and formal argument lists differ in length
> [javac]
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/core/src/test/org/apache/lucene/codecs/perfield/TestPerFieldDocValuesFormat.java:221:
> error: method does not override or implement a method from a supertype
> [javac] @Override
> [javac] ^
> [javac]
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/core/src/test/org/apache/lucene/codecs/perfield/TestPerFieldDocValuesFormat.java:223:
> error: method addSortedNumericField in class DocValuesConsumer cannot be
> applied to given types;
> [javac]   consumer.addSortedNumericField(field,
> docToValueCount, values);
> [javac]   ^
> [javac]   required: FieldInfo,DocValuesProducer
> [javac]   found: FieldInfo,Iterable,Iterable
> [javac]   reason: actual and formal argument lists differ in length
> [javac]
> 

[jira] [Commented] (LUCENE-7474) Improve doc values writers

2016-10-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15545988#comment-15545988
 ] 

ASF subversion and git services commented on LUCENE-7474:
-

Commit d50cf97617c88ec75fd8f4482003623db08e625e in lucene-solr's branch 
refs/heads/master from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d50cf97 ]

LUCENE-7474: Doc values writers should have a sparse encoding.


> Improve doc values writers
> --
>
> Key: LUCENE-7474
> URL: https://issues.apache.org/jira/browse/LUCENE-7474
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7474.patch
>
>
> One of the goals of the new iterator-based API is to better handle sparse 
> data. However, the current doc values writers still use a dense 
> representation, and some of them perform naive linear scans in the nextDoc 
> implementation.







[jira] [Commented] (SOLR-7826) Permission issues when creating cores with bin/solr as root user

2016-10-04 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15545946#comment-15545946
 ] 

Hoss Man commented on SOLR-7826:


1. I love your new AssertTool code
2. ...

bq. But then should it not be allowed to create SOLR_HOME by hand as another 
user, and then make sure that the solr user has full access through its group 
memberships? Or equivalent ACL rights for Windows? Seems potentially more 
trappy than the root check...

That's a good point ... I feel like enforcing that the same user be used 
everywhere is the lesser of the evils -- but only if we had been doing that 
since day #1 in {{bin/solr}}.  If we start enforcing it now, that might screw 
people with existing installs like you describe.

I honestly don't know how i feel about this issue anymore.

Maybe we should just stick with "only root is special / prohibited" behavior 
for now (either using the code you already committed, or your new AssertTool 
code) and consider more restrictive "use the same user everywhere, but 
{{-force}} will let you use any user" type logic in 7.0?
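For context, a sketch of the non-root invocation this discussion assumes, under the install_solr_service.sh defaults (service user "solr", install under /opt/solr, data under /var/solr); paths and names are assumptions, adjust to the actual install:

```shell
# Run core creation as the service user rather than root, so the
# instanceDir and dataDir end up owned by the user Solr runs as.
sudo -u solr /opt/solr/bin/solr create -c mycore

# If a core was already created as root, ownership can be repaired:
sudo chown -R solr:solr /var/solr/data/mycore
```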

> Permission issues when creating cores with bin/solr as root user
> 
>
> Key: SOLR-7826
> URL: https://issues.apache.org/jira/browse/SOLR-7826
> Project: Solr
>  Issue Type: Improvement
>Reporter: Shawn Heisey
>Assignee: Jan Høydahl
>Priority: Minor
>  Labels: newdev
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-7826.patch, SOLR-7826.patch, 
> SOLR-7826_sameuser.patch
>
>
> Ran into an interesting situation on IRC today.
> Solr has been installed as a service using the shell script 
> install_solr_service.sh ... so it is running as an unprivileged user.
> User is running "bin/solr create" as root.  This causes permission problems, 
> because the script creates the core's instanceDir with root ownership, then 
> when Solr is instructed to actually create the core, it cannot create the 
> dataDir.
> Enhancement idea:  When the install script is used, leave breadcrumbs 
> somewhere so that the "create core" section of the main script can find it 
> and su to the user specified during install.







[jira] [Updated] (SOLR-9596) stopped working in Solr 6.2

2016-10-04 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-9596:
-
Description: 
As a result of changes introduced in Lucene 6.2 by LUCENE-7323, 
SimpleTextCodec's postings and doc values formats can only be used from 
SimpleTextCodec.  That means that Solr's default codecFactory 
SchemaCodecFactory, which enables per-field specification of postings and doc 
values formats by extending LuceneXXCodec to pull per-field specification from 
the schema, can't be used with SimpleText postings and doc values formats.

What Solr could instead do is provide a non-schema-aware SimpleTextCodecFactory.

  was:
As a result of changes introduced in Lucene 6.2 by LUCENE-7323, 
SimpleTextCodec's postings and doc values formats can only be used from 
SimpleTextCodec.  That means that Solr's default codecFactory 
SchemaCodecFactory, which enables per-field specification of postings and doc 
values formats by extending LuceneXXCodecFactory to pull per-field 
specification from the schema, can't be used with SimpleText postings and doc 
values formats.

What Solr could instead do is provide a non-schema-aware SimpleTextCodecFactory.
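A minimal sketch of what such a non-schema-aware factory could look like, assuming Solr's CodecFactory base class and Lucene's SimpleTextCodec; the actual patch attached to this issue may differ:

```java
import org.apache.lucene.codecs.Codec;
import org.apache.lucene.codecs.simpletext.SimpleTextCodec;
import org.apache.solr.core.CodecFactory;

/**
 * Sketch: a codec factory that always returns SimpleTextCodec,
 * sidestepping SchemaCodecFactory's per-field postings/docvalues lookup.
 */
public class SimpleTextCodecFactory extends CodecFactory {
  private final Codec codec = new SimpleTextCodec();

  @Override
  public Codec getCodec() {
    return codec;
  }
}
```

Because it never consults the schema, per-field format overrides are simply ignored, which matches the all-or-nothing way SimpleTextCodec is meant to be used.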


>  stopped working in 
> Solr 6.2
> ---
>
> Key: SOLR-9596
> URL: https://issues.apache.org/jira/browse/SOLR-9596
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Priority: Minor
> Attachments: SOLR-9596.patch
>
>
> As a result of changes introduced in Lucene 6.2 by LUCENE-7323, 
> SimpleTextCodec's postings and doc values formats can only be used from 
> SimpleTextCodec.  That means that Solr's default codecFactory 
> SchemaCodecFactory, which enables per-field specification of postings and doc 
> values formats by extending LuceneXXCodec to pull per-field specification 
> from the schema, can't be used with SimpleText postings and doc values 
> formats.
> What Solr could instead do is provide a non-schema-aware 
> SimpleTextCodecFactory.






[jira] [Updated] (SOLR-9596) stopped working in Solr 6.2

2016-10-04 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-9596:
-
Attachment: SOLR-9596.patch

Patch; contains a dead-simple SimpleTextCodecFactory for Solr.

>  stopped working in 
> Solr 6.2
> ---
>
> Key: SOLR-9596
> URL: https://issues.apache.org/jira/browse/SOLR-9596
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Priority: Minor
> Attachments: SOLR-9596.patch
>
>
> As a result of changes introduced in Lucene 6.2 by LUCENE-7323, 
> SimpleTextCodec's postings and doc values formats can only be used from 
> SimpleTextCodec.  That means that Solr's default codecFactory 
> SchemaCodecFactory, which enables per-field specification of postings and doc 
> values formats by extending LuceneXXCodecFactory to pull per-field 
> specification from the schema, can't be used with SimpleText postings and doc 
> values formats.
> What Solr could instead do is provide a non-schema-aware 
> SimpleTextCodecFactory.






[jira] [Created] (SOLR-9596) stopped working in Solr 6.2

2016-10-04 Thread Steve Rowe (JIRA)
Steve Rowe created SOLR-9596:


 Summary:  
stopped working in Solr 6.2
 Key: SOLR-9596
 URL: https://issues.apache.org/jira/browse/SOLR-9596
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Steve Rowe
Priority: Minor


As a result of changes introduced in Lucene 6.2 by LUCENE-7323, 
SimpleTextCodec's postings and doc values formats can only be used from 
SimpleTextCodec.  That means that Solr's default codecFactory 
SchemaCodecFactory, which enables per-field specification of postings and doc 
values formats by extending LuceneXXCodecFactory to pull per-field 
specification from the schema, can't be used with SimpleText postings and doc 
values formats.

What Solr could instead do is provide a non-schema-aware SimpleTextCodecFactory.






[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 591 - Failure

2016-10-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/591/

No tests ran.

Build Log:
[...truncated 264 lines...]
[javac] Compiling 441 source files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/core/classes/test
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/core/src/test/org/apache/lucene/codecs/lucene54/TestLucene54DocValuesFormat.java:474:
 warning: [cast] redundant cast to int
[javac] final int target = TestUtil.nextInt(random(), 0, (int) 
maxDoc);
[javac]  ^
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/core/src/test/org/apache/lucene/codecs/lucene70/TestLucene70DocValuesFormat.java:475:
 warning: [cast] redundant cast to int
[javac] final int target = TestUtil.nextInt(random(), 0, (int) 
maxDoc);
[javac]  ^
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/core/src/test/org/apache/lucene/codecs/perfield/TestPerFieldDocValuesFormat.java:205:
 error: 
 is not abstract and does not override abstract method 
addSortedSetField(FieldInfo,DocValuesProducer) in DocValuesConsumer
[javac]   return new DocValuesConsumer() {
[javac]  ^
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/core/src/test/org/apache/lucene/codecs/perfield/TestPerFieldDocValuesFormat.java:206:
 error: method does not override or implement a method from a supertype
[javac] @Override
[javac] ^
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/core/src/test/org/apache/lucene/codecs/perfield/TestPerFieldDocValuesFormat.java:208:
 error: incompatible types: Iterable cannot be converted to 
DocValuesProducer
[javac]   consumer.addNumericField(field, values);
[javac]   ^
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/core/src/test/org/apache/lucene/codecs/perfield/TestPerFieldDocValuesFormat.java:211:
 error: method does not override or implement a method from a supertype
[javac] @Override
[javac] ^
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/core/src/test/org/apache/lucene/codecs/perfield/TestPerFieldDocValuesFormat.java:213:
 error: incompatible types: Iterable cannot be converted to 
DocValuesProducer
[javac]   consumer.addBinaryField(field, values);
[javac]  ^
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/core/src/test/org/apache/lucene/codecs/perfield/TestPerFieldDocValuesFormat.java:216:
 error: method does not override or implement a method from a supertype
[javac] @Override
[javac] ^
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/core/src/test/org/apache/lucene/codecs/perfield/TestPerFieldDocValuesFormat.java:218:
 error: method addSortedField in class DocValuesConsumer cannot be applied to 
given types;
[javac]   consumer.addSortedField(field, values, docToOrd);
[javac]   ^
[javac]   required: FieldInfo,DocValuesProducer
[javac]   found: FieldInfo,Iterable,Iterable
[javac]   reason: actual and formal argument lists differ in length
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/core/src/test/org/apache/lucene/codecs/perfield/TestPerFieldDocValuesFormat.java:221:
 error: method does not override or implement a method from a supertype
[javac] @Override
[javac] ^
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/core/src/test/org/apache/lucene/codecs/perfield/TestPerFieldDocValuesFormat.java:223:
 error: method addSortedNumericField in class DocValuesConsumer cannot be 
applied to given types;
[javac]   consumer.addSortedNumericField(field, docToValueCount, 
values);
[javac]   ^
[javac]   required: FieldInfo,DocValuesProducer
[javac]   found: FieldInfo,Iterable,Iterable
[javac]   reason: actual and formal argument lists differ in length
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/core/src/test/org/apache/lucene/codecs/perfield/TestPerFieldDocValuesFormat.java:226:
 error: method does not override or implement a method from a supertype
[javac] @Override
[javac] ^
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/core/src/test/org/apache/lucene/codecs/perfield/TestPerFieldDocValuesFormat.java:228:
 error: method addSortedSetField in class DocValuesConsumer cannot 

[JENKINS] Lucene-Solr-Tests-master - Build # 1405 - Still Failing

2016-10-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1405/

No tests ran.

Build Log:
[...truncated 196 lines...]
[javac] Compiling 441 source files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/build/core/classes/test
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/core/src/test/org/apache/lucene/codecs/lucene54/TestLucene54DocValuesFormat.java:474:
 warning: [cast] redundant cast to int
[javac] final int target = TestUtil.nextInt(random(), 0, (int) 
maxDoc);
[javac]  ^
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/core/src/test/org/apache/lucene/codecs/lucene70/TestLucene70DocValuesFormat.java:475:
 warning: [cast] redundant cast to int
[javac] final int target = TestUtil.nextInt(random(), 0, (int) 
maxDoc);
[javac]  ^
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/core/src/test/org/apache/lucene/codecs/perfield/TestPerFieldDocValuesFormat.java:205:
 error: 
 is not abstract and does not override abstract method 
addSortedSetField(FieldInfo,DocValuesProducer) in DocValuesConsumer
[javac]   return new DocValuesConsumer() {
[javac]  ^
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/core/src/test/org/apache/lucene/codecs/perfield/TestPerFieldDocValuesFormat.java:206:
 error: method does not override or implement a method from a supertype
[javac] @Override
[javac] ^
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/core/src/test/org/apache/lucene/codecs/perfield/TestPerFieldDocValuesFormat.java:208:
 error: incompatible types: Iterable cannot be converted to 
DocValuesProducer
[javac]   consumer.addNumericField(field, values);
[javac]   ^
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/core/src/test/org/apache/lucene/codecs/perfield/TestPerFieldDocValuesFormat.java:211:
 error: method does not override or implement a method from a supertype
[javac] @Override
[javac] ^
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/core/src/test/org/apache/lucene/codecs/perfield/TestPerFieldDocValuesFormat.java:213:
 error: incompatible types: Iterable cannot be converted to 
DocValuesProducer
[javac]   consumer.addBinaryField(field, values);
[javac]  ^
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/core/src/test/org/apache/lucene/codecs/perfield/TestPerFieldDocValuesFormat.java:216:
 error: method does not override or implement a method from a supertype
[javac] @Override
[javac] ^
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/core/src/test/org/apache/lucene/codecs/perfield/TestPerFieldDocValuesFormat.java:218:
 error: method addSortedField in class DocValuesConsumer cannot be applied to 
given types;
[javac]   consumer.addSortedField(field, values, docToOrd);
[javac]   ^
[javac]   required: FieldInfo,DocValuesProducer
[javac]   found: FieldInfo,Iterable,Iterable
[javac]   reason: actual and formal argument lists differ in length
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/core/src/test/org/apache/lucene/codecs/perfield/TestPerFieldDocValuesFormat.java:221:
 error: method does not override or implement a method from a supertype
[javac] @Override
[javac] ^
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/core/src/test/org/apache/lucene/codecs/perfield/TestPerFieldDocValuesFormat.java:223:
 error: method addSortedNumericField in class DocValuesConsumer cannot be 
applied to given types;
[javac]   consumer.addSortedNumericField(field, docToValueCount, 
values);
[javac]   ^
[javac]   required: FieldInfo,DocValuesProducer
[javac]   found: FieldInfo,Iterable,Iterable
[javac]   reason: actual and formal argument lists differ in length
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/core/src/test/org/apache/lucene/codecs/perfield/TestPerFieldDocValuesFormat.java:226:
 error: method does not override or implement a method from a supertype
[javac] @Override
[javac] ^
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/core/src/test/org/apache/lucene/codecs/perfield/TestPerFieldDocValuesFormat.java:228:
 error: method addSortedSetField in class DocValuesConsumer cannot be applied 
to given types;
[javac]   consumer.addSortedSetField(field, values, 

[jira] [Closed] (SOLR-8826) SolrJ JDBC - ODBC-JDBC bridge documentation

2016-10-04 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden closed SOLR-8826.
--
Resolution: Information Provided
  Assignee: Kevin Risden

> SolrJ JDBC - ODBC-JDBC bridge documentation
> ---
>
> Key: SOLR-8826
> URL: https://issues.apache.org/jira/browse/SOLR-8826
> Project: Solr
>  Issue Type: Sub-task
>  Components: documentation, SolrJ
>Affects Versions: 6.0
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>
> Integrating SolrJ JDBC with an ODBC-JDBC bridge will be useful for software 
> like Excel/Tableau/etc. How to set it up should be documented.
> 1. Setup ODBC-JDBC bridge according to vendor instructions
> 2. 
> https://cwiki.apache.org/confluence/display/solr/Parallel+SQL+Interface#ParallelSQLInterface-Generic






[jira] [Updated] (SOLR-8826) SolrJ JDBC - ODBC-JDBC bridge documentation

2016-10-04 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8826:
---
Description: 
Integrating SolrJ JDBC with an ODBC-JDBC bridge will be useful for software 
like Excel/Tableau/etc. How to set it up should be documented.

1. Setup ODBC-JDBC bridge according to vendor instructions
2. 
https://cwiki.apache.org/confluence/display/solr/Parallel+SQL+Interface#ParallelSQLInterface-Generic

  was:Integrating SolrJ JDBC with an ODBC-JDBC bridge will be useful for 
software like Excel/Tableau/etc. How to set it up should be documented.


> SolrJ JDBC - ODBC-JDBC bridge documentation
> ---
>
> Key: SOLR-8826
> URL: https://issues.apache.org/jira/browse/SOLR-8826
> Project: Solr
>  Issue Type: Sub-task
>  Components: documentation, SolrJ
>Affects Versions: 6.0
>Reporter: Kevin Risden
>
> Integrating SolrJ JDBC with an ODBC-JDBC bridge will be useful for software 
> like Excel/Tableau/etc. How to set it up should be documented.
> 1. Setup ODBC-JDBC bridge according to vendor instructions
> 2. 
> https://cwiki.apache.org/confluence/display/solr/Parallel+SQL+Interface#ParallelSQLInterface-Generic






[jira] [Resolved] (LUCENE-7456) PerField(DocValues|Postings)Format do not call the per-field merge methods

2016-10-04 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-7456.

Resolution: Fixed

Thank you [~jmassenet-rakuten]!

> PerField(DocValues|Postings)Format do not call the per-field merge methods
> --
>
> Key: LUCENE-7456
> URL: https://issues.apache.org/jira/browse/LUCENE-7456
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/codecs
>Affects Versions: 6.2.1
>Reporter: Julien MASSENET
> Attachments: LUCENE-7456-v2.patch, LUCENE-7456.patch
>
>
> While porting some old codec code from Lucene 4.3.1, I couldn't get the 
> per-field formats to call upon the per-field merge methods; the default merge 
> method was always being called.
> I think this is a side-effect of LUCENE-5894.
> Attached is a patch with a test that reproduces the error and an associated 
> fix that pass the unit tests.






[jira] [Commented] (LUCENE-7456) PerField(DocValues|Postings)Format do not call the per-field merge methods

2016-10-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15545803#comment-15545803
 ] 

ASF subversion and git services commented on LUCENE-7456:
-

Commit 796ed508f39683c626d4870a7ab583a222b2c64c in lucene-solr's branch 
refs/heads/master from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=796ed50 ]

LUCENE-7456: PerFieldPostings/DocValuesFormat was failing to delegate the merge 
method


> PerField(DocValues|Postings)Format do not call the per-field merge methods
> --
>
> Key: LUCENE-7456
> URL: https://issues.apache.org/jira/browse/LUCENE-7456
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/codecs
>Affects Versions: 6.2.1
>Reporter: Julien MASSENET
> Attachments: LUCENE-7456-v2.patch, LUCENE-7456.patch
>
>
> While porting some old codec code from Lucene 4.3.1, I couldn't get the 
> per-field formats to call upon the per-field merge methods; the default merge 
> method was always being called.
> I think this is a side-effect of LUCENE-5894.
> Attached is a patch with a test that reproduces the error and an associated 
> fix that pass the unit tests.






[jira] [Commented] (SOLR-8826) SolrJ JDBC - ODBC-JDBC bridge documentation

2016-10-04 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15545802#comment-15545802
 ] 

Kevin Risden commented on SOLR-8826:


Using the OpenLink ODBC-JDBC driver with Mac and Windows seems to work. I have 
some details here: 
https://github.com/risdenk/solrj-jdbc-testing/blob/master/odbc/README.md

Not sure if it makes sense to plug specific ODBC-JDBC bridge vendors in the 
official Solr documentation. Here are the generic steps:

1. Setup ODBC-JDBC bridge according to vendor instructions
2. 
https://cwiki.apache.org/confluence/display/solr/Parallel+SQL+Interface#ParallelSQLInterface-Generic
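For reference, the JDBC side of step 2 amounts to pointing the bridge at the SolrJ driver. A sketch of the two values a bridge typically asks for, with the ZooKeeper host/port and collection name as placeholders (check the Parallel SQL Interface page above for the authoritative details):

```
# JDBC driver class (from the solr-solrj jar)
org.apache.solr.client.solrj.io.sql.DriverImpl

# JDBC connection URL: ZooKeeper connect string plus target collection
jdbc:solr://localhost:9983?collection=mycollection
```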

> SolrJ JDBC - ODBC-JDBC bridge documentation
> ---
>
> Key: SOLR-8826
> URL: https://issues.apache.org/jira/browse/SOLR-8826
> Project: Solr
>  Issue Type: Sub-task
>  Components: documentation, SolrJ
>Affects Versions: 6.0
>Reporter: Kevin Risden
>
> Integrating SolrJ JDBC with an ODBC-JDBC bridge will be useful for software 
> like Excel/Tableau/etc. How to set it up should be documented.






[jira] [Assigned] (SOLR-9012) SolrJ JDBC - Ensure that an ODBC-JDBC bridge works with SolrJ JDBC

2016-10-04 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden reassigned SOLR-9012:
--

Assignee: Kevin Risden

> SolrJ JDBC - Ensure that an ODBC-JDBC bridge works with SolrJ JDBC
> --
>
> Key: SOLR-9012
> URL: https://issues.apache.org/jira/browse/SOLR-9012
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: 6.0
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>
> There are a few ODBC-JDBC bridges out there. This is useful to be able to 
> hook up tools like Tableau to work with Solr. The ODBC-JDBC bridge will 
> require supporting Java 8. 






[jira] [Closed] (SOLR-9012) SolrJ JDBC - Ensure that an ODBC-JDBC bridge works with SolrJ JDBC

2016-10-04 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden closed SOLR-9012.
--
Resolution: Information Provided

> SolrJ JDBC - Ensure that an ODBC-JDBC bridge works with SolrJ JDBC
> --
>
> Key: SOLR-9012
> URL: https://issues.apache.org/jira/browse/SOLR-9012
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: 6.0
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>
> There are a few ODBC-JDBC bridges out there. This is useful to be able to 
> hook up tools like Tableau to work with Solr. The ODBC-JDBC bridge will 
> require supporting Java 8. 






[jira] [Commented] (SOLR-9012) SolrJ JDBC - Ensure that an ODBC-JDBC bridge works with SolrJ JDBC

2016-10-04 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15545797#comment-15545797
 ] 

Kevin Risden commented on SOLR-9012:


Great news with the OpenLink ODBC-JDBC bridge: it works as expected on Mac, and 
on Windows there is only a slight issue that I think is in the bridge itself (I 
have a ticket open with OpenLink about it). The ODBC-JDBC bridge works; it just 
needs the remaining items, like "select *" support, to be finished.

> SolrJ JDBC - Ensure that an ODBC-JDBC bridge works with SolrJ JDBC
> --
>
> Key: SOLR-9012
> URL: https://issues.apache.org/jira/browse/SOLR-9012
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: 6.0
>Reporter: Kevin Risden
>
> There are a few ODBC-JDBC bridges out there. This is useful to be able to 
> hook up tools like Tableau to work with Solr. The ODBC-JDBC bridge will 
> require supporting Java 8. 






[jira] [Commented] (LUCENE-7474) Improve doc values writers

2016-10-04 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15545780#comment-15545780
 ] 

Michael McCandless commented on LUCENE-7474:


+1, wonderful.

> Improve doc values writers
> --
>
> Key: LUCENE-7474
> URL: https://issues.apache.org/jira/browse/LUCENE-7474
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7474.patch
>
>
> One of the goals of the new iterator-based API is to better handle sparse 
> data. However, the current doc values writers still use a dense 
> representation, and some of them perform naive linear scans in the nextDoc 
> implementation.






[JENKINS] Lucene-Solr-Tests-6.x - Build # 463 - Still Failing

2016-10-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/463/

3 tests failed.
FAILED:  org.apache.solr.cloud.BasicDistributedZkTest.test

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([B02B01C054FBF51D]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZkTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([B02B01C054FBF51D]:0)


FAILED:  org.apache.solr.cloud.CdcrVersionReplicationTest.testCdcrDocVersions

Error Message:


Stack Trace:
org.apache.solr.common.cloud.ZooKeeperException: 
at 
__randomizedtesting.SeedInfo.seed([B02B01C054FBF51D:48BD0A62A69D1A01]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.connect(CloudSolrClient.java:576)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.waitForCollectionToDisappear(BaseCdcrDistributedZkTest.java:494)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.startServers(BaseCdcrDistributedZkTest.java:596)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.createSourceCollection(BaseCdcrDistributedZkTest.java:346)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.baseBefore(BaseCdcrDistributedZkTest.java:168)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:905)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-7456) PerField(DocValues|Postings)Format do not call the per-field merge methods

2016-10-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15545667#comment-15545667
 ] 

ASF subversion and git services commented on LUCENE-7456:
-

Commit a6a8032c7f079ea59daea0c95e48f69b2986d918 in lucene-solr's branch 
refs/heads/branch_6x from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a6a8032 ]

LUCENE-7456: PerFieldPostings/DocValuesFormat was failing to delegate the merge 
method


> PerField(DocValues|Postings)Format do not call the per-field merge methods
> --
>
> Key: LUCENE-7456
> URL: https://issues.apache.org/jira/browse/LUCENE-7456
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/codecs
>Affects Versions: 6.2.1
>Reporter: Julien MASSENET
> Attachments: LUCENE-7456-v2.patch, LUCENE-7456.patch
>
>
> While porting some old codec code from Lucene 4.3.1, I couldn't get the 
> per-field formats to call upon the per-field merge methods; the default merge 
> method was always being called.
> I think this is a side-effect of LUCENE-5894.
> Attached is a patch with a test that reproduces the error and an associated 
> fix that pass the unit tests.






[jira] [Commented] (LUCENE-7456) PerField(DocValues|Postings)Format do not call the per-field merge methods

2016-10-04 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15545657#comment-15545657
 ] 

Michael McCandless commented on LUCENE-7456:


Thanks [~jmassenet-rakuten], I think this patch is a good step forwards, and we 
can try to simplify the approach in future issues (progress not perfection!).

I'll fixup the minor failures from {{ant precommit}} and push.
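(For readers following along: the shape of the bug can be sketched in plain Java, with hypothetical names rather than the real Lucene classes. A per-field composite format has to dispatch merge to each field's own format; the bug was that a default merge path ran for every field instead.)

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of per-field merge delegation (not the actual Lucene
// API): the composite looks up the field's own format and delegates merge()
// to it, falling back to a default only when none is registered.
public class PerFieldMergeSketch {
    interface FieldFormat { String merge(String field); }

    static class DefaultFormat implements FieldFormat {
        public String merge(String field) { return "default:" + field; }
    }

    static class PerFieldFormat implements FieldFormat {
        final Map<String, FieldFormat> formats = new HashMap<>();
        public String merge(String field) {
            // The fix, in miniature: route to the per-field format rather
            // than always taking the default merge path.
            return formats.getOrDefault(field, new DefaultFormat()).merge(field);
        }
    }

    public static void main(String[] args) {
        PerFieldFormat composite = new PerFieldFormat();
        composite.formats.put("title", f -> "custom:" + f);
        System.out.println(composite.merge("title"));
        System.out.println(composite.merge("body"));
    }
}
```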

> PerField(DocValues|Postings)Format do not call the per-field merge methods
> --
>
> Key: LUCENE-7456
> URL: https://issues.apache.org/jira/browse/LUCENE-7456
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/codecs
>Affects Versions: 6.2.1
>Reporter: Julien MASSENET
> Attachments: LUCENE-7456-v2.patch, LUCENE-7456.patch
>
>
> While porting some old codec code from Lucene 4.3.1, I couldn't get the 
> per-field formats to call upon the per-field merge methods; the default merge 
> method was always being called.
> I think this is a side-effect of LUCENE-5894.
> Attached is a patch with a test that reproduces the error and an associated 
> fix that pass the unit tests.






[jira] [Updated] (LUCENE-7474) Improve doc values writers

2016-10-04 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-7474:
-
Attachment: LUCENE-7474.patch

Here is a patch. Writers now only store actual values (not placeholders for 
documents that do not have a value), and the set of documents that have a value 
for the field is encoded using a FixedBitSet. While this is still technically 
linear, it should be significantly faster in the sparse case, since many 
documents can be skipped at once.
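The idea generalizes beyond Lucene's FixedBitSet: keep a packed list of only the present values plus a bitset marking which documents have one, and iterate with nextSetBit so runs of absent documents are skipped in bulk. A self-contained sketch using java.util.BitSet (names are hypothetical, not the actual writer code):

```java
import java.util.ArrayList;
import java.util.BitSet;
import java.util.List;
import java.util.function.BiConsumer;

// Sketch of a sparse per-document value buffer: only present values are
// stored, and a bitset records which docIDs have one. Iteration jumps from
// one set bit to the next instead of scanning every docID linearly.
public class SparseValuesSketch {
    private final BitSet docsWithValue = new BitSet();
    private final List<Long> values = new ArrayList<>();

    void add(int docID, long value) {
        docsWithValue.set(docID);
        values.add(value); // callers add in increasing docID order
    }

    void forEach(BiConsumer<Integer, Long> consumer) {
        int idx = 0;
        for (int doc = docsWithValue.nextSetBit(0); doc >= 0;
             doc = docsWithValue.nextSetBit(doc + 1)) {
            consumer.accept(doc, values.get(idx++));
        }
    }

    public static void main(String[] args) {
        SparseValuesSketch buf = new SparseValuesSketch();
        buf.add(3, 42L);
        buf.add(1_000_000, 7L); // the ~1M absent docs in between are skipped
        buf.forEach((doc, v) -> System.out.println(doc + "=" + v));
    }
}
```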

> Improve doc values writers
> --
>
> Key: LUCENE-7474
> URL: https://issues.apache.org/jira/browse/LUCENE-7474
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7474.patch
>
>
> One of the goals of the new iterator-based API is to better handle sparse 
> data. However, the current doc values writers still use a dense 
> representation, and some of them perform naive linear scans in the nextDoc 
> implementation.






[jira] [Commented] (SOLR-9592) decorateDocValues cause serious performance issue because of using slowCompositeReaderWrapper

2016-10-04 Thread Takahiro Ishikawa (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15545623#comment-15545623
 ] 

Takahiro Ishikawa commented on SOLR-9592:
-

Thanks for the comments, Yonik!

I'm glad to address your concern!


bq. The method name change might help a little
Do you disagree with the renaming (-1), or weakly agree with it (+0)?
I agree with the renaming (Varun suggested it): at first glance, getLeafReader 
does not imply that it internally calls SlowCompositeReaderWrapper, and this 
makes performance bottlenecks difficult to find.


{quote}
but the real issue is knowing how to use things like MultiDocValues (i.e. you 
generally want to use them for the ord mapping, but not the other stuff!)
We should really cache the MultiDocValues created as well... but that can be a 
different JIRA.
{quote}
I may not be catching the meaning here. If my understanding is correct, we 
should use MultiDocValues in cases where we essentially need a global view, 
and the decorateDocValues usage is not such a case, right?
How to handle things like MultiDocValues in those cases (caching) is an 
interesting problem and might be a separate JIRA.

> decorateDocValues cause serious performance issue because of using 
> slowCompositeReaderWrapper
> -
>
> Key: SOLR-9592
> URL: https://issues.apache.org/jira/browse/SOLR-9592
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Response Writers, search
>Affects Versions: 6.0, 6.1, 6.2
>Reporter: Takahiro Ishikawa
>  Labels: performance
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9592.patch, SOLR-9592.patch, SOLR-9592_6x.patch
>
>
> I have a serious performance issue using AtomicUpdate (and RealtimeGet) with 
> non-stored docValues.
> decorateDocValues tries to merge each leafReader on the fly via 
> slowCompositeReaderWrapper, and it's extremely slow (> 10sec).
> Simply accessing docValues via the non-composite reader resolves this 
> issue (see patch).
> AtomicUpdate performance(or RealtimeGet performance)
> * Environment
> ** solr version : 6.0.0
> ** schema ~ 100 fields(90% docValues, some of those are multi valued)
> ** index : 5,000,000
> * Performance
> ** original :  > 10sec per query
> ** patched : at least 100msec per query
> This patch will also enhance search performance, because DocStreamer also 
> fetches docValues via decorateDocValues.
> Though it depends on the environment, I measured a 20% search performance 
> gain.
> (This patch was originally written for Solr 6.0.0 and has now been rewritten 
> for master.)
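The per-leaf access the patch takes depends on routing a composite-level docID to the segment that holds it, instead of building a merged view through slowCompositeReaderWrapper. The routing step can be sketched with a binary search over leaf doc bases, which is what Lucene's ReaderUtil.subIndex does; the class name and sample doc bases below are illustrative:

```java
import java.util.Arrays;

/**
 * Sketch of routing a composite-level docID to its leaf (segment).
 * docBases[i] is the first global docID of leaf i, as in Lucene's
 * LeafReaderContext.docBase; the lookup mirrors ReaderUtil.subIndex.
 */
public class LeafRouter {
    /** Index of the leaf containing globalDoc. */
    static int subIndex(int globalDoc, int[] docBases) {
        int i = Arrays.binarySearch(docBases, globalDoc);
        // Exact hit: globalDoc is the first doc of leaf i.
        // Miss: (insertion point - 1) is the leaf whose base precedes globalDoc.
        return i >= 0 ? i : -i - 2;
    }

    public static void main(String[] args) {
        int[] docBases = {0, 100, 250};  // three segments starting at these global docIDs
        int globalDoc = 180;
        int leaf = subIndex(globalDoc, docBases);
        int localDoc = globalDoc - docBases[leaf]; // doc values are then read per-leaf
        System.out.println("leaf=" + leaf + " localDoc=" + localDoc); // leaf=1 localDoc=80
    }
}
```

This is O(log numLeaves) per document, versus the on-the-fly merge of every segment that the composite wrapper performs.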






[jira] [Created] (LUCENE-7474) Improve doc values writers

2016-10-04 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-7474:


 Summary: Improve doc values writers
 Key: LUCENE-7474
 URL: https://issues.apache.org/jira/browse/LUCENE-7474
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor


One of the goals of the new iterator-based API is to better handle sparse data. 
However, the current doc values writers still use a dense representation, and 
some of them perform naive linear scans in the nextDoc implementation.






[jira] [Resolved] (LUCENE-7472) MultiFieldQueryParser.getFieldQuery() drops queries that are neither BooleanQuery nor TermQuery

2016-10-04 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved LUCENE-7472.

   Resolution: Fixed
 Assignee: Steve Rowe
Fix Version/s: 6.2.2
   6.3
   master (7.0)

Pushed to master, branch_6x and branch_6_2, with slightly different testing on 
master versus the other two branches, since the default split-on-whitespace 
query parser option, which affects multi-term synonyms used in the added test, 
will change on master/7.0.

On the java-user mailing list, Oliver Kaleske reported:

{quote}
I locally applied the patch on branch_6_2 (because that is closest to my 
current 6.2.1 dependency) and built Lucene from there.
Using the outcome in my application, the problem observed there is fixed.
{quote}

> MultiFieldQueryParser.getFieldQuery() drops queries that are neither 
> BooleanQuery nor TermQuery 
> 
>
> Key: LUCENE-7472
> URL: https://issues.apache.org/jira/browse/LUCENE-7472
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Steve Rowe
>Assignee: Steve Rowe
> Fix For: master (7.0), 6.3, 6.2.2
>
> Attachments: LUCENE-7472.patch
>
>
> From 
> [http://mail-archives.apache.org/mod_mbox/lucene-java-user/201609.mbox/%3c944985a6ac27425681bd27abe9d90...@ska-wn-e132.ptvag.ptv.de%3e],
>  Oliver Kaleske reports:
> {quote}
> Hi,
> in updating Lucene from 6.1.0 to 6.2.0 I came across the following:
> We have a subclass of MultiFieldQueryParser (MFQP) for creating a custom type 
> of Query, which calls getFieldQuery() on its base class (MFQP).
> For each of its search fields, this method has a Query created by calling 
> getFieldQuery() on QueryParserBase.
> Ultimately, we wind up in QueryBuilder's createFieldQuery() method, which 
> depending on the number of tokens (etc.) decides what type of Query to 
> return: a TermQuery, BooleanQuery, PhraseQuery, or MultiPhraseQuery.
> Back in MFQP.getFieldQuery(), a variable maxTerms is determined depending on 
> the type of Query returned: for a TermQuery or a BooleanQuery, its value will 
> in general be nonzero, clauses are created, and a non-null Query is returned.
> However, other Query subclasses result in maxTerms=0, an empty list of 
> clauses, and finally null is returned.
> To me, this seems like a bug, but I might as well be missing something. The 
> comment "// happens for stopwords" on the return null statement, however, 
> seems to suggest that Query types other than TermQuery and BooleanQuery were 
> not considered properly here.
> I should point out that our custom MFQP subclass so far does some rather 
> unsophisticated tokenization before calling getFieldQuery() on each token, so 
> characters like '*' may still slip through. So perhaps with proper 
> tokenization, it is guaranteed that only TermQuery and BooleanQuery can come 
> out of the chain of getFieldQuery() calls, and not handling 
> (Multi)PhraseQuery in MFQP.getFieldQuery() can never cause trouble?
> The code in MFQP.getFieldQuery dates back to
> LUCENE-2605: Add classic QueryParser option setSplitOnWhitespace() to control 
> whether to split on whitespace prior to text analysis.  Default behavior 
> remains unchanged: split-on-whitespace=true.
> (06 Jul 2016), when it was substantially expanded.
> Best regards,
> Oliver
> {quote}
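The failure mode Oliver describes boils down to a dispatcher that special-cases two Query subclasses and silently returns null for everything else. Below is a simplified, self-contained sketch of that shape; the stand-in classes are hypothetical, not Lucene's real ones, and the "fixed" method only illustrates the pass-through idea, not the actual patch:

```java
/** Stand-in query types (not Lucene's real classes). */
class Query {}
class TermQuery extends Query {}
class BooleanQuery extends Query {}
class PhraseQuery extends Query {}

public class FieldQueryDispatch {
    /** Buggy shape: maxTerms stays 0 for PhraseQuery etc., so null comes back. */
    static Query buggy(Query q) {
        int maxTerms = 0;
        if (q instanceof TermQuery) maxTerms = 1;
        else if (q instanceof BooleanQuery) maxTerms = 2;
        // "happens for stopwords" -- but also silently drops other Query types
        return maxTerms > 0 ? q : null;
    }

    /** Fixed shape: unrecognized Query types are passed through, not dropped. */
    static Query fixed(Query q) {
        if (q instanceof TermQuery || q instanceof BooleanQuery) {
            return q; // clause-building path (elided in this sketch)
        }
        return q;     // pass through PhraseQuery, MultiPhraseQuery, ...
    }

    public static void main(String[] args) {
        System.out.println(buggy(new PhraseQuery()));          // null: query lost
        System.out.println(fixed(new PhraseQuery()) != null);  // true
    }
}
```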






[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_102) - Build # 1870 - Failure!

2016-10-04 Thread Policeman Jenkins Server

java.lang.OutOfMemoryError: Java heap space


[jira] [Commented] (LUCENE-7472) MultiFieldQueryParser.getFieldQuery() drops queries that are neither BooleanQuery nor TermQuery

2016-10-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15545600#comment-15545600
 ] 

ASF subversion and git services commented on LUCENE-7472:
-

Commit 1963b1701d2c331daa452ae6d16fc754c3e84bc4 in lucene-solr's branch 
refs/heads/branch_6x from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1963b17 ]

LUCENE-7472: switch TestMultiFieldQueryParser.testSynonyms default 
split-on-whitespace to true (it's false on master/7.0)


> MultiFieldQueryParser.getFieldQuery() drops queries that are neither 
> BooleanQuery nor TermQuery 
> 
>
> Key: LUCENE-7472
> URL: https://issues.apache.org/jira/browse/LUCENE-7472
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Steve Rowe
> Attachments: LUCENE-7472.patch
>
>






[jira] [Commented] (LUCENE-7472) MultiFieldQueryParser.getFieldQuery() drops queries that are neither BooleanQuery nor TermQuery

2016-10-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15545601#comment-15545601
 ] 

ASF subversion and git services commented on LUCENE-7472:
-

Commit 6739e075b4c1dedab3b49b1d299cd713135c1ec3 in lucene-solr's branch 
refs/heads/master from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6739e07 ]

LUCENE-7472: MultiFieldQueryParser.getFieldQuery() drops queries that are 
neither BooleanQuery nor TermQuery.


> MultiFieldQueryParser.getFieldQuery() drops queries that are neither 
> BooleanQuery nor TermQuery 
> 
>
> Key: LUCENE-7472
> URL: https://issues.apache.org/jira/browse/LUCENE-7472
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Steve Rowe
> Attachments: LUCENE-7472.patch
>
>






[jira] [Commented] (LUCENE-7472) MultiFieldQueryParser.getFieldQuery() drops queries that are neither BooleanQuery nor TermQuery

2016-10-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15545598#comment-15545598
 ] 

ASF subversion and git services commented on LUCENE-7472:
-

Commit 12e7384b35a92a366e74af5fd4aed4f555ffd2da in lucene-solr's branch 
refs/heads/branch_6_2 from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=12e7384 ]

LUCENE-7472: switch TestMultiFieldQueryParser.testSynonyms default 
split-on-whitespace to true (it's false on master/7.0)


> MultiFieldQueryParser.getFieldQuery() drops queries that are neither 
> BooleanQuery nor TermQuery 
> 
>
> Key: LUCENE-7472
> URL: https://issues.apache.org/jira/browse/LUCENE-7472
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Steve Rowe
> Attachments: LUCENE-7472.patch
>
>






[jira] [Commented] (LUCENE-7472) MultiFieldQueryParser.getFieldQuery() drops queries that are neither BooleanQuery nor TermQuery

2016-10-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15545597#comment-15545597
 ] 

ASF subversion and git services commented on LUCENE-7472:
-

Commit 4ecc9d8eeac781ecb5f141491057a57226f61c6a in lucene-solr's branch 
refs/heads/branch_6_2 from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4ecc9d8 ]

LUCENE-7472: move CHANGES.txt entry under 6.2.2 section


> MultiFieldQueryParser.getFieldQuery() drops queries that are neither 
> BooleanQuery nor TermQuery 
> 
>
> Key: LUCENE-7472
> URL: https://issues.apache.org/jira/browse/LUCENE-7472
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Steve Rowe
> Attachments: LUCENE-7472.patch
>
>





