[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 26 - Still Failing

2016-03-30 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/26/

2 tests failed.
FAILED:  org.apache.solr.cloud.hdfs.HdfsChaosMonkeyNothingIsSafeTest.test

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:55582/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:55582/collection1
at 
__randomizedtesting.SeedInfo.seed([8CE000DD1061C8CA:4B43F07BE9DA532]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:588)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:484)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:463)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.commit(AbstractFullDistribZkTestBase.java:1523)
at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test(ChaosMonkeyNothingIsSafeTest.java:204)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-7148) Support boolean subset matching

2016-03-30 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219346#comment-15219346
 ] 

David Smiley commented on LUCENE-7148:
--

FWIW I did this years ago to implement search with special security rules 
that demand a user have at least all of a set of auth tokens for a document to 
match.  Today this is a little easier thanks to {{FingerprintFilter}} on the 
indexing side.  On the search side, you can get creative with a {{RegexpQuery}} 
(easiest) or write a special Query implementation that works somewhat like, but 
not the same as, TermsQuery (which is what I had done).
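For illustration, here is a minimal sketch of that approach; the Lucene classes 
(FingerprintFilter, RegexpQuery) are real, but the regexp construction and the 
class/method names below are assumptions for this example, not a description of 
any committed code.

{code}
// Illustrative sketch only: subset matching via FingerprintFilter at index
// time plus a RegexpQuery at search time.
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.miscellaneous.FingerprintFilter;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.RegexpQuery;

public class SubsetMatchSketch {

  /** Index side: the field's terms collapse into one sorted, de-duplicated,
   *  space-separated token, e.g. "C A B" is indexed as "A B C". */
  public static Analyzer fingerprintAnalyzer() {
    return new Analyzer() {
      @Override
      protected TokenStreamComponents createComponents(String fieldName) {
        Tokenizer source = new WhitespaceTokenizer();
        return new TokenStreamComponents(source, new FingerprintFilter(source));
      }
    };
  }

  /** Search side: because every fingerprint is sorted and de-duplicated, a
   *  document's term set is a subset of the query terms exactly when its whole
   *  fingerprint is built from query-set terms separated by single spaces.
   *  (Assumes plain alphanumeric terms; real code would escape regexp
   *  metacharacters.) */
  public static Query subsetQuery(String field, String... queryTerms) {
    String any = "(" + String.join("|", queryTerms) + ")";
    return new RegexpQuery(new Term(field, any + "( " + any + ")*"));
  }
}
{code}

With the example from the issue description below, subsetQuery("f", "A", "B", 
"C", "D") matches a document whose field f has terms A, B, C (fingerprint 
"A B C"), while subsetQuery("f", "A", "B", "D") does not.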

> Support boolean subset matching
> ---
>
> Key: LUCENE-7148
> URL: https://issues.apache.org/jira/browse/LUCENE-7148
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/search
>Affects Versions: 5.x
>Reporter: Otmar Caduff
>  Labels: newbie
>
> In Lucene, I know of the possibility of Occur.SHOULD, Occur.MUST and the 
> “minimum should match” setting on the boolean query.
> Now, when querying, I want to
> - (1)  match the documents which either contain all the terms of the query 
> (Occur.MUST for all terms would do that) or,
> - (2)  if all terms for a given field of a document are a subset of the query 
> terms, that document should match as well.
> Example:
> Document d has field f with terms A, B, C
> Query with the following terms should match that document:
> A
> B
> A B
> A B C
> A B C D
> Query with the following terms should not match:
> D
> A B D



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8925) Add VerticesStream to support vertex iteration

2016-03-30 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8925:
-
Description: As SOLR- is close to wrapping up, we can use the same 
parallel join approach to create a VerticesStream. The VerticesStream can be 
wrapped by other Streams to perform operations over the stream and will also 
provide basic vertex iteration capabilities for a Tinkerpop/Gremlin 
implementation.  (was: As SOLR- is close to wrapping up, we can use the 
same parallel join approach to create a VertexStream. The VertexStream can be 
wrapped by other Streams to perform operations over the stream and will also 
provide basic vertex iteration capabilities for a Tinkerpop/Gremlin 
implementation.)

> Add VerticesStream to support vertex iteration
> --
>
> Key: SOLR-8925
> URL: https://issues.apache.org/jira/browse/SOLR-8925
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>
> As SOLR- is close to wrapping up, we can use the same parallel join 
> approach to create a VerticesStream. The VerticesStream can be wrapped by 
> other Streams to perform operations over the stream and will also provide 
> basic vertex iteration capabilities for a Tinkerpop/Gremlin implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8925) Add VerticesStream to support vertex iteration

2016-03-30 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-8925:


 Summary: Add VerticesStream to support vertex iteration
 Key: SOLR-8925
 URL: https://issues.apache.org/jira/browse/SOLR-8925
 Project: Solr
  Issue Type: New Feature
Reporter: Joel Bernstein


As SOLR- is close to wrapping up, we can use the same parallel join 
approach to create a VertexStream. The VertexStream can be wrapped by other 
Streams to perform operations over the stream and will also provide basic 
vertex iteration capabilities for a Tinkerpop/Gremlin implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: 6.0 Release

2016-03-30 Thread David Smiley
That was an excellent summary Rob; thanks.
Minor nit: BBoxSpatialStrategy isn't/wasn't deprecated.  It was enhanced to
use PointValues.

I too would like to see the legacy numerics stay in "backwards-codecs" as
you describe with precisionStep specified on the Analyzer.

I disagree with Shawn about #5, that a user with a Solr 6.0 index must be
able to upgrade straight to 7.0.  Perhaps this has been the case for every
major release in the past, and it would be nice if it continues, if for no
other reason than consistency.  But, IMO, that's kind of cosmetic -- it
isn't important.  What matters is that an eventual 6.x release occurs that
allows someone to upgrade to 7.0 -- that there's a path forward.  And that
one can always upgrade from one 6.x release to any greater 6.x release.

Quoting Adrien:
bq. Detour: In the future I wonder whether we should consider having separate
release cycles again. In addition to giving Solr more time to use new
Lucene features like here, it would also remove the issue that we had when
releasing 5.3.2 after 5.4.0, which makes perfect sense from a Solr
perspective but not from Lucene since it introduces blind spots in the
testing of index backward compatibility.

+1 to that!  I've had that thought.  It would be awesome for Solr to
release when it feels it's right, independently of Lucene.  If that's too
difficult/problematic then perhaps keep synchronizing releases but allow
Lucene & Solr's release version to vary. Then we'd be having a Solr 5.6
release here.

~ David

On Wed, Mar 30, 2016 at 9:39 PM Robert Muir  wrote:

> On Wed, Mar 30, 2016 at 12:43 PM, Adrien Grand  wrote:
> > Hi Shawn,
> >
> > I think marking the legacy fields/queries as deprecated in Lucene in 6.0
> is
> > the right thing to do in order to encourage users to migrate to the new
> > points API. If Solr needs to keep them around for 7.x, it would be fine
> to
> > move them to solr/ instead of lucene/ instead of a hard removal. Given
> that
> > it works on top of the postings API, it would not break.
>
> Also see my issue (https://issues.apache.org/jira/browse/LUCENE-7075)
> where I proposed to at least get things headed to the backwards/ jar.
> And the uninverting issue is still being discussed. If you look at
> linked issues you will see the deprecated encoding is involved with
> the following modules:
>
> * core (not just field/query/utils classes, but stuff like
> precisionStep in the .document api!)
> * spatial (Deprecated GeoPoint encoding etc)
> * spatial-extras (Deprecated Bbox encoding etc)
> * misc (UninvertingReader)
> * queryparser (flexible and xml)
> * join
>
> The purpose of that issue is to make sure people have the stuff they
> need to move their code off the old encoding. I personally thought this
> would make the transition easier, and it was finding bugs/problems in
> points and improving the apis. I imagined it would just be me, but i
> created a ton of linked issues all up front just in case. I did not
> think anyone else would really be excited to work on these, because
> it's not particularly exciting stuff, but thanks Nick, David, Martijn,
> etc who did. I didn't try to plan any grandiose schemes of *actually
> pulling the old encoding out* because this was plenty on its own. I
> tried to work on the fieldcache only because I was talking to Tomas
> and he mentioned it as a difficulty in cutting over solr. But I bailed
> after encountering complexity, and don't think it is the way to go,
> read the issue for my explanation.
>
> To me, this is why we have a backwards compatibility policy for N-1,
> it has to be a volunteer thing for some of this stuff: can't all be on
> Mike.
>
> I do personally think it is enough to release, "removing" or "moving"
> deprecations is something to worry about for master branch.
>
> I did mention in the issue an idea for a first step would be to get
> the core/ stuff pulled out somewhere better.  Maybe the core/ stuff
> should go to the backwards-codec jar if we can detangle the
> deprecations from the .document api (e.g. maybe precisionStep can be a
> parameter on a tokenizer or analyzer or something, so its a little bit
> harder to use, but still works and not holding back core/'s .document
> api). But what to do about the other stuff?
>
> If i wanted to start removing deprecations now, I would be trying to
> just factor out the core/ NumericRangeQuery/NumericField stuff out to
> the backwards-codec jar. I hate modules depending on other ones, I
> really do, but just to iterate, I'd temporarily make all those other
> modules depend on backwards-codec/ jar and then remove deprecations
> from each one-by-one. It's too much to do all at once. I think we can
> do it this way iteratively without breaking solr.
>
> If solr wants to hang on to e.g. some spatial field with old numerics
> for an additional time (since it was still using it for 6.0), then the
> deprecated spatial field can be moved to solr. If not, let's nuke it.
>

[jira] [Commented] (SOLR-8914) ZkStateReader's refreshLiveNodes(Watcher) is not thread safe

2016-03-30 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219286#comment-15219286
 ] 

Steve Rowe commented on SOLR-8914:
--

Zero failures on second 100 iterations.

> ZkStateReader's refreshLiveNodes(Watcher) is not thread safe
> 
>
> Key: SOLR-8914
> URL: https://issues.apache.org/jira/browse/SOLR-8914
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
> Attachments: SOLR-8914.patch, SOLR-8914.patch, SOLR-8914.patch, 
> SOLR-8914.patch, jenkins.thetaphi.de_Lucene-Solr-6.x-Solaris_32.log.txt, 
> live_node_mentions_port56361_with_threadIds.log.txt, 
> live_nodes_mentions.log.txt
>
>
> Jenkins encountered a failure in TestTolerantUpdateProcessorCloud over the 
> weekend
> {noformat}
> http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/32/consoleText
> Checking out Revision c46d7686643e7503304cb35dfe546bce9c6684e7 
> (refs/remotes/origin/branch_6x)
> Using Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC
> {noformat}
> The failure happened during the static setup of the test, when a 
> MiniSolrCloudCluster & several clients are initialized -- before any code 
> related to TolerantUpdateProcessor is ever used.
> I can't reproduce this, or really make sense of what i'm (not) seeing here in 
> the logs, so i'm filing this jira with my analysis in the hopes that someone 
> else can help make sense of it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8922) DocSetCollector can allocate massive garbage on large indexes

2016-03-30 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219235#comment-15219235
 ] 

Erick Erickson commented on SOLR-8922:
--

I'm also wondering if there are some tricks where we could re-use memory rather 
than allocating new memory all the time. I guess the reason I'm obsessing about 
this is that anything that accounts for this much of the garbage collected seems 
like it's worth the effort. This certainly seems like one of those odd places 
where efficiency might trump simplicity...

I wonder if there's something we could do that would allocate some kind of 
reusable pool. I'm thinking of something stupid-simple that we could use for 
benchmarking (not for commit or production) to get a handle on the dimension of 
the problem and how broadly it would apply...

After all, GC is one of the things we spend a _lot_ of time on when supporting 
Solr.

> DocSetCollector can allocate massive garbage on large indexes
> -
>
> Key: SOLR-8922
> URL: https://issues.apache.org/jira/browse/SOLR-8922
> Project: Solr
>  Issue Type: Improvement
>Reporter: Jeff Wartes
> Attachments: SOLR-8922.patch
>
>
> After reaching a point of diminishing returns tuning the GC collector, I 
> decided to take a look at where the garbage was coming from. To my surprise, 
> it turned out that for my index and query set, almost 60% of the garbage was 
> coming from this single line:
> https://github.com/apache/lucene-solr/blob/94c04237cce44cac1e40e1b8b6ee6a6addc001a5/solr/core/src/java/org/apache/solr/search/DocSetCollector.java#L49
> This is due to the simple fact that I have 86M documents in my shards. 
> Allocating a scratch array big enough to track a result set 1/64th of my 
> index (1.3M) is also almost certainly excessive, considering my 99.9th 
> percentile hit count is less than 56k.
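As a rough illustration of the alternative direction discussed on this issue 
(Jeff Wartes' follow-up later in this digest mentions an ExpandingIntArray that 
grows along powers of two), here is a hypothetical sketch of that growth 
pattern; the class and method names are illustrative, not the patch's actual 
API.

{code}
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a doc-id buffer that grows in power-of-two chunks,
// so small result sets never pay for a maxDoc/64-sized upfront scratch array.
class GrowingIntBuffer {
  private final List<int[]> chunks = new ArrayList<>();
  private int[] current;
  private int posInChunk;
  private int size;
  private int nextChunkSize = 32; // first chunk; doubles on each expansion

  void add(int docId) {
    if (current == null || posInChunk == current.length) {
      current = new int[nextChunkSize];
      chunks.add(current);
      nextChunkSize <<= 1;
      posInChunk = 0;
    }
    current[posInChunk++] = docId;
    size++;
  }

  int size() { return size; }

  int[] toArray() {
    int[] out = new int[size];
    int offset = 0;
    for (int[] chunk : chunks) {
      int len = Math.min(chunk.length, size - offset);
      System.arraycopy(chunk, 0, out, offset, len);
      offset += len;
    }
    return out;
  }
}
{code}

With this shape, a query matching only a few thousand documents allocates a few 
small chunks instead of a scratch array sized to 1/64th of the index.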



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8875) ZkStateWriter.writePendingUpdates() can return null clusterState; NPE in Overseer

2016-03-30 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219229#comment-15219229
 ] 

David Smiley commented on SOLR-8875:


This code is new to me but what you say resonates with me.  I also got the 
sense these two things were trying to keep track of the clusterState, and that 
seems redundant.  When I get back to SOLR-5750 in the next few days I'll apply 
this and see if it goes away.  If so I'll commit it.  It'll be good to have 
Jenkins beating on it for a while.

> ZkStateWriter.writePendingUpdates() can return null clusterState; NPE in 
> Overseer
> -
>
> Key: SOLR-8875
> URL: https://issues.apache.org/jira/browse/SOLR-8875
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: David Smiley
> Attachments: SOLR-8875.patch
>
>
> While trying to get the test in Varun's patch in SOLR-5750 (SolrCloud 
> collection backup & restore) to succeed, I kept hitting an NPE in Overseer in 
> which clusterState was null.  I added a bunch of asserts and found where it 
> was happening which I worked around temporarily.  See 
> https://github.com/apache/lucene-solr/commit/fd9c4d59e8dbe0e9fbd99a40ed2ff746c1ae7a0c#diff-9ed614eee66b9e685d73446b775dc043R247
>  which is ZkStateWriter.writePendingUpdates() returning null, overwriting the 
> current non-null clusterState.  This happens nearly every time for me when I 
> run that test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8924) RollupStream breaks with null values in the group by buckets

2016-03-30 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-8924:


 Summary: RollupStream breaks with null values in the group by 
buckets
 Key: SOLR-8924
 URL: https://issues.apache.org/jira/browse/SOLR-8924
 Project: Solr
  Issue Type: Bug
Reporter: Joel Bernstein


Currently the RollupStream throws an NPE when there are null values in the 
rollup buckets. This affects the SQL group-by queries in map_reduce mode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8812) ExtendedDismaxQParser (edismax) ignores Boolean OR when q.op=AND

2016-03-30 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219210#comment-15219210
 ] 

Erick Erickson commented on SOLR-8812:
--

Thanks to you both for jumping in here. I'll see what I can do to get this into 
any 5.5.1 and also 6.0 (and trunk of course).

[~janhoy] What do you think? We can raise adding tests as a separate JIRA...

> ExtendedDismaxQParser (edismax) ignores Boolean OR when q.op=AND
> 
>
> Key: SOLR-8812
> URL: https://issues.apache.org/jira/browse/SOLR-8812
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers
>Affects Versions: 5.5
>Reporter: Ryan Steinberg
>Assignee: Erick Erickson
>Priority: Blocker
> Fix For: 6.0, 5.5.1
>
> Attachments: SOLR-8812.patch
>
>
> The edismax parser ignores Boolean OR in queries when q.op=AND. This behavior 
> is new to Solr 5.5.0 and an unexpected major change.
> Example:
>   "q": "id:12345 OR zz",
>   "defType": "edismax",
>   "q.op": "AND",
> where "12345" is a known document ID and "zz" is a string NOT present 
> in my data
> Version 5.5.0 produces zero results:
> "rawquerystring": "id:12345 OR zz",
> "querystring": "id:12345 OR zz",
> "parsedquery": "(+((id:12345 
> DisjunctionMaxQuery((text:zz)))~2))/no_coord",
> "parsedquery_toString": "+((id:12345 (text:zz))~2)",
> "explain": {},
> "QParser": "ExtendedDismaxQParser"
> Version 5.4.0 produces one result as expected
>   "rawquerystring": "id:12345 OR zz",
> "querystring": "id:12345 OR zz",
> "parsedquery": "(+(id:12345 
> DisjunctionMaxQuery((text:zz/no_coord",
> "parsedquery_toString": "+(id:12345 (text:zz))"
> "explain": {},
> "QParser": "ExtendedDismaxQParser"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: 6.0 Release

2016-03-30 Thread Robert Muir
On Wed, Mar 30, 2016 at 12:43 PM, Adrien Grand  wrote:
> Hi Shawn,
>
> I think marking the legacy fields/queries as deprecated in Lucene in 6.0 is
> the right thing to do in order to encourage users to migrate to the new
> points API. If Solr needs to keep them around for 7.x, it would be fine to
> move them to solr/ instead of lucene/ instead of a hard removal. Given that
> it works on top of the postings API, it would not break.

Also see my issue (https://issues.apache.org/jira/browse/LUCENE-7075)
where I proposed to at least get things headed to the backwards/ jar.
And the uninverting issue is still being discussed. If you look at
linked issues you will see the deprecated encoding is involved with
the following modules:

* core (not just field/query/utils classes, but stuff like
precisionStep in the .document api!)
* spatial (Deprecated GeoPoint encoding etc)
* spatial-extras (Deprecated Bbox encoding etc)
* misc (UninvertingReader)
* queryparser (flexible and xml)
* join

The purpose of that issue is to make sure people have the stuff they
need to move their code off the old encoding. I personally thought this
would make the transition easier, and it was finding bugs/problems in
points and improving the apis. I imagined it would just be me, but i
created a ton of linked issues all up front just in case. I did not
think anyone else would really be excited to work on these, because
it's not particularly exciting stuff, but thanks Nick, David, Martijn,
etc who did. I didn't try to plan any grandiose schemes of *actually
pulling the old encoding out* because this was plenty on its own. I
tried to work on the fieldcache only because I was talking to Tomas
and he mentioned it as a difficulty in cutting over solr. But I bailed
after encountering complexity, and don't think it is the way to go,
read the issue for my explanation.

To me, this is why we have a backwards compatibility policy for N-1,
it has to be a volunteer thing for some of this stuff: can't all be on
Mike.

I do personally think it is enough to release, "removing" or "moving"
deprecations is something to worry about for master branch.

I did mention in the issue an idea for a first step would be to get
the core/ stuff pulled out somewhere better.  Maybe the core/ stuff
should go to the backwards-codec jar if we can detangle the
deprecations from the .document api (e.g. maybe precisionStep can be a
parameter on a tokenizer or analyzer or something, so its a little bit
harder to use, but still works and not holding back core/'s .document
api). But what to do about the other stuff?

If i wanted to start removing deprecations now, I would be trying to
just factor out the core/ NumericRangeQuery/NumericField stuff out to
the backwards-codec jar. I hate modules depending on other ones, I
really do, but just to iterate, I'd temporarily make all those other
modules depend on backwards-codec/ jar and then remove deprecations
from each one-by-one. It's too much to do all at once. I think we can
do it this way iteratively without breaking solr.

If solr wants to hang on to e.g. some spatial field with old numerics
for an additional time (since it was still using it for 6.0), then the
deprecated spatial field can be moved to solr. If not, let's nuke it.

To me this seems the least controversial path, and it's something that
can be done iteratively. It has the downside of keeping "core"
deprecated legacy numerics around for an extra major release in the
backwards-codec jar. I think this "extra" back compat is ok in this
case. Uwe made clean code :)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+111-patched) - Build # 303 - Still Failing!

2016-03-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/303/
Java: 64bit/jdk-9-ea+111-patched -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 
-XX:-CompactStrings

All tests passed

Build Log:
[...truncated 12304 lines...]
   [junit4] JVM J2: stdout was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/temp/junit4-J2-20160331_005919_647.sysout
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] #
   [junit4] # A fatal error has been detected by the Java Runtime Environment:
   [junit4] #
   [junit4] #  SIGSEGV (0xb) at pc=0x7f111b60ef80, pid=1566, tid=1627
   [junit4] #
   [junit4] # JRE version: Java(TM) SE Runtime Environment (9.0+111) (build 
9-ea+111)
   [junit4] # Java VM: Java HotSpot(TM) 64-Bit Server VM (9-ea+111, mixed mode, 
tiered, concurrent mark sweep gc, linux-amd64)
   [junit4] # Problematic frame:
   [junit4] # V  [libjvm.so+0xa23f80]  
MarkSweep::IsAliveClosure::do_object_b(oopDesc*)+0x0
   [junit4] #
   [junit4] # No core dump will be written. Core dumps have been disabled. To 
enable core dumping, try "ulimit -c unlimited" before starting Java again
   [junit4] #
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/hs_err_pid1566.log
   [junit4] Compiled method (c2) 1999876 37505   !   4   
org.apache.solr.update.SolrCmdDistributor::submit (301 bytes)
   [junit4]  total in heap  [0x7f1105a8cf10,0x7f1105a9e6f8] = 71656
   [junit4]  relocation [0x7f1105a8d050,0x7f1105a8dd90] = 3392
   [junit4]  main code  [0x7f1105a8dda0,0x7f1105a96c60] = 36544
   [junit4]  stub code  [0x7f1105a96c60,0x7f1105a97380] = 1824
   [junit4]  oops   [0x7f1105a97380,0x7f1105a97410] = 144
   [junit4]  metadata   [0x7f1105a97410,0x7f1105a97a20] = 1552
   [junit4]  scopes data[0x7f1105a97a20,0x7f1105a9b278] = 14424
   [junit4]  scopes pcs [0x7f1105a9b278,0x7f1105a9cfb8] = 7488
   [junit4]  dependencies   [0x7f1105a9cfb8,0x7f1105a9d1a0] = 488
   [junit4]  handler table  [0x7f1105a9d1a0,0x7f1105a9e388] = 4584
   [junit4]  nul chk table  [0x7f1105a9e388,0x7f1105a9e6f8] = 880
   [junit4] Compiled method (c2) 1999876 37505   !   4   
org.apache.solr.update.SolrCmdDistributor::submit (301 bytes)
   [junit4]  total in heap  [0x7f1105a8cf10,0x7f1105a9e6f8] = 71656
   [junit4]  relocation [0x7f1105a8d050,0x7f1105a8dd90] = 3392
   [junit4]  main code  [0x7f1105a8dda0,0x7f1105a96c60] = 36544
   [junit4]  stub code  [0x7f1105a96c60,0x7f1105a97380] = 1824
   [junit4]  oops   [0x7f1105a97380,0x7f1105a97410] = 144
   [junit4]  metadata   [0x7f1105a97410,0x7f1105a97a20] = 1552
   [junit4]  scopes data[0x7f1105a9
   [junit4] 7a20,0x7f1105a9b278] = 14424
   [junit4]  scopes pcs [0x7f1105a9b278,0x7f1105a9cfb8] = 7488
   [junit4]  dependencies   [0x7f1105a9cfb8,0x7f1105a9d1a0] = 488
   [junit4]  handler table  [0x7f1105a9d1a0,0x7f1105a9e388] = 4584
   [junit4]  nul chk table  [0x7f1105a9e388,0x7f1105a9e6f8] = 880
   [junit4] Compiled method (c2) 1999877 37505   !   4   
org.apache.solr.update.SolrCmdDistributor::submit (301 bytes)
   [junit4]  total in heap  [0x7f1105a8cf10,0x7f1105a9e6f8] = 71656
   [junit4]  relocation [0x7f1105a8d050,0x7f1105a8dd90] = 3392
   [junit4]  main code  [0x7f1105a8dda0,0x7f1105a96c60] = 36544
   [junit4]  stub code  [0x7f1105a96c60,0x7f1105a97380] = 1824
   [junit4]  oops   [0x7f1105a97380,0x7f1105a97410] = 144
   [junit4]  metadata   [0x7f1105a97410,0x7f1105a97a20] = 1552
   [junit4]  scopes data[0x7f1105a97a20,0x7f1105a9b278] = 14424
   [junit4]  scopes pcs [0x7f1105a9b278,0x7f1105a9cfb8] = 7488
   [junit4]  dependencies   [0x7f1105a9cfb8,0x7f1105a9d1a0] = 488
   [junit4]  handler table  [0x7f1105a9d1a0,0x7f1105a9e388] = 4584
   [junit4]  nul chk table  [0x7f1105a9e388,0x7f1105a9e6f8] = 880
   [junit4] Compiled method (c2) 1999877 37505   !   4   
org.apache.solr.update.SolrCmdDistributor::submit (301 bytes)
   [junit4]  total in heap  [0x7f1105a8cf10,0x7f1105a9e6f8] = 71656
   [junit4]  relocation [0x7f1105a8d050,0x7f1105a8dd90] = 3392
   [junit4]  main code  [0x7f1105a8dda0,0x7f1105a96c60] = 36544
   [junit4]  stub code  [0x7f1105a96c60,0x7f1105a97380] = 1824
   [junit4]  oops   [0x7f1105a97380,0x7f1105a97410] = 144
   [junit4]  metadata   [0x7f1105a97410,0x7f1105a97a20] = 1552
   [junit4]  scopes data[0x7f1105a97a20,0x7f1105a9b278] = 14424
   [junit4]  scopes pcs [0x7f1105a9b278,0x7f1105a9cfb8] = 7488
   [junit4]  dependencies   

[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 44 - Failure!

2016-03-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/44/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdateWithExplicitRefreshLazy

Error Message:
Could not find collection : c1

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : c1
at 
__randomizedtesting.SeedInfo.seed([214EB7804F22BBDB:4A0117FD362D66E1]:0)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:170)
at 
org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdate(ZkStateReaderTest.java:135)
at 
org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdateWithExplicitRefreshLazy(ZkStateReaderTest.java:46)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11336 lines...]
   [junit4] Suite: org.apache.solr.cloud.overseer.ZkStateReaderTest
   [junit4]   2> 

[jira] [Commented] (SOLR-8914) ZkStateReader's refreshLiveNodes(Watcher) is not thread safe

2016-03-30 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219154#comment-15219154
 ] 

Erick Erickson commented on SOLR-8914:
--

Nope, completely re-did the test and got another failure on run 31. Here are 
the failure snippets

[junit4] FAILURE 94.1s | TestStressLiveNodes.testStress <<<
   [junit4]> Throwable #1: java.lang.AssertionError: iter1263: 
[127.0.0.1:53372_solr, thrasher-T1262_0-0] expected:<1> but was:<2>
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([3E42073F9773C752:2B597AFE14843F29]:0)
   [junit4]>at 
org.apache.solr.cloud.TestStressLiveNodes.testStress(TestStressLiveNodes.java:137)
   [junit4]>at java.lang.Thread.run(Thread.java:745)
   

   ***

  [junit4] FAILURE 75.7s | TestStressLiveNodes.testStress <<<
   [junit4]> Throwable #1: java.lang.AssertionError: iter2373 6 != 1 
expected:<[127.0.0.1:61240_solr, thrasher-T2373_0-0, thrasher-T2373_1-0, 
thrasher-T2373_2-0, thrasher-T2373_3-0, thrasher-T2373_4-0]> but 
was:<[127.0.0.1:61240_solr]>
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([20645E3B72A746D3:357F23FAF150BEA8]:0)
   [junit4]>at 
org.apache.solr.cloud.TestStressLiveNodes.testStress(TestStressLiveNodes.java:200)
   [junit4]>at java.lang.Thread.run(Thread.java:745)


**


   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestStressLiveNodes 
-Dtests.method=testStress -Dtests.seed=E803236A2774DE4C -Dtests.nightly=true 
-Dtests.slow=true -Dtests.locale=sr-RS -Dtests.timezone=Pacific/Majuro 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8
   [junit4] ERROR   62.7s | TestStressLiveNodes.testStress <<<
   [junit4]> Throwable #1: org.apache.solr.common.SolrException: 
java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 
127.0.0.1:62845/solr within 3 ms
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([E803236A2774DE4C:FD185EABA4832637]:0)
   [junit4]>at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:181)
   [junit4]>at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:115)
   [junit4]>at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:110)
   [junit4]>at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:97)
   [junit4]>at 
org.apache.solr.cloud.TestStressLiveNodes.newSolrZkClient(TestStressLiveNodes.java:87)
   [junit4]>at 
org.apache.solr.cloud.TestStressLiveNodes.access$000(TestStressLiveNodes.java:54)
   [junit4]>at 
org.apache.solr.cloud.TestStressLiveNodes$LiveNodeTrasher.<init>(TestStressLiveNodes.java:225)
   [junit4]>at 
org.apache.solr.cloud.TestStressLiveNodes.testStress(TestStressLiveNodes.java:174)
   [junit4]>at java.lang.Thread.run(Thread.java:745)
   [junit4]> Caused by: java.util.concurrent.TimeoutException: Could not 
connect to ZooKeeper 127.0.0.1:62845/solr within 3 ms
   [junit4]>at 
org.apache.solr.common.cloud.ConnectionManager.waitForConnected(ConnectionManager.java:228)
   [junit4]>at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:173)
   [junit4]>... 46 more
   [ju

   ***

   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestStressLiveNodes 
-Dtests.method=testStress -Dtests.seed=6B315674F529C1E2 -Dtests.nightly=true 
-Dtests.slow=true -Dtests.locale=ar-IQ -Dtests.timezone=Europe/Bratislava 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8
   [junit4] ERROR   113s | TestStressLiveNodes.testStress <<<
   [junit4]> Throwable #1: org.apache.solr.common.SolrException: 
java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 
127.0.0.1:62849/solr within 3 ms
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([6B315674F529C1E2:7E2A2BB576DE3999]:0)
   [junit4]>at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:181)
   [junit4]>at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:115)
   [junit4]>at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:110)
   [junit4]>at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:97)
   [junit4]>at 
org.apache.solr.cloud.TestStressLiveNodes.newSolrZkClient(TestStressLiveNodes.java:87)
   [junit4]>at 
org.apache.solr.cloud.TestStressLiveNodes.access$000(TestStressLiveNodes.java:54)
   [junit4]>at 
org.apache.solr.cloud.TestStressLiveNodes$LiveNodeTrasher.<init>(TestStressLiveNodes.java:225)
   [junit4]>at 
org.apache.solr.cloud.TestStressLiveNodes.testStress(TestStressLiveNodes.java:174)
   [junit4]>at java.lang.Thread.run(Thread.java:745)
   [junit4]> Caused by: java.util.concurrent.TimeoutException: Could 

[jira] [Commented] (SOLR-8812) ExtendedDismaxQParser (edismax) ignores Boolean OR when q.op=AND

2016-03-30 Thread Greg Pendlebury (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219133#comment-15219133
 ] 

Greg Pendlebury commented on SOLR-8812:
---

Thanks. Hopefully that is ok. I just installed git and started cloning trunk... 
now to upgrade to Java 8.

I think it is all working as intended; it is just that there is a confusing 
legacy of not having had to worry about what mm was set to for some use cases. 
SOLR-2649 will force people to check what the parameters are, but all queries 
are now supported.

It would be nice if it was less disruptive, but given that pre-patch there was 
no way to get edismax to do certain queries, no matter what parameters you set, 
I think it is still an improvement.

> ExtendedDismaxQParser (edismax) ignores Boolean OR when q.op=AND
> 
>
> Key: SOLR-8812
> URL: https://issues.apache.org/jira/browse/SOLR-8812
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers
>Affects Versions: 5.5
>Reporter: Ryan Steinberg
>Assignee: Erick Erickson
>Priority: Blocker
> Fix For: 6.0, 5.5.1
>
> Attachments: SOLR-8812.patch
>
>
> The edismax parser ignores Boolean OR in queries when q.op=AND. This behavior 
> is new to Solr 5.5.0 and an unexpected major change.
> Example:
>   "q": "id:12345 OR zz",
>   "defType": "edismax",
>   "q.op": "AND",
> where "12345" is a known document ID and "zz" is a string NOT present 
> in my data
> Version 5.5.0 produces zero results:
> "rawquerystring": "id:12345 OR zz",
> "querystring": "id:12345 OR zz",
> "parsedquery": "(+((id:12345 
> DisjunctionMaxQuery((text:zz)))~2))/no_coord",
> "parsedquery_toString": "+((id:12345 (text:zz))~2)",
> "explain": {},
> "QParser": "ExtendedDismaxQParser"
> Version 5.4.0 produces one result as expected
>   "rawquerystring": "id:12345 OR zz",
> "querystring": "id:12345 OR zz",
> "parsedquery": "(+(id:12345 
> DisjunctionMaxQuery((text:zz/no_coord",
> "parsedquery_toString": "+(id:12345 (text:zz))"
> "explain": {},
> "QParser": "ExtendedDismaxQParser"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8914) ZkStateReader's refreshLiveNodes(Watcher) is not thread safe

2016-03-30 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219119#comment-15219119
 ] 

Erick Erickson commented on SOLR-8914:
--

I had 4/100 iterations fail. Let me try it all again to ensure that I applied 
the right patch, saved the file, and all that.

> ZkStateReader's refreshLiveNodes(Watcher) is not thread safe
> 
>
> Key: SOLR-8914
> URL: https://issues.apache.org/jira/browse/SOLR-8914
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
> Attachments: SOLR-8914.patch, SOLR-8914.patch, SOLR-8914.patch, 
> SOLR-8914.patch, jenkins.thetaphi.de_Lucene-Solr-6.x-Solaris_32.log.txt, 
> live_node_mentions_port56361_with_threadIds.log.txt, 
> live_nodes_mentions.log.txt
>
>
> Jenkins encountered a failure in TestTolerantUpdateProcessorCloud over the 
> weekend
> {noformat}
> http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/32/consoleText
> Checking out Revision c46d7686643e7503304cb35dfe546bce9c6684e7 
> (refs/remotes/origin/branch_6x)
> Using Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC
> {noformat}
> The failure happened during the static setup of the test, when a 
> MiniSolrCloudCluster & several clients are initialized -- before any code 
> related to TolerantUpdateProcessor is ever used.
> I can't reproduce this, or really make sense of what i'm (not) seeing here in 
> the logs, so i'm filing this jira with my analysis in the hopes that someone 
> else can help make sense of it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8812) ExtendedDismaxQParser (edismax) ignores Boolean OR when q.op=AND

2016-03-30 Thread Ryan Steinberg (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219120#comment-15219120
 ] 

Ryan Steinberg commented on SOLR-8812:
--

I tested explicitly setting mm to 0 and all of my tests passed. I also added 
mm=0 to the failing test case from [~janhoy] and it passed too.

[~gpendleb], I think your suspicion about mm defaulting to 100% is correct.
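To make the workaround concrete, here is a minimal SolrJ sketch (not from the 
issue; the query string is taken from the example in the description below) 
that keeps q.op=AND but sets mm explicitly:

{code}
// Minimal sketch of the workaround discussed above: keep q.op=AND but set
// mm=0 explicitly so explicit OR clauses are honored (per the discussion, mm
// otherwise appears to default to 100% when q.op=AND in 5.5).
import org.apache.solr.client.solrj.SolrQuery;

public class EdismaxMmWorkaround {
  public static SolrQuery build() {
    SolrQuery q = new SolrQuery("id:12345 OR zz");
    q.set("defType", "edismax");
    q.set("q.op", "AND");
    q.set("mm", "0");
    return q;
  }
}
{code}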

> ExtendedDismaxQParser (edismax) ignores Boolean OR when q.op=AND
> 
>
> Key: SOLR-8812
> URL: https://issues.apache.org/jira/browse/SOLR-8812
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers
>Affects Versions: 5.5
>Reporter: Ryan Steinberg
>Assignee: Erick Erickson
>Priority: Blocker
> Fix For: 6.0, 5.5.1
>
> Attachments: SOLR-8812.patch
>
>
> The edismax parser ignores Boolean OR in queries when q.op=AND. This behavior 
> is new to Solr 5.5.0 and an unexpected major change.
> Example:
>   "q": "id:12345 OR zz",
>   "defType": "edismax",
>   "q.op": "AND",
> where "12345" is a known document ID and "zz" is a string NOT present 
> in my data
> Version 5.5.0 produces zero results:
> "rawquerystring": "id:12345 OR zz",
> "querystring": "id:12345 OR zz",
> "parsedquery": "(+((id:12345 
> DisjunctionMaxQuery((text:zz)))~2))/no_coord",
> "parsedquery_toString": "+((id:12345 (text:zz))~2)",
> "explain": {},
> "QParser": "ExtendedDismaxQParser"
> Version 5.4.0 produces one result as expected
>   "rawquerystring": "id:12345 OR zz",
> "querystring": "id:12345 OR zz",
> "parsedquery": "(+(id:12345 
> DisjunctionMaxQuery((text:zz/no_coord",
> "parsedquery_toString": "+(id:12345 (text:zz))"
> "explain": {},
> "QParser": "ExtendedDismaxQParser"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6938) Convert build to work with Git rather than SVN.

2016-03-30 Thread Ryan Ernst (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219107#comment-15219107
 ] 

Ryan Ernst commented on LUCENE-6938:


It looks like the branch detection logic isn't working with the naming 
conventions we now use in git. This is the current logic, which worked in svn:

{code}
  if branchName == b'master':
return 'master'
  if branchName.startswith(b'branch_'):
return 'stable'
  if branchName.startswith(b'lucene_solr_'):
return 'release'
{code}

But we now name our release branches like {{branch_5_5}} instead of 
{{lucene_solr_5_5}}. So in this case, the script thought it was a stable 
branch, and thus the version was added as deprecated.
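For illustration only (the actual script is Python, as quoted above), a 
classification that accounts for the new git naming would need to distinguish 
stable branches like branch_6x from release branches like branch_5_5; the class 
and method names in this Java sketch are assumptions:

{code}
// Illustrative sketch of branch classification under the new git naming:
// "branch_6x" -> stable, "branch_5_5" -> release, "master" -> master.
import java.util.regex.Pattern;

public class BranchTypeSketch {
  private static final Pattern RELEASE = Pattern.compile("branch_\\d+_\\d+");
  private static final Pattern STABLE  = Pattern.compile("branch_\\d+x");

  public static String classify(String branchName) {
    if ("master".equals(branchName)) return "master";
    if (RELEASE.matcher(branchName).matches()) return "release";
    if (STABLE.matcher(branchName).matches()) return "stable";
    return "unknown";
  }

  public static void main(String[] args) {
    System.out.println(classify("branch_5_5")); // release (the old logic reported "stable")
    System.out.println(classify("branch_6x"));  // stable
  }
}
{code}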

> Convert build to work with Git rather than SVN.
> ---
>
> Key: LUCENE-6938
> URL: https://issues.apache.org/jira/browse/LUCENE-6938
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: master, 5.x
>
> Attachments: LUCENE-6938-1.patch, LUCENE-6938-wc-checker.patch, 
> LUCENE-6938-wc-checker.patch, LUCENE-6938.patch, LUCENE-6938.patch, 
> LUCENE-6938.patch, LUCENE-6938.patch
>
>
> We assume an SVN checkout in parts of our build and will need to move to 
> assuming a Git checkout.
> Patches against https://github.com/dweiss/lucene-solr-svn2git from 
> LUCENE-6933.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7096) UninvertingReader needs multi-valued points support

2016-03-30 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219098#comment-15219098
 ] 

Robert Muir commented on LUCENE-7096:
-

Ishan, I don't think it really impacts solr honestly. The thing about points is 
that they can do a lot more than old numerics. Not only do you expand to 128 
bits per value, but you can have up to 8 dimensions.

So really you have to decide what makes sense based on what you are trying to 
do. If you just want to cover single-dimensional primitive types, SortedNumeric 
is an obvious choice (it will not "lose" frequency in a doc, which points do 
not "lose" either). If it's a float or double, vs. an int or a long, you may want 
to handle it a little differently, e.g. use NumericUtils.sortableDoubleBits so 
that "sortedness" within a document has true meaning. This can make sort 
comparators based on min/max/median work in constant time. But if you go that 
way, I think e.g. faceting/grouping/etc code in solr would need to be modified 
to support that.
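(As a minimal sketch of that single-dimensional case, not from this issue and 
with illustrative field names, a multi-valued double could be indexed both as a 
1D point and as SortedNumericDocValues using the sortable-bits encoding:)

{code}
// Sketch only: index each double value as a 1D point (for range queries) and
// as SortedNumericDocValues encoded with the sortable-bits trick (for sorting
// and faceting). Decode at read time with NumericUtils.sortableLongToDouble.
import org.apache.lucene.document.Document;
import org.apache.lucene.document.DoublePoint;
import org.apache.lucene.document.SortedNumericDocValuesField;
import org.apache.lucene.util.NumericUtils;

public class MultiValuedDoubleFieldSketch {
  public static Document doc(String field, double... values) {
    Document doc = new Document();
    for (double v : values) {
      doc.add(new DoublePoint(field, v));
      doc.add(new SortedNumericDocValuesField(field,
          NumericUtils.doubleToSortableLong(v)));
    }
    return doc;
  }
}
{code}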

On the other hand, for InetAddressPoint (128-bit ipv6), SortedSet is a much 
better choice. Prefix compression basically maps to "compress by network" and 
is not just important for ipv6, but also important for mapped ipv4 data or 
mixed ipv4/ipv6 data (Points/BKD tree has this compression too). Otherwise it's 
really using 128 bits of storage per value. Sure, you pay a cost for ordinals, but 
ordinals are only 32-bit and will speed up both sorting and faceting (to me: a 
variant of range faceting like "facet-by-network" would be the obvious use case 
there, so ordinals work for that too).

With multidimensional data there is no clear answer. Currently LatLonPoint uses 
2 dimensions of 32-bits each for searching, and shoves them into a single 
64-bit SortedNumeric for sorting and two-phase iteration support. This works 
well because e.g. the typical hotspot in its sort comparator only works on the 
integer value most of the time anyway, and two-phase support is only needed for 
edge cases. 

For Geo3DPoint, who knows? I don't yet have a good understanding of how 
expensive its single-doc verification methods are (I think distance is cheaper 
than 2D, but polygon? dunno), how rare they are, or what would be the best way 
to represent them yet. Maybe its still better to store it in 2D 
(SortedNumeric), reuse that one's same sort comparator if the distance metrics 
are compatible :) If two-phase support is not needed this may work. If it's only 
needed in very rare cases we could even convert 2D->3D on the fly or optimize 
it so that conversion is very rare. But maybe this is too complex and a binary 
encoding would be better. 

So I'm hesitant to add new types to UninvertingReader for this reason, 
especially when values are larger and so on. If you really think it's the right 
way to go anyway, feel free to pick up my branch 
(https://github.com/rmuir/lucene-solr/tree/fc2) but it only contains API 
changes, no actual uninverting. I'm not really against it for primitive 1D 
numeric types, I'd just rather work on other things, and I feel like its not 
the best direction.

> UninvertingReader needs multi-valued points support
> ---
>
> Key: LUCENE-7096
> URL: https://issues.apache.org/jira/browse/LUCENE-7096
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-7096.patch
>
>
> It now supports the single valued case (deprecating the legacy encoding), but 
> the multi-valued stuff does not yet have a replacement.
> ideally we add a FC.getSortedNumeric(Parser..) that works from points. Unlike 
> postings, points never lose frequency within a field, so its the best fit. 
> when getDocCount() == size(), the field is single-valued, so this should call 
> getNumeric and box that in SortedNumeric, similar to the String case.
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8812) ExtendedDismaxQParser (edismax) ignores Boolean OR when q.op=AND

2016-03-30 Thread Greg Pendlebury (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219086#comment-15219086
 ] 

Greg Pendlebury commented on SOLR-8812:
---

I am happy to take a look at any issues, since I was involved in SOLR-2649. I 
need to get a new copy of the code first, but in the interim, can someone 
confirm that explicitly setting mm to 0 does not fix this? I believe mm 
defaults to 100%, so that may be the real culprit, as opposed to q.op=AND. 
Before SOLR-2649 was resolved, setting an OR operator would have caused mm to 
be ignored. Now it will use the default value unless you set it explicitly.

Our production servers are using 5.1 with SOLR-2649 applied, and we have 
q.op=AND, with perfectly functional OR operators and mm=0%. All of the obvious 
queries work, including the cases referenced above.

From memory there are a lot of subtle cliffs to fall off here, such as making 
sure we are talking about top level clauses and ultimately remembering that 
Solr does not use boolean logic... and there are some edge cases where it 
simply doesn't work the same way as the occurs flags. SHOULD vs OR is the main 
culprit.

> ExtendedDismaxQParser (edismax) ignores Boolean OR when q.op=AND
> 
>
> Key: SOLR-8812
> URL: https://issues.apache.org/jira/browse/SOLR-8812
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers
>Affects Versions: 5.5
>Reporter: Ryan Steinberg
>Assignee: Erick Erickson
>Priority: Blocker
> Fix For: 6.0, 5.5.1
>
> Attachments: SOLR-8812.patch
>
>
> The edismax parser ignores Boolean OR in queries when q.op=AND. This behavior 
> is new to Solr 5.5.0 and an unexpected major change.
> Example:
>   "q": "id:12345 OR zz",
>   "defType": "edismax",
>   "q.op": "AND",
> where "12345" is a known document ID and "zz" is a string NOT present 
> in my data
> Version 5.5.0 produces zero results:
> "rawquerystring": "id:12345 OR zz",
> "querystring": "id:12345 OR zz",
> "parsedquery": "(+((id:12345 
> DisjunctionMaxQuery((text:zz)))~2))/no_coord",
> "parsedquery_toString": "+((id:12345 (text:zz))~2)",
> "explain": {},
> "QParser": "ExtendedDismaxQParser"
> Version 5.4.0 produces one result as expected
>   "rawquerystring": "id:12345 OR zz",
> "querystring": "id:12345 OR zz",
> "parsedquery": "(+(id:12345 
> DisjunctionMaxQuery((text:zz/no_coord",
> "parsedquery_toString": "+(id:12345 (text:zz))"
> "explain": {},
> "QParser": "ExtendedDismaxQParser"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8923) HttpSolrCall assumes request was a GET when processing errors

2016-03-30 Thread Mike Drob (JIRA)
Mike Drob created SOLR-8923:
---

 Summary: HttpSolrCall assumes request was a GET when processing 
errors
 Key: SOLR-8923
 URL: https://issues.apache.org/jira/browse/SOLR-8923
 Project: Solr
  Issue Type: Bug
Reporter: Mike Drob
Priority: Minor


In SOLR-7681 we discovered that methods other than GET will work for Solr HTTP 
requests. However, in HttpSolrCall we assume that only a GET method is possible 
when returning an error message.

Looking into the specific usage, I don't think this assumption is dangerous, 
but it would be nice to extract the actual method and use that instead.

https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/servlet/HttpSolrCall.java#L621
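A hypothetical illustration of the suggested change (names here are assumptions, 
not the actual HttpSolrCall code): derive the method from the servlet request 
rather than hard-coding GET on the error path.

{code}
// Hypothetical sketch: map the servlet request's HTTP method onto the SolrJ
// METHOD enum instead of assuming GET, falling back to GET for anything the
// enum does not cover.
import javax.servlet.http.HttpServletRequest;
import org.apache.solr.client.solrj.SolrRequest;

public class RequestMethodSketch {
  public static SolrRequest.METHOD methodOf(HttpServletRequest req) {
    try {
      return SolrRequest.METHOD.valueOf(req.getMethod().toUpperCase());
    } catch (IllegalArgumentException e) {
      return SolrRequest.METHOD.GET;
    }
  }
}
{code}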



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: 6.0 Release

2016-03-30 Thread Jack Krupansky
Adrien, sure, Solr could take ownership of the so-called legacy (trie)
numerics, but then what would Elasticsearch do for its users when ES
upgrades to Lucene 7.0 which would then no longer have the code to handle
existing Trie-based indexes? Would ES then start depending on Solr?! Or is
ES planning on somehow automatically migrating ES 2.x indexes from Trie
numeric fields to PointValues?

Is ES going to do its own Trie to PV converter?

Will Solr also need to do its own Trie to PV converter for 7.0?

It does seem very odd to me that the Lucene guys have made this switch to
PV without any migration tool for existing Trie indexes.

-- Jack Krupansky

On Wed, Mar 30, 2016 at 6:56 PM, Adrien Grand  wrote:

> On Wed, Mar 30, 2016 at 22:56, Shawn Heisey  wrote:
>
>> These are the choices I see to address the problem, in decreasing order
>> of personal preference:
>>
>> 1) Revert LUCENE-6917 in the 6.x versions, move the deprecation to master.
>> 2) Delay the Lucene/Solr 6.0.0 release so Solr has new field classes and
>> updated examples.
>> 3) Keep to the schedule for the Lucene 6.0.0 release, but do NOT release
>> Solr 6.0.0.  Do a synchronized Lucene/Solr release of 6.0.1 or 6.1.0
>> with new Solr classes and examples.
>> 4) Move the deprecated Lucene classes to the Solr 7.0 package space
>> (still deprecated) as suggested by Adrien.  Fully remove them in 8.0.
>> 5) Compromise Solr's historical guarantees of major version backward
>> compatibility.
>>
>
> I am confused why you put 1 before 4: to me they are the same from a Solr
> perspective, and 4 is better than 1 from a Lucene perspective since it
> makes the path forward clearer?
>
> I think the only reasonable alternative to 4 is 2, which like you said
> would be disappointing. I don't think anybody wants 5, and 3 feels awkward
> to me. Detour: In the future I wonder whether we should consider having
> separate release cycles again. In addition to giving Solr more time to use
> new Lucene features like here, it would also remove the issue that we had
> when releasing 5.3.2 after 5.4.0, which makes perfect sense from a Solr
> perspective but not from Lucene, since it introduces blind spots in the
> testing of index backward compatibility.
>
> Back to the current issue, my preference would go for 4. I could be wrong
> but I think it is also consistent with the fact that Solr historically kept
> compatibility for a longer time than Lucene (eg. by still supporting
> IntField or allowing uninverting out of the box).
>


[jira] [Commented] (LUCENE-7094) spatial-extras BBoxStrategy and (confusingly!) PointVectorStrategy use legacy numeric encoding

2016-03-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219073#comment-15219073
 ] 

ASF subversion and git services commented on LUCENE-7094:
-

Commit 08dae30f738b5766b29600cf58dbaa74419ea0fa in lucene-solr's branch 
refs/heads/branch_6x from nknize
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=08dae30 ]

* LUCENE-7094: BBoxStrategy and PointVectorStrategy now support PointValues (in 
addition to legacy numeric trie).  Their APIs were changed a little and also 
made more consistent.  PointValues/Trie is optional, DocValues is optional, 
stored value is optional.


> spatial-extras BBoxStrategy and (confusingly!) PointVectorStrategy use legacy 
> numeric encoding
> --
>
> Key: LUCENE-7094
> URL: https://issues.apache.org/jira/browse/LUCENE-7094
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>Assignee: Nicholas Knize
>Priority: Blocker
> Fix For: master, 6.0
>
> Attachments: LUCENE-7094.patch, LUCENE-7094.patch, LUCENE_7094.patch, 
> LUCENE_7094.patch, LUCENE_7094.patch
>
>
> We need to deprecate these since they work on the old encoding and provide 
> points based alternatives.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8922) DocSetCollector can allocate massive garbage on large indexes

2016-03-30 Thread Jeff Wartes (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219066#comment-15219066
 ] 

Jeff Wartes commented on SOLR-8922:
---

Not yet. The major risk area would be the new ExpandingIntArray class, but it 
looked reasonable. It expands along powers of two, and although the add() and 
copyTo() calls are certainly more work than simple array assignment/retrieval, 
it still all looks like pretty simple stuff: a few ArrayList calls and some 
simple numeric comparisons, mostly. 
I'm more worried about bugs in there than performance; I don't know how well 
[~steff1193] tested this, although I got the impression he was using it in 
production at the time.

There may be better approaches, but this one was handy and I'm excited enough 
that I'm going to be doing a production test. I'll have more info in a day or 
two.

As a side note, I got a similar garbage-related improvement on an earlier test 
by simply hard-coding the smallSetSize to 10 - the expanding arrays 
approach only bought me another 3%. But of course, that 10 is very index 
and query set dependent, so I didn't want to offer it as a general case.
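
For readers who have not seen the patch, here is a rough sketch of the power-of-two expansion idea described above (an illustrative stand-in, not the patch's actual ExpandingIntArray):

{code}
import java.util.ArrayList;
import java.util.List;

// Illustrative: collect doc ids in chunks that double in size, so growth never
// copies existing data; copyTo() flattens the chunks at the end.
final class ExpandingIntBuffer {
  private final List<int[]> chunks = new ArrayList<>();
  private int[] current;
  private int upto;   // write position within the current chunk
  private int size;   // total number of values added

  void add(int value) {
    if (current == null || upto == current.length) {
      int nextSize = (current == null) ? 4 : current.length * 2; // 4, 8, 16, ...
      current = new int[nextSize];
      chunks.add(current);
      upto = 0;
    }
    current[upto++] = value;
    size++;
  }

  /** Copies all collected values, in insertion order, into a single array. */
  int[] copyTo() {
    int[] out = new int[size];
    int pos = 0;
    for (int[] chunk : chunks) {
      int len = Math.min(chunk.length, size - pos);
      System.arraycopy(chunk, 0, out, pos, len);
      pos += len;
    }
    return out;
  }
}
{code}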

> DocSetCollector can allocate massive garbage on large indexes
> -
>
> Key: SOLR-8922
> URL: https://issues.apache.org/jira/browse/SOLR-8922
> Project: Solr
>  Issue Type: Improvement
>Reporter: Jeff Wartes
> Attachments: SOLR-8922.patch
>
>
> After reaching a point of diminishing returns tuning the GC collector, I 
> decided to take a look at where the garbage was coming from. To my surprise, 
> it turned out that for my index and query set, almost 60% of the garbage was 
> coming from this single line:
> https://github.com/apache/lucene-solr/blob/94c04237cce44cac1e40e1b8b6ee6a6addc001a5/solr/core/src/java/org/apache/solr/search/DocSetCollector.java#L49
> This is due to the simple fact that I have 86M documents in my shards. 
> Allocating a scratch array big enough to track a result set 1/64th of my 
> index (1.3M) is also almost certainly excessive, considering my 99.9th 
> percentile hit count is less than 56k.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7096) UninvertingReader needs multi-valued points support

2016-03-30 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219063#comment-15219063
 ] 

Robert Muir commented on LUCENE-7096:
-

Exactly. I looked at this and I think it's too trappy, especially with multiple 
values: you have to either use OfflineSorter or a lot of RAM.

The issue is really unnecessary: if you are reindexing to change over to 
points, then you can add docvalues too while you are there, if you want to 
facet/sort. 

> UninvertingReader needs multi-valued points support
> ---
>
> Key: LUCENE-7096
> URL: https://issues.apache.org/jira/browse/LUCENE-7096
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-7096.patch
>
>
> It now supports the single-valued case (deprecating the legacy encoding), but 
> the multi-valued stuff does not yet have a replacement.
> Ideally we add a FC.getSortedNumeric(Parser..) that works from points. Unlike 
> postings, points never lose frequency within a field, so it's the best fit. 
> When getDocCount() == size(), the field is single-valued, so this should call 
> getNumeric and box that in SortedNumeric, similar to the String case.
>  
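
As a side note on the single-valued path mentioned in the description, the "box that in SortedNumeric" step can be pictured roughly like this; a minimal sketch against the pre-7.0 doc values API as I understand it, not the eventual patch:

{code}
import java.io.IOException;

import org.apache.lucene.index.DocValues;
import org.apache.lucene.index.LeafReader;
import org.apache.lucene.index.NumericDocValues;
import org.apache.lucene.index.SortedNumericDocValues;

// Sketch: expose an already single-valued numeric field through the
// multi-valued SortedNumericDocValues interface.
final class SingleValuedBoxing {

  static SortedNumericDocValues asSortedNumeric(LeafReader reader, String field)
      throws IOException {
    NumericDocValues single = reader.getNumericDocValues(field);
    if (single == null) {
      return null; // no per-document values for this field
    }
    // DocValues.singleton wraps a single-valued source as a multi-valued view.
    return DocValues.singleton(single, reader.getDocsWithField(field));
  }

  private SingleValuedBoxing() {}
}
{code}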



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7094) spatial-extras BBoxStrategy and (confusingly!) PointVectorStrategy use legacy numeric encoding

2016-03-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219059#comment-15219059
 ] 

ASF subversion and git services commented on LUCENE-7094:
-

Commit f9da8164483912cad40032387783f07e8c0cfc73 in lucene-solr's branch 
refs/heads/branch_6_0 from nknize
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f9da816 ]

* LUCENE-7094: BBoxStrategy and PointVectorStrategy now support PointValues (in 
addition to legacy numeric trie).  Their APIs were changed a little and also 
made more consistent.  PointValues/Trie is optional, DocValues is optional, 
stored value is optional.


> spatial-extras BBoxStrategy and (confusingly!) PointVectorStrategy use legacy 
> numeric encoding
> --
>
> Key: LUCENE-7094
> URL: https://issues.apache.org/jira/browse/LUCENE-7094
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>Assignee: Nicholas Knize
>Priority: Blocker
> Fix For: master, 6.0
>
> Attachments: LUCENE-7094.patch, LUCENE-7094.patch, LUCENE_7094.patch, 
> LUCENE_7094.patch, LUCENE_7094.patch
>
>
> We need to deprecate these since they work on the old encoding and provide 
> points based alternatives.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7094) spatial-extras BBoxStrategy and (confusingly!) PointVectorStrategy use legacy numeric encoding

2016-03-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219045#comment-15219045
 ] 

ASF subversion and git services commented on LUCENE-7094:
-

Commit e1b45568b41bf67b48baae7b8fec5793300a6814 in lucene-solr's branch 
refs/heads/master from nknize
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e1b4556 ]

* LUCENE-7094: BBoxStrategy and PointVectorStrategy now support PointValues (in 
addition to legacy numeric trie).  Their APIs were changed a little and also 
made more consistent.  PointValues/Trie is optional, DocValues is optional, 
stored value is optional.


> spatial-extras BBoxStrategy and (confusingly!) PointVectorStrategy use legacy 
> numeric encoding
> --
>
> Key: LUCENE-7094
> URL: https://issues.apache.org/jira/browse/LUCENE-7094
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>Assignee: Nicholas Knize
>Priority: Blocker
> Fix For: master, 6.0
>
> Attachments: LUCENE-7094.patch, LUCENE-7094.patch, LUCENE_7094.patch, 
> LUCENE_7094.patch, LUCENE_7094.patch
>
>
> We need to deprecate these since they work on the old encoding and provide 
> points based alternatives.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: 6.0 Release

2016-03-30 Thread Adrien Grand
On Wed, Mar 30, 2016 at 22:56, Shawn Heisey wrote:

> These are the choices I see to address the problem, in decreasing order
> of personal preference:
>
> 1) Revert LUCENE-6917 in the 6.x versions, move the deprecation to master.
> 2) Delay the Lucene/Solr 6.0.0 release so Solr has new field classes and
> updated examples.
> 3) Keep to the schedule for the Lucene 6.0.0 release, but do NOT release
> Solr 6.0.0.  Do a synchronized Lucene/Solr release of 6.0.1 or 6.1.0
> with new Solr classes and examples.
> 4) Move the deprecated Lucene classes to the Solr 7.0 package space
> (still deprecated) as suggested by Adrien.  Fully remove them in 8.0.
> 5) Compromise Solr's historical guarantees of major version backward
> compatibility.
>

I am confused why you put 1 before 4: to me they are the same from a Solr
perspective, and 4 is better than 1 from a Lucene perspective since it
makes the path forward clearer?

I think the only reasonable alternative to 4 is 2, which like you said
would be disappointing. I don't think anybody wants 5, and 3 feels awkward
to me. Detour: In the future I wonder whether we should consider having
separate release cycles again. In addition to giving Solr more time to use
new Lucene features like here, it would also remove the issue that we had
when releasing 5.3.2 after 5.4.0, which makes perfect sense from a Solr
perspective but not from Lucene, since it introduces blind spots in the
testing of index backward compatibility.

Back to the current issue, my preference would go for 4. I could be wrong
but I think it is also consistent with the fact that Solr historically kept
compatibility for a longer time than Lucene (eg. by still supporting
IntField or allowing uninverting out of the box).


[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+111-patched) - Build # 302 - Failure!

2016-03-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/302/
Java: 64bit/jdk-9-ea+111-patched -XX:-UseCompressedOops -XX:+UseSerialGC 
-XX:-CompactStrings

All tests passed

Build Log:
[...truncated 12394 lines...]
   [junit4] JVM J0: stdout was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/temp/junit4-J0-20160330_221255_102.sysout
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] #
   [junit4] # A fatal error has been detected by the Java Runtime Environment:
   [junit4] #
   [junit4] #  SIGSEGV (0xb) at pc=0x7f7db19fef80, pid=8542, tid=8554
   [junit4] #
   [junit4] # JRE version: Java(TM) SE Runtime Environment (9.0+111) (build 
9-ea+111)
   [junit4] # Java VM: Java HotSpot(TM) 64-Bit Server VM (9-ea+111, mixed mode, 
tiered, serial gc, linux-amd64)
   [junit4] # Problematic frame:
   [junit4] # V  [libjvm.so+0xa23f80]  
MarkSweep::IsAliveClosure::do_object_b(oopDesc*)+0x0
   [junit4] #
   [junit4] # No core dump will be written. Core dumps have been disabled. To 
enable core dumping, try "ulimit -c unlimited" before starting Java again
   [junit4] #
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J0/hs_err_pid8542.log
   [junit4] Compiled method (c2) 1924758 40778   4   
org.apache.solr.client.solrj.impl.HttpSolrClient::close (22 bytes)
   [junit4]  total in heap  [0x7f7d9dae1f10,0x7f7d9daefaa8] = 56216
   [junit4]  relocation [0x7f7d9dae2050,0x7f7d9dae27b8] = 1896
   [junit4]  main code  [0x7f7d9dae27c0,0x7f7d9dae94a0] = 27872
   [junit4]  stub code  [0x7f7d9dae94a0,0x7f7d9dae97c0] = 800
   [junit4]  oops   [0x7f7d9dae97c0,0x7f7d9dae9800] = 64
   [junit4]  metadata   [0x7f7d9dae9800,0x7f7d9dae9ba8] = 936
   [junit4]  scopes data[0x7f7d9dae9ba8,0x7f7d9daedd10] = 16744
   [junit4]  scopes pcs [0x7f7d9daedd10,0x7f7d9daeefa0] = 4752
   [junit4]  dependencies   [0x7f7d9daeefa0,0x7f7d9daef0c8] = 296
   [junit4]  handler table  [0x7f7d9daef0c8,0x7f7d9daef848] = 1920
   [junit4]  nul chk table  [0x7f7d9daef848,0x7f7d9daefaa8] = 608
   [junit4] Compiled method (c2) 1924758 40778   4   
org.apache.solr.client.solrj.impl.HttpSolrClient::close (22 bytes)
   [junit4]  total in heap  [0x7f7d9dae1f10,0x7f7d9daefaa8] = 56216
   [junit4]  relocation [0x7f7d9dae2050,0x7f7d9dae27b8] = 1896
   [junit4]  main code  [0x7f7d9dae27c0,0x7f7d9dae94a0] = 27872
   [junit4]  stub code  [0x7f7d9dae94a0,0x7f7d9dae97c0] = 800
   [junit4]  oops   [0x7f7d9dae97c0,0x7f7d9dae9800] = 64
   [junit4]  metadata   [0x7f7d9dae9800,0x7f7d9dae9ba8] = 936
   [junit4]  scopes data[0x7f7d9dae9ba8,0x7f7d9daedd10] = 16744
   [junit4]  scopes pcs [0x7f7d9daedd10,0x7f7d9daeefa0] = 4752
   [junit4]  dependencies   [0x7f7d9daeefa0,0x7f7d9daef0c8] = 296
   [junit4]  handler table  [0x7f7d9daef0c8,0x7f7d9daef848] = 1920
   [junit4]  nul chk table  [0x7f7d9daef848,0x7f7d9daefaa8] = 608
   [junit4] Compiled method (c2) 1924761 40778   4   
org.apache.solr.client.solrj.impl.HttpSolrClient::close (22 bytes)
   [junit4]  total in heap  [0x7f7d9dae1f10,0x7f7d9daefaa8] = 56216
   [junit4]  relocation [0x7f7d9dae2050,0x7f7d9dae27b8] = 1896
   [junit4]  main code  [0x7f7d9dae27c0,0x7f7d9dae94a0] = 27872
   [junit4]  stub code  [0x7f7d9dae94a0,0x7f7d9dae97c0] = 800
   [junit4]  oops   [0x7f7d9dae97c0,0x7f7d9dae9800] = 64
   [junit4]  metadata   [0x7f7d9dae9800,0x7f7d9dae9ba8] = 936
   [junit4]  scopes data[0x7f7d9dae9ba8,0x7f7d9daedd10] = 16744
   [junit4]  scopes pcs [0x7f7d9daedd10,0x7f7d9daeefa0] = 4752
   [junit4]  dependencies   [0x7f7d9daeefa0,0x7f7d9daef0c8] = 296
   [junit4]  handler table  [0x7f7d9daef0c8,0x7f7d9daef848] = 1920
   [junit4]  nul chk table  [0x7f7d9daef848,0x7f7d9daefaa8] = 608
   [junit4] Compiled method (c2) 1924761 40778   4   
org.apache.solr.client.solrj.impl.HttpSolrClient::close (22 bytes)
   [junit4]  total in heap  [0x7f7d9dae1f10,0x7f7d9daefaa8] = 56216
   [junit4]  relocation [0x7f7d9dae2050,0x7f7d9dae27b8] = 1896
   [junit4]  main code  [0x7f7d9dae27c0,0x7f7d9dae94a0] = 27872
   [junit4]  stub code  [0x7f7d9dae94a0,0x7f7d9dae97c0] = 800
   [junit4]  oops   [0x7f7d9dae97c0,0x7f7d9dae9800] = 64
   [junit4]  metadata   [0x7f7d9dae9800,0x7f7d9dae9ba8] = 936
   [junit4]  scopes data[0x7f7d9dae9ba8,0x7f7d9daedd10] = 16744
   [junit4]  scopes pcs [0x7f7d9daedd10,0x7f7d9daeefa0] = 4752
   [junit4]  dependencies   [0x7f7d9daeefa0,0x7f7d9daef0c8] = 296
   

[jira] [Commented] (SOLR-8922) DocSetCollector can allocate massive garbage on large indexes

2016-03-30 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219003#comment-15219003
 ] 

Erick Erickson commented on SOLR-8922:
--

Jeff: 

Thanks for pointing that out; we need to keep from generating garbage whenever 
possible. 

One question I have is whether you've got any good stats on how performant this 
is?



> DocSetCollector can allocate massive garbage on large indexes
> -
>
> Key: SOLR-8922
> URL: https://issues.apache.org/jira/browse/SOLR-8922
> Project: Solr
>  Issue Type: Improvement
>Reporter: Jeff Wartes
> Attachments: SOLR-8922.patch
>
>
> After reaching a point of diminishing returns tuning the GC collector, I 
> decided to take a look at where the garbage was coming from. To my surprise, 
> it turned out that for my index and query set, almost 60% of the garbage was 
> coming from this single line:
> https://github.com/apache/lucene-solr/blob/94c04237cce44cac1e40e1b8b6ee6a6addc001a5/solr/core/src/java/org/apache/solr/search/DocSetCollector.java#L49
> This is due to the simple fact that I have 86M documents in my shards. 
> Allocating a scratch array big enough to track a result set 1/64th of my 
> index (1.3M) is also almost certainly excessive, considering my 99.9th 
> percentile hit count is less than 56k.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-8901) Export data/tlog directory locations for every core in clusterstate

2016-03-30 Thread Hrishikesh Gadre (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hrishikesh Gadre resolved SOLR-8901.

Resolution: Duplicate

> Export data/tlog directory locations for every core in clusterstate
> ---
>
> Key: SOLR-8901
> URL: https://issues.apache.org/jira/browse/SOLR-8901
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Hrishikesh Gadre
>
> Currently the data and tlog directory path is not exposed as part of the 
> clusterstate.json. This information is important for implementing HDFS based 
> Solr snapshots. In case of HDFS based snapshots, the overseer will figure out 
> the correct HDFS path for the Solr collection and invoke HDFS API to capture 
> the snapshot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-master - Build # 973 - Still Failing

2016-03-30 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/973/

5 tests failed.
FAILED:  org.apache.solr.cloud.BasicDistributedZkTest.test

Error Message:
distrib-dup-test-chain-explicit: doc#8 has wrong value for regex_dup_B_s 
expected: but was:

Stack Trace:
org.junit.ComparisonFailure: distrib-dup-test-chain-explicit: doc#8 has wrong 
value for regex_dup_B_s expected: but was:
at 
__randomizedtesting.SeedInfo.seed([8BDF4EE617D22F3:80E9CB34CF814F0B]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at 
org.apache.solr.cloud.BasicDistributedZkTest.testUpdateProcessorsRunOnlyOnce(BasicDistributedZkTest.java:686)
at 
org.apache.solr.cloud.BasicDistributedZkTest.test(BasicDistributedZkTest.java:365)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:996)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:971)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-6938) Convert build to work with Git rather than SVN.

2016-03-30 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218995#comment-15218995
 ] 

Shawn Heisey commented on LUCENE-6938:
--

Apologies for the incorrect credit.

I've never done it, but ReleaseTodo seems to indicate that running addVersion 
is done as one of the early steps in the process.

For major and minor releases, I think this makes sense, because you'll be 
making a new branch early in the process.  At that point the parent branch 
should get a version bump, and the subsequent release work will be done in the 
new branch, presumably with the correct version number already present.

For bugfix releases, I think it makes more sense to run addVersion just after 
tagging the release -- one of the *last* steps.  That would have prevented the 
problem I ran into.  I ran "ant package" on branch_5_5 some time after 5.5.0 
was fully released, but I got 5.5.0-SNAPSHOT filenames instead of 
5.5.1-SNAPSHOT.

Regarding the brand-new bug-fix Version entry being deprecated:  This makes no 
sense to me, especially since it causes an immediate test failure in the test 
that the addVersion script runs after making changes.  I can see with "git 
diff" that the script did correctly add deprecation annotations to LUCENE_5_5_0.

If the addVersion script were being used to add 5.5.1 to one of the 6x branches 
or the master branch, then it WOULD make sense for the new entry to be 
deprecated.  Perhaps I was not using the script correctly for my use case, or 
the script needs some detection code or the option you mentioned to skip 
deprecation.


> Convert build to work with Git rather than SVN.
> ---
>
> Key: LUCENE-6938
> URL: https://issues.apache.org/jira/browse/LUCENE-6938
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: master, 5.x
>
> Attachments: LUCENE-6938-1.patch, LUCENE-6938-wc-checker.patch, 
> LUCENE-6938-wc-checker.patch, LUCENE-6938.patch, LUCENE-6938.patch, 
> LUCENE-6938.patch, LUCENE-6938.patch
>
>
> We assume an SVN checkout in parts of our build and will need to move to 
> assuming a Git checkout.
> Patches against https://github.com/dweiss/lucene-solr-svn2git from 
> LUCENE-6933.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5750) Backup/Restore API for SolrCloud

2016-03-30 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218993#comment-15218993
 ] 

Hrishikesh Gadre commented on SOLR-5750:


[~varunthacker] Is there a reason why we have introduced a core Admin API for 
Backup/restore instead of reusing the replication handler?

> Backup/Restore API for SolrCloud
> 
>
> Key: SOLR-5750
> URL: https://issues.apache.org/jira/browse/SOLR-5750
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Varun Thacker
> Fix For: 5.2, master
>
> Attachments: SOLR-5750.patch, SOLR-5750.patch, SOLR-5750.patch, 
> SOLR-5750.patch, SOLR-5750.patch, SOLR-5750.patch
>
>
> We should have an easy way to do backups and restores in SolrCloud. The 
> ReplicationHandler supports a backup command which can create snapshots of 
> the index but that is too little.
> The command should be able to backup:
> # Snapshots of all indexes or indexes from the leader or the shards
> # Config set
> # Cluster state
> # Cluster properties
> # Aliases
> # Overseer work queue?
> A restore should be able to completely restore the cloud i.e. no manual steps 
> required other than bringing nodes back up or setting up a new cloud cluster.
> SOLR-5340 will be a part of this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8922) DocSetCollector can allocate massive garbage on large indexes

2016-03-30 Thread Jeff Wartes (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218974#comment-15218974
 ] 

Jeff Wartes commented on SOLR-8922:
---

For my index (86M-doc shards and a per-shard 99.9th percentile query hit count 
of 56k), this reduced total garbage generation by 33%, which naturally also 
brought significant improvements in GC pause times and frequency.


> DocSetCollector can allocate massive garbage on large indexes
> -
>
> Key: SOLR-8922
> URL: https://issues.apache.org/jira/browse/SOLR-8922
> Project: Solr
>  Issue Type: Improvement
>Reporter: Jeff Wartes
> Attachments: SOLR-8922.patch
>
>
> After reaching a point of diminishing returns tuning the GC collector, I 
> decided to take a look at where the garbage was coming from. To my surprise, 
> it turned out that for my index and query set, almost 60% of the garbage was 
> coming from this single line:
> https://github.com/apache/lucene-solr/blob/94c04237cce44cac1e40e1b8b6ee6a6addc001a5/solr/core/src/java/org/apache/solr/search/DocSetCollector.java#L49
> This is due to the simple fact that I have 86M documents in my shards. 
> Allocating a scratch array big enough to track a result set 1/64th of my 
> index (1.3M) is also almost certainly excessive, considering my 99.9th 
> percentile hit count is less than 56k.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8922) DocSetCollector can allocate massive garbage on large indexes

2016-03-30 Thread Jeff Wartes (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Wartes updated SOLR-8922:
--
Attachment: SOLR-8922.patch

This is essentially the same patch as in SOLR-5444, but applies cleanly against 
(at least) 5.4 where I did some GC testing, and master.

> DocSetCollector can allocate massive garbage on large indexes
> -
>
> Key: SOLR-8922
> URL: https://issues.apache.org/jira/browse/SOLR-8922
> Project: Solr
>  Issue Type: Improvement
>Reporter: Jeff Wartes
> Attachments: SOLR-8922.patch
>
>
> After reaching a point of diminishing returns tuning the GC collector, I 
> decided to take a look at where the garbage was coming from. To my surprise, 
> it turned out that for my index and query set, almost 60% of the garbage was 
> coming from this single line:
> https://github.com/apache/lucene-solr/blob/94c04237cce44cac1e40e1b8b6ee6a6addc001a5/solr/core/src/java/org/apache/solr/search/DocSetCollector.java#L49
> This is due to the simple fact that I have 86M documents in my shards. 
> Allocating a scratch array big enough to track a result set 1/64th of my 
> index (1.3M) is also almost certainly excessive, considering my 99.9th 
> percentile hit count is less than 56k.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8914) ZkStateReader's refreshLiveNodes(Watcher) is not thread safe

2016-03-30 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218949#comment-15218949
 ] 

Steve Rowe commented on SOLR-8914:
--

With the full patch 0/100 iterations failed.  I'll beast another 100 iterations.

> ZkStateReader's refreshLiveNodes(Watcher) is not thread safe
> 
>
> Key: SOLR-8914
> URL: https://issues.apache.org/jira/browse/SOLR-8914
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
> Attachments: SOLR-8914.patch, SOLR-8914.patch, SOLR-8914.patch, 
> SOLR-8914.patch, jenkins.thetaphi.de_Lucene-Solr-6.x-Solaris_32.log.txt, 
> live_node_mentions_port56361_with_threadIds.log.txt, 
> live_nodes_mentions.log.txt
>
>
> Jenkins encountered a failure in TestTolerantUpdateProcessorCloud over the 
> weekend
> {noformat}
> http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/32/consoleText
> Checking out Revision c46d7686643e7503304cb35dfe546bce9c6684e7 
> (refs/remotes/origin/branch_6x)
> Using Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC
> {noformat}
> The failure happened during the static setup of the test, when a 
> MiniSolrCloudCluster & several clients are initialized -- before any code 
> related to TolerantUpdateProcessor is ever used.
> I can't reproduce this, or really make sense of what I'm (not) seeing here in 
> the logs, so I'm filing this JIRA with my analysis in the hopes that someone 
> else can help make sense of it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6938) Convert build to work with Git rather than SVN.

2016-03-30 Thread JIRA

[ 
https://issues.apache.org/jira/browse/LUCENE-6938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218913#comment-15218913
 ] 

Jan Høydahl commented on LUCENE-6938:
-

It was I who cherry-picked Mike's fix to the 5_5 branch, but the git mail bot 
logs the author, not the committer :)
I also saw the same thing regarding deprecation. I had to remove the deprecations 
manually, and then the tests pass. Should the script have a switch for skipping 
deprecation?
A bit unclear to me after reading the RM docs: should the bugfix version bump be 
performed by the RM after releasing the previous minor release, or be done by 
the RM for the bugfix release just prior to release?

> Convert build to work with Git rather than SVN.
> ---
>
> Key: LUCENE-6938
> URL: https://issues.apache.org/jira/browse/LUCENE-6938
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: master, 5.x
>
> Attachments: LUCENE-6938-1.patch, LUCENE-6938-wc-checker.patch, 
> LUCENE-6938-wc-checker.patch, LUCENE-6938.patch, LUCENE-6938.patch, 
> LUCENE-6938.patch, LUCENE-6938.patch
>
>
> We assume an SVN checkout in parts of our build and will need to move to 
> assuming a Git checkout.
> Patches against https://github.com/dweiss/lucene-solr-svn2git from 
> LUCENE-6933.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8922) DocSetCollector can allocate massive garbage on large indexes

2016-03-30 Thread Jeff Wartes (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218910#comment-15218910
 ] 

Jeff Wartes commented on SOLR-8922:
---

SOLR-5444 had a patch to help with this 
(SOLR-5444_ExpandingIntArray_DocSetCollector_4_4_0.patch), but it was mixed in 
with some other things and didn't get picked up with the other parts of the 
issue.

> DocSetCollector can allocate massive garbage on large indexes
> -
>
> Key: SOLR-8922
> URL: https://issues.apache.org/jira/browse/SOLR-8922
> Project: Solr
>  Issue Type: Improvement
>Reporter: Jeff Wartes
>
> After reaching a point of diminishing returns tuning the GC collector, I 
> decided to take a look at where the garbage was coming from. To my surprise, 
> it turned out that for my index and query set, almost 60% of the garbage was 
> coming from this single line:
> https://github.com/apache/lucene-solr/blob/94c04237cce44cac1e40e1b8b6ee6a6addc001a5/solr/core/src/java/org/apache/solr/search/DocSetCollector.java#L49
> This is due to the simple fact that I have 86M documents in my shards. 
> Allocating a scratch array big enough to track a result set 1/64th of my 
> index (1.3M) is also almost certainly excessive, considering my 99.9th 
> percentile hit count is less than 56k.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8922) DocSetCollector can allocate massive garbage on large indexes

2016-03-30 Thread Jeff Wartes (JIRA)
Jeff Wartes created SOLR-8922:
-

 Summary: DocSetCollector can allocate massive garbage on large 
indexes
 Key: SOLR-8922
 URL: https://issues.apache.org/jira/browse/SOLR-8922
 Project: Solr
  Issue Type: Improvement
Reporter: Jeff Wartes


After reaching a point of diminishing returns tuning the GC collector, I 
decided to take a look at where the garbage was coming from. To my surprise, it 
turned out that for my index and query set, almost 60% of the garbage was 
coming from this single line:

https://github.com/apache/lucene-solr/blob/94c04237cce44cac1e40e1b8b6ee6a6addc001a5/solr/core/src/java/org/apache/solr/search/DocSetCollector.java#L49

This is due to the simple fact that I have 86M documents in my shards. 
Allocating a scratch array big enough to track a result set 1/64th of my index 
(1.3M) is also almost certainly excessive, considering my 99.9th percentile hit 
count is less than 56k.
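
As a rough sanity check on those numbers (the 1/64 sizing is taken from the description above; the exact constant used by DocSetCollector may differ):

{code}
// Back-of-the-envelope arithmetic for the scratch-array allocation described above.
public final class ScratchArrayMath {
  public static void main(String[] args) {
    long maxDoc = 86_000_000L;          // documents per shard, from the report
    long scratchInts = maxDoc >> 6;     // ~1/64th of the index => ~1.34M ints
    long scratchBytes = scratchInts * Integer.BYTES;
    System.out.printf("scratch ints:  %,d%n", scratchInts);              // 1,343,750
    System.out.printf("scratch bytes: %,d (~%.1f MB per query)%n",
        scratchBytes, scratchBytes / (1024.0 * 1024.0));                 // ~5.1 MB
  }
}
{code}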



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7094) spatial-extras BBoxStrategy and (confusingly!) PointVectorStrategy use legacy numeric encoding

2016-03-30 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218895#comment-15218895
 ] 

David Smiley commented on LUCENE-7094:
--

Great; who shall commit?  I will tonight (~4 more hours) if I don't hear 
otherwise.  Suggested CHANGES.txt:

* LUCENE-7094: BBoxStrategy and PointVectorStrategy now support PointValues (in 
addition to legacy numeric trie).  Their APIs were changed a little and also 
made more consistent.  PointValues/Trie is optional, DocValues is optional, 
stored value is optional.  (Nick Knize, David Smiley)

> spatial-extras BBoxStrategy and (confusingly!) PointVectorStrategy use legacy 
> numeric encoding
> --
>
> Key: LUCENE-7094
> URL: https://issues.apache.org/jira/browse/LUCENE-7094
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>Assignee: Nicholas Knize
>Priority: Blocker
> Fix For: master, 6.0
>
> Attachments: LUCENE-7094.patch, LUCENE-7094.patch, LUCENE_7094.patch, 
> LUCENE_7094.patch, LUCENE_7094.patch
>
>
> We need to deprecate these since they work on the old encoding and provide 
> points based alternatives.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8812) ExtendedDismaxQParser (edismax) ignores Boolean OR when q.op=AND

2016-03-30 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-8812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218894#comment-15218894
 ] 

Jan Høydahl commented on SOLR-8812:
---

Perhaps, after reverting, we could spend some effort adding better test 
coverage to edismax, to prevent similar regressions in the future.
Then also add a bunch more tests for SOLR-2649, defining the wanted behavior for 
all kinds of corner cases.

> ExtendedDismaxQParser (edismax) ignores Boolean OR when q.op=AND
> 
>
> Key: SOLR-8812
> URL: https://issues.apache.org/jira/browse/SOLR-8812
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers
>Affects Versions: 5.5
>Reporter: Ryan Steinberg
>Assignee: Erick Erickson
>Priority: Blocker
> Fix For: 6.0, 5.5.1
>
> Attachments: SOLR-8812.patch
>
>
> The edismax parser ignores Boolean OR in queries when q.op=AND. This behavior 
> is new to Solr 5.5.0 and an unexpected major change.
> Example:
>   "q": "id:12345 OR zz",
>   "defType": "edismax",
>   "q.op": "AND",
> where "12345" is a known document ID and "zz" is a string NOT present 
> in my data
> Version 5.5.0 produces zero results:
> "rawquerystring": "id:12345 OR zz",
> "querystring": "id:12345 OR zz",
> "parsedquery": "(+((id:12345 
> DisjunctionMaxQuery((text:zz)))~2))/no_coord",
> "parsedquery_toString": "+((id:12345 (text:zz))~2)",
> "explain": {},
> "QParser": "ExtendedDismaxQParser"
> Version 5.4.0 produces one result as expected
>   "rawquerystring": "id:12345 OR zz",
> "querystring": "id:12345 OR zz",
> "parsedquery": "(+(id:12345 
> DisjunctionMaxQuery((text:zz/no_coord",
> "parsedquery_toString": "+(id:12345 (text:zz))"
> "explain": {},
> "QParser": "ExtendedDismaxQParser"
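
For reference, the reproduction above can be driven from SolrJ with a few lines; a hedged sketch (the collection name and base URL are assumptions, the query parameters come straight from the description):

{code}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class EdismaxQopRepro {
  public static void main(String[] args) throws Exception {
    // 5.x-era constructor; base URL and collection name are assumptions.
    try (HttpSolrClient client =
        new HttpSolrClient("http://localhost:8983/solr/collection1")) {
      SolrQuery q = new SolrQuery("id:12345 OR zz");
      q.set("defType", "edismax");
      q.set("q.op", "AND");
      q.set("debugQuery", "true");
      QueryResponse rsp = client.query(q);
      // On 5.4.0 this finds the known document; on 5.5.0 the parsed query gains
      // a ~2 minimum-should-match and returns zero results.
      System.out.println("numFound:    " + rsp.getResults().getNumFound());
      System.out.println("parsedquery: " + rsp.getDebugMap().get("parsedquery"));
    }
  }
}
{code}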



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: 6.0 Release

2016-03-30 Thread Jack Krupansky
Does Solr need its own MIGRATE.txt file? Maybe not for 6.0, but probably
for 6.x when any Solr field type changes occur.

Right now, MIGRATE.txt belongs to the lucene package:
https://github.com/apache/lucene-solr/blob/branch_6_0/lucene/MIGRATE.txt

The migration comments related to numeric fields don't really say anything
related to Solr users directly:

"# PointValues replaces NumericField (LUCENE-6917)

PointValues provides faster indexing and searching, a smaller
index size, and less heap used at search time. See
org.apache.lucene.index.PointValues
for an introduction.

Legacy numeric encodings from previous versions of Lucene are
deprecated as LegacyIntField, LegacyFloatField, LegacyLongField, and
LegacyDoubleField,
and can be searched with LegacyNumericRangeQuery."

I mean, Solr users deal with solr.TrieIntField, et al, which are Solr
classes that internally map to Lucene field classes.

Since the 6.0 Solr code has already been "migrated" to use the
LegacyTrieXxxField types, which should be fully compatible with 5.x, Solr
should initially be fine with 6.0.

See the createField method of Solr's TrieField:
https://github.com/apache/lucene-solr/blob/branch_6_0/solr/core/src/java/org/apache/solr/schema/TrieField.java
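
At the Lucene level, the change that MIGRATE.txt entry describes boils down to swapping the field and query types; a minimal sketch (the "year" field name is made up):

{code}
import org.apache.lucene.document.Document;
import org.apache.lucene.document.IntPoint;
import org.apache.lucene.search.Query;

// Minimal sketch of what "PointValues replaces NumericField" means in code: the
// deprecated LegacyIntField/LegacyNumericRangeQuery pair gives way to IntPoint
// and its factory queries.
final class PointsMigrationSketch {

  static Document newStyleDoc(int year) {
    Document doc = new Document();
    doc.add(new IntPoint("year", year));   // indexed in the points (BKD) structure
    return doc;
  }

  static Query newStyleRangeQuery(int from, int to) {
    return IntPoint.newRangeQuery("year", from, to); // bounds are inclusive
  }

  private PointsMigrationSketch() {}
}
{code}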


-- Jack Krupansky

On Wed, Mar 30, 2016 at 4:56 PM, Shawn Heisey  wrote:

> On 3/30/2016 10:43 AM, Adrien Grand wrote:
> > I think marking the legacy fields/queries as deprecated in Lucene in
> > 6.0 is the right thing to do in order to encourage users to migrate to
> > the new points API. If Solr needs to keep them around for 7.x, it
> > would be fine to move them to solr/ instead of lucene/ instead of a
> > hard removal. Given that it works on top of the postings API, it would
> > not break.
>
> Encouraging users to migrate to better APIs is a great idea.
>
> I'm sure we'll update Solr's examples to use new classes that leverage
> the points API, but at the moment, we do not have time before the 6.0.0
> release to do this.  This puts Solr in an awkward position.
>
> These are the choices I see to address the problem, in decreasing order
> of personal preference:
>
> 1) Revert LUCENE-6917 in the 6.x versions, move the deprecation to master.
> 2) Delay the Lucene/Solr 6.0.0 release so Solr has new field classes and
> updated examples.
> 3) Keep to the schedule for the Lucene 6.0.0 release, but do NOT release
> Solr 6.0.0.  Do a synchronized Lucene/Solr release of 6.0.1 or 6.1.0
> with new Solr classes and examples.
> 4) Move the deprecated Lucene classes to the Solr 7.0 package space
> (still deprecated) as suggested by Adrien.  Fully remove them in 8.0.
> 5) Compromise Solr's historical guarantees of major version backward
> compatibility.
>
> Option 1 should not be a major hardship for Lucene.  Users can still use
> the new API, and such use can be encouraged by examples, documentation,
> and community activity.  Version 7.0 would provide further
> encouragement, and I don't see any reason we can't work towards a quick
> 7.0 release and a relatively short lifetime for 6.x.  5.x had a much
> shorter lifetime than 4.x did.
>
> Option 2 would be disappointing for everyone, and even though 6.0 would
> probably STILL be the fastest major release in recent project history
> even with this delay, I am expecting significant pressure against it.
>
> Option 3 would be a major disappointment for Solr, but I think it would
> be better for the integrity of the project than our current trajectory.
>
> Option 4 is more than a little ugly, which is why it's in 4th place.
> Assuming the postings API would work correctly after the package move,
> it would fix the problem.
>
> I do not like option 5 at all.
>
> This community works by consensus, so my personal preference on the
> options above is not worth very much.
>
> Thanks,
> Shawn
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Commented] (LUCENE-7152) Refactor lucene-spatial GeoUtils to core

2016-03-30 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218875#comment-15218875
 ] 

David Smiley commented on LUCENE-7152:
--

-1 sorry.  I've been following LUCENE-7150 and it's unclear to me why GeoUtils 
should be added to core.  It seems reasonable that the spatial3d module have a 
dependency on the spatial module.  Likewise I expect spatial-extras will depend 
on the spatial module in the future too when a GeoPointSpatialStrategy gets 
written.  If, in the future, for reasons we might not foresee today, we would like 
to use GeoUtils in non-spatial modules (perhaps expressions?), then I could 
understand putting it in core.  I just think it's better organized to keep 
spatial code in our spatial modules.

> Refactor lucene-spatial GeoUtils to core
> 
>
> Key: LUCENE-7152
> URL: https://issues.apache.org/jira/browse/LUCENE-7152
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Nicholas Knize
>
> {{GeoUtils}} contains a lot of common spatial mathematics that can be reused 
> across multiple packages. As discussed in LUCENE-7150 this issue will 
> refactor GeoUtils to a new {{o.a.l.util.geo}} package in core that can be the 
> home for other reusable spatial utility classes required by field and query 
> implementations. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7362) TestReqParamsAPI failing in jenkins

2016-03-30 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218854#comment-15218854
 ] 

David Smiley commented on SOLR-7362:


Try running from within IntelliJ maybe?  It's more reproducible for me when I 
do that (who knows why).  And I have seen this on master too... but not 6.0.

> TestReqParamsAPI failing in jenkins
> ---
>
> Key: SOLR-7362
> URL: https://issues.apache.org/jira/browse/SOLR-7362
> Project: Solr
>  Issue Type: Bug
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> {noformat}
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4645/
> Java: 32bit/jdk1.8.0_40 -server -XX:+UseSerialGC
> 1 tests failed.
> FAILED:  org.apache.solr.handler.TestReqParamsAPI.test
> Error Message:
> Could not get expected value  'null' for path 'response/params/y/p' full 
> output: {   "responseHeader":{ "status":0, "QTime":1},   "response":{ 
> "znodeVersion":3, "params":{   "x":{ "a":"A val", 
> "b":"B val", "":{"v":0}},   "y":{ "p":"P val", 
> "q":"Q val", "":{"v":0}
> Stack Trace:
> java.lang.AssertionError: Could not get expected value  'null' for path 
> 'response/params/y/p' full output: {
>   "responseHeader":{
> "status":0,
> "QTime":1},
>   "response":{
> "znodeVersion":3,
> "params":{
>   "x":{
> "a":"A val",
> "b":"B val",
> "":{"v":0}},
>   "y":{
> "p":"P val",
> "q":"Q val",
> "":{"v":0}
> at 
> __randomizedtesting.SeedInfo.seed([D0DB18ECE165C505:588F27364F99A8FD]:0)
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at 
> org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:405)
> at 
> org.apache.solr.handler.TestReqParamsAPI.testReqParams(TestReqParamsAPI.java:236)
> at 
> org.apache.solr.handler.TestReqParamsAPI.test(TestReqParamsAPI.java:71)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8851) ClassCastException in SearchHandler

2016-03-30 Thread Marius Grama (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218651#comment-15218651
 ] 

Marius Grama edited comment on SOLR-8851 at 3/30/16 8:57 PM:
-

The issue can be reproduced (quite often) on the _techproducts_ example collection 
with the following query:

{noformat}
http://localhost:8983/solr/techproducts/select?q=*:*=json=true=true={!tag=q1}manufacturedate_dt:[2006-01-01T00:00:00Z%20TO%20NOW]={!tag=q1}price:[0%20TO%20100]%20=1
{noformat}

Moreover, I notice now that also for the terms query:

{noformat}
http://localhost:8983/solr/techproducts/terms?terms.fl=name=1
{noformat}

the _result_ element is delivered only when the query times out (even though it 
is not needed):

{noformat}

   
 true
 0
 8762
  
  

  
  

{noformat}

I am further investigating whether SearchHandler lines 294-303 can be avoided:
{code}
SolrDocumentList r = (SolrDocumentList) 
rb.rsp.getValues().get("response");
if(r == null)
  r = new SolrDocumentList();
r.setNumFound(0);
rb.rsp.add("response", r);
if(rb.isDebug()) {
  NamedList debug = new NamedList();
  debug.add("explain", new NamedList());
  rb.rsp.add("debug", debug);
}
{code}


was (Author: mariusneo):
The issue can be reproduced (quite often) on the _techproducts_ example collection 
with the following query:

{noformat}
http://localhost:8983/solr/techproducts/select?q=*:*=json=true=true={!tag=q1}manufacturedate_dt:[2006-01-01T00:00:00Z%20TO%20NOW]={!tag=q1}price:[0%20TO%20100]%20=1
{noformat}

Moreover, I notice now that also for the terms query:

{noformat}
http://localhost:8983/solr/techproducts/terms?terms.fl=name=1
{noformat}

the _result_ element is delivered only when the query times out:

{noformat}

   
 true
 0
 8762
  
  

  
  

{noformat}

I am further investigating whether SearchHandler lines 294-303 can be avoided:
{code}
SolrDocumentList r = (SolrDocumentList) 
rb.rsp.getValues().get("response");
if(r == null)
  r = new SolrDocumentList();
r.setNumFound(0);
rb.rsp.add("response", r);
if(rb.isDebug()) {
  NamedList debug = new NamedList();
  debug.add("explain", new NamedList());
  rb.rsp.add("debug", debug);
}
{code}

> ClassCastException in SearchHandler
> ---
>
> Key: SOLR-8851
> URL: https://issues.apache.org/jira/browse/SOLR-8851
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.4.1
>Reporter: Pascal Chollet
>
> When there is a query timeout in non-distrub mode, {{SearchHandler}} is 
> throwing a {{ClassCastException}}:
> {code}java.lang.ClassCastException: org.apache.solr.response.ResultContext 
> cannot be cast to org.apache.solr.common.SolrDocumentList
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:293)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
>   ...{code}
> The problem can be reproduced if any component running after 
> {{QueryComponent}} times out - in our case it is {{FacetComponent}} which 
> throws a {{ExitingReaderException}}.
> {{SearchHandler:293}} expects a {{SolrDocumentList}} in {{rsp.response}}, but 
> {{QueryComponent}} did add a {{ResultContext}} instead.
> It looks like this is not a problem, if the {{QueryComponent}} itself is 
> timing out, as rsp.response is null in that case. It's only a problem if a 
> component after {{QueryComponent}} is timing out.
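
A hedged sketch of the kind of guard being investigated (an illustration only, not the committed fix): only reuse the existing response value when it really is a SolrDocumentList.

{code}
import org.apache.solr.common.SolrDocumentList;
import org.apache.solr.common.util.NamedList;
import org.apache.solr.response.SolrQueryResponse;

// Illustrative only: guard the cast performed on timeout so a ResultContext
// left behind by QueryComponent cannot trigger the ClassCastException above.
final class PartialResultsGuard {

  @SuppressWarnings({"rawtypes", "unchecked"})
  static void ensureEmptyResults(SolrQueryResponse rsp, boolean debug) {
    Object existing = rsp.getValues().get("response");
    if (!(existing instanceof SolrDocumentList)) {
      // Either nothing was added yet, or a ResultContext is present; install an
      // empty zero-hit list (a real fix might replace the entry rather than append).
      SolrDocumentList empty = new SolrDocumentList();
      empty.setNumFound(0);
      rsp.add("response", empty);
    }
    if (debug) {
      NamedList dbg = new NamedList();
      dbg.add("explain", new NamedList());
      rsp.add("debug", dbg);
    }
  }

  private PartialResultsGuard() {}
}
{code}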



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: 6.0 Release

2016-03-30 Thread Shawn Heisey
On 3/30/2016 10:43 AM, Adrien Grand wrote:
> I think marking the legacy fields/queries as deprecated in Lucene in
> 6.0 is the right thing to do in order to encourage users to migrate to
> the new points API. If Solr needs to keep them around for 7.x, it
> would be fine to move them to solr/ instead of lucene/ instead of a
> hard removal. Given that it works on top of the postings API, it would
> not break.

Encouraging users to migrate to better APIs is a great idea.

I'm sure we'll update Solr's examples to use new classes that leverage
the points API, but at the moment, we do not have time before the 6.0.0
release to do this.  This puts Solr in an awkward position.

These are the choices I see to address the problem, in decreasing order
of personal preference:

1) Revert LUCENE-6917 in the 6.x versions, move the deprecation to master.
2) Delay the Lucene/Solr 6.0.0 release so Solr has new field classes and
updated examples.
3) Keep to the schedule for the Lucene 6.0.0 release, but do NOT release
Solr 6.0.0.  Do a synchronized Lucene/Solr release of 6.0.1 or 6.1.0
with new Solr classes and examples.
4) Move the deprecated Lucene classes to the Solr 7.0 package space
(still deprecated) as suggested by Adrien.  Fully remove them in 8.0.
5) Compromise Solr's historical guarantees of major version backward
compatibility.

Option 1 should not be a major hardship for Lucene.  Users can still use
the new API, and such use can be encouraged by examples, documentation,
and community activity.  Version 7.0 would provide further
encouragement, and I don't see any reason we can't work towards a quick
7.0 release and a relatively short lifetime for 6.x.  5.x had a much
shorter lifetime than 4.x did.

Option 2 would be disappointing for everyone, and even though 6.0 would
probably STILL be the fastest major release in recent project history
even with this delay, I am expecting significant pressure against it.

Option 3 would be a major disappointment for Solr, but I think it would
be better for the integrity of the project than our current trajectory.

Option 4 is more than a little ugly, which is why it's in 4th place. 
Assuming the postings API would work correctly after the package move,
it would fix the problem.

I do not like option 5 at all.

This community works by consensus, so my personal preference on the
options above is not worth very much.

Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8851) ClassCastException in SearchHandler

2016-03-30 Thread Marius Grama (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218651#comment-15218651
 ] 

Marius Grama edited comment on SOLR-8851 at 3/30/16 8:55 PM:
-

The issue can be reproduced (quite often) on the _techproducts_ example collection 
with the following query:

{noformat}
http://localhost:8983/solr/techproducts/select?q=*:*=json=true=true={!tag=q1}manufacturedate_dt:[2006-01-01T00:00:00Z%20TO%20NOW]={!tag=q1}price:[0%20TO%20100]%20=1
{noformat}

Moreover, I notice now that also for the terms query:

{noformat}
http://localhost:8983/solr/techproducts/terms?terms.fl=name=1
{noformat}

the _result_ element is delivered only when the query times out:

{noformat}

   
 true
 0
 8762
  
  

  
  

{noformat}

I am further investigating whether SearchHandler lines 294-303 can be avoided:
{code}
SolrDocumentList r = (SolrDocumentList) 
rb.rsp.getValues().get("response");
if(r == null)
  r = new SolrDocumentList();
r.setNumFound(0);
rb.rsp.add("response", r);
if(rb.isDebug()) {
  NamedList debug = new NamedList();
  debug.add("explain", new NamedList());
  rb.rsp.add("debug", debug);
}
{code}


was (Author: mariusneo):
The issue can be reproduced (quite often) on the _techproducts_ example collection 
with the following query:

{noformat}
http://localhost:8983/solr/techproducts/select?q=*:*=json=true=true={!tag=q1}manufacturedate_dt:[2006-01-01T00:00:00Z%20TO%20NOW]={!tag=q1}price:[0%20TO%20100]%20=1
{noformat}

> ClassCastException in SearchHandler
> ---
>
> Key: SOLR-8851
> URL: https://issues.apache.org/jira/browse/SOLR-8851
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.4.1
>Reporter: Pascal Chollet
>
> When there is a query timeout in non-distrub mode, {{SearchHandler}} is 
> throwing a {{ClassCastException}}:
> {code}java.lang.ClassCastException: org.apache.solr.response.ResultContext 
> cannot be cast to org.apache.solr.common.SolrDocumentList
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:293)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
>   ...{code}
> The problem can be reproduced if any component running after 
> {{QueryComponent}} times out - in our case it is {{FacetComponent}} which 
> throws an {{ExitingReaderException}}.
> {{SearchHandler:293}} expects a {{SolrDocumentList}} in {{rsp.response}}, but 
> {{QueryComponent}} did add a {{ResultContext}} instead.
> It looks like this is not a problem, if the {{QueryComponent}} itself is 
> timing out, as rsp.response is null in that case. It's only a problem if a 
> component after {{QueryComponent}} is timing out.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7096) UninvertingReader needs multi-valued points support

2016-03-30 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218797#comment-15218797
 ] 

Uwe Schindler commented on LUCENE-7096:
---

Users must enable DocValues for faceting or sorting. As point values are a new 
field type in Solr, you can document that those fields must have DocValues to 
support faceting and sorting.
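
Concretely, at the Lucene level that means indexing the same value both as a
point and as doc values, along these lines (a minimal illustration; the field
name and helper class are made up):

{code}
import org.apache.lucene.document.Document;
import org.apache.lucene.document.IntPoint;
import org.apache.lucene.document.SortedNumericDocValuesField;
import org.apache.lucene.document.StoredField;

class PriceDocFactory {
  static Document makeDoc(int price) {
    Document doc = new Document();
    doc.add(new IntPoint("price", price));                     // point index: range/exact queries
    doc.add(new SortedNumericDocValuesField("price", price));  // doc values: sorting and faceting
    doc.add(new StoredField("price", price));                  // optional stored copy for retrieval
    return doc;
  }
}
{code}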

> UninvertingReader needs multi-valued points support
> ---
>
> Key: LUCENE-7096
> URL: https://issues.apache.org/jira/browse/LUCENE-7096
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-7096.patch
>
>
> It now supports the single valued case (deprecating the legacy encoding), but 
> the multi-valued stuff does not yet have a replacement.
> Ideally we add a FC.getSortedNumeric(Parser..) that works from points. Unlike 
> postings, points never lose frequency within a field, so it's the best fit. 
> When getDocCount() == size(), the field is single-valued, so this should call 
> getNumeric and box that in SortedNumeric, similar to the String case.
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7150) geo3d public APIs should match the 2D apis?

2016-03-30 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218802#comment-15218802
 ] 

Karl Wright commented on LUCENE-7150:
-

If they are 2D linear interpolations, then it is not unreasonable for there to 
be a hit count difference between true 3D polygons and linearly interpolated 
polygons.


> geo3d public APIs should match the 2D apis?
> ---
>
> Key: LUCENE-7150
> URL: https://issues.apache.org/jira/browse/LUCENE-7150
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Attachments: LUCENE-7150-sphere.patch, LUCENE-7150.patch, 
> LUCENE-7150.patch, LUCENE-7150.patch, LUCENE-7150.patch, LUCENE-7150.patch
>
>
> I'm struggling to benchmark the equivalent to 
> {{LatLonPoint.newDistanceQuery}} in the geo3d world.
> Ideally, I think we'd have a {{Geo3DPoint.newDistanceQuery}}?  And it would 
> take degrees, not radians, and radiusMeters, not an angle?
> And if I index and search using {{PlanetModel.SPHERE}} I think it should 
> ideally give the same results as {{LatLonPoint.newDistanceQuery}}, which uses 
> haversin.
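
For the unit handling this implies, the conversions themselves are simple; a
sketch under a spherical model (the class, method names and constant are
assumptions, not Lucene API):

{code}
class GeoUnits {
  // Degrees (user-facing) to radians (geo3d-internal).
  static double degreesToRadians(double degrees) {
    return Math.toRadians(degrees);
  }

  // Surface distance in meters to an angular radius in radians: theta = s / r.
  static double metersToRadians(double radiusMeters) {
    final double MEAN_EARTH_RADIUS_METERS = 6371008.7714; // spherical-Earth assumption
    return radiusMeters / MEAN_EARTH_RADIUS_METERS;
  }
}
{code}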



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3177 - Failure!

2016-03-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3177/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.BasicDistributedZkTest.test

Error Message:
Error from server at http://127.0.0.1:62265/collection1: Async exception during 
distributed update: Bad Request
request: 
http://127.0.0.1:62274/collection1/update?update.chain=distrib-dup-test-chain-explicit=TOLEADER=http%3A%2F%2F127.0.0.1%3A62265%2Fcollection1%2F=javabin=2

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:62265/collection1: Async exception during 
distributed update: Bad Request



request: 
http://127.0.0.1:62274/collection1/update?update.chain=distrib-dup-test-chain-explicit=TOLEADER=http%3A%2F%2F127.0.0.1%3A62265%2Fcollection1%2F=javabin=2
at 
__randomizedtesting.SeedInfo.seed([EEB7F563A1BD5614:66E3CAB90F413BEC]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:577)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at 
org.apache.solr.BaseDistributedSearchTestCase.add(BaseDistributedSearchTestCase.java:514)
at 
org.apache.solr.cloud.BasicDistributedZkTest.testUpdateProcessorsRunOnlyOnce(BasicDistributedZkTest.java:669)
at 
org.apache.solr.cloud.BasicDistributedZkTest.test(BasicDistributedZkTest.java:365)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Commented] (LUCENE-7150) geo3d public APIs should match the 2D apis?

2016-03-30 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218790#comment-15218790
 ] 

Michael McCandless commented on LUCENE-7150:


I would assume they are a 2D interpolation?  They operate in lat/lon space?

> geo3d public APIs should match the 2D apis?
> ---
>
> Key: LUCENE-7150
> URL: https://issues.apache.org/jira/browse/LUCENE-7150
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Attachments: LUCENE-7150-sphere.patch, LUCENE-7150.patch, 
> LUCENE-7150.patch, LUCENE-7150.patch, LUCENE-7150.patch, LUCENE-7150.patch
>
>
> I'm struggling to benchmark the equivalent to 
> {{LatLonPoint.newDistanceQuery}} in the geo3d world.
> Ideally, I think we'd have a {{Geo3DPoint.newDistanceQuery}}?  And it would 
> take degrees, not radians, and radiusMeters, not an angle?
> And if I index and search using {{PlanetModel.SPHERE}} I think it should 
> ideally give the same results as {{LatLonPoint.newDistanceQuery}}, which uses 
> haversin.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-7096) UninvertingReader needs multi-valued points support

2016-03-30 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218778#comment-15218778
 ] 

Ishan Chattopadhyaya edited comment on LUCENE-7096 at 3/30/16 8:24 PM:
---

bq. Maybe we should revert the single-valued points support as well?
[~rcmuir] & [~mikemccand], do you have any thoughts on how we could leverage 
points fields in Solr if single-valued and multi-valued support is not 
available through UninvertingReader? (FYI, SOLR-8396)


was (Author: ichattopadhyaya):
bq. Maybe we should revert the single-valued points support as well?
[~rcmuir] & [~mikemccand], do you have some thoughts, please, on how we could 
leverage points fields in Solr if single valued and multi valued support is not 
available through UninvertingReader?

> UninvertingReader needs multi-valued points support
> ---
>
> Key: LUCENE-7096
> URL: https://issues.apache.org/jira/browse/LUCENE-7096
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-7096.patch
>
>
> It now supports the single valued case (deprecating the legacy encoding), but 
> the multi-valued stuff does not yet have a replacement.
> Ideally we add a FC.getSortedNumeric(Parser..) that works from points. Unlike 
> postings, points never lose frequency within a field, so it's the best fit. 
> When getDocCount() == size(), the field is single-valued, so this should call 
> getNumeric and box that in SortedNumeric, similar to the String case.
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7096) UninvertingReader needs multi-valued points support

2016-03-30 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218778#comment-15218778
 ] 

Ishan Chattopadhyaya commented on LUCENE-7096:
--

bq. Maybe we should revert the single-valued points support as well?
[~rcmuir] & [~mikemccand], do you have any thoughts on how we could leverage 
points fields in Solr if single-valued and multi-valued support is not 
available through UninvertingReader?

> UninvertingReader needs multi-valued points support
> ---
>
> Key: LUCENE-7096
> URL: https://issues.apache.org/jira/browse/LUCENE-7096
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-7096.patch
>
>
> It now supports the single valued case (deprecating the legacy encoding), but 
> the multi-valued stuff does not yet have a replacement.
> Ideally we add a FC.getSortedNumeric(Parser..) that works from points. Unlike 
> postings, points never lose frequency within a field, so it's the best fit. 
> When getDocCount() == size(), the field is single-valued, so this should call 
> getNumeric and box that in SortedNumeric, similar to the String case.
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8396) Investigate PointField to replace NumericField types

2016-03-30 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218757#comment-15218757
 ] 

Ishan Chattopadhyaya commented on SOLR-8396:


I'm assuming weeks. There are some issues that we need to figure out how to get 
around, e.g. LUCENE-7096, LUCENE-7086.
If the patch in LUCENE-7096 is committed, the patch here will be broken due to 
{{Maybe we should revert the single-valued points support as well?}}. I am 
thinking of ways to proceed if/once that happens.

> Investigate PointField to replace NumericField types
> 
>
> Key: SOLR-8396
> URL: https://issues.apache.org/jira/browse/SOLR-8396
> Project: Solr
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-8396.patch, SOLR-8396.patch
>
>
> In LUCENE-6917, [~mikemccand] mentioned that DimensionalValues are better 
> than NumericFields in most respects. We should explore the benefits of using 
> it in Solr and hence, if appropriate, switch over to using them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7362) TestReqParamsAPI failing in jenkins

2016-03-30 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218737#comment-15218737
 ] 

Noble Paul commented on SOLR-7362:
--

I just beasted it with the same seed for 15 iterations and it is not failing. 
Will try it later.

> TestReqParamsAPI failing in jenkins
> ---
>
> Key: SOLR-7362
> URL: https://issues.apache.org/jira/browse/SOLR-7362
> Project: Solr
>  Issue Type: Bug
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> {noformat}
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4645/
> Java: 32bit/jdk1.8.0_40 -server -XX:+UseSerialGC
> 1 tests failed.
> FAILED:  org.apache.solr.handler.TestReqParamsAPI.test
> Error Message:
> Could not get expected value  'null' for path 'response/params/y/p' full 
> output: {   "responseHeader":{ "status":0, "QTime":1},   "response":{ 
> "znodeVersion":3, "params":{   "x":{ "a":"A val", 
> "b":"B val", "":{"v":0}},   "y":{ "p":"P val", 
> "q":"Q val", "":{"v":0}
> Stack Trace:
> java.lang.AssertionError: Could not get expected value  'null' for path 
> 'response/params/y/p' full output: {
>   "responseHeader":{
> "status":0,
> "QTime":1},
>   "response":{
> "znodeVersion":3,
> "params":{
>   "x":{
> "a":"A val",
> "b":"B val",
> "":{"v":0}},
>   "y":{
> "p":"P val",
> "q":"Q val",
> "":{"v":0}
> at 
> __randomizedtesting.SeedInfo.seed([D0DB18ECE165C505:588F27364F99A8FD]:0)
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at 
> org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:405)
> at 
> org.apache.solr.handler.TestReqParamsAPI.testReqParams(TestReqParamsAPI.java:236)
> at 
> org.apache.solr.handler.TestReqParamsAPI.test(TestReqParamsAPI.java:71)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8396) Investigate PointField to replace NumericField types

2016-03-30 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218717#comment-15218717
 ] 

Shawn Heisey commented on SOLR-8396:


I'm curious -- what kind of timeframe would be required to create new numeric 
classes, update the examples so the primary fieldTypes are no longer using the 
deprecated code, and run enough tests to be sure it's all solid?  Is it days, 
or weeks?

> Investigate PointField to replace NumericField types
> 
>
> Key: SOLR-8396
> URL: https://issues.apache.org/jira/browse/SOLR-8396
> Project: Solr
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-8396.patch, SOLR-8396.patch
>
>
> In LUCENE-6917, [~mikemccand] mentioned that DimensionalValues are better 
> than NumericFields in most respects. We should explore the benefits of using 
> it in Solr and hence, if appropriate, switch over to using them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8914) ZkStateReader's refreshLiveNodes(Watcher) is not thread safe

2016-03-30 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218692#comment-15218692
 ] 

Steve Rowe commented on SOLR-8914:
--

TestStressLiveNodes w/o the rest of the patch failed 84/100 iterations on my 
server.

> ZkStateReader's refreshLiveNodes(Watcher) is not thread safe
> 
>
> Key: SOLR-8914
> URL: https://issues.apache.org/jira/browse/SOLR-8914
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
> Attachments: SOLR-8914.patch, SOLR-8914.patch, SOLR-8914.patch, 
> SOLR-8914.patch, jenkins.thetaphi.de_Lucene-Solr-6.x-Solaris_32.log.txt, 
> live_node_mentions_port56361_with_threadIds.log.txt, 
> live_nodes_mentions.log.txt
>
>
> Jenkin's encountered a failure in TestTolerantUpdateProcessorCloud over the 
> weekend
> {noformat}
> http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/32/consoleText
> Checking out Revision c46d7686643e7503304cb35dfe546bce9c6684e7 
> (refs/remotes/origin/branch_6x)
> Using Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC
> {noformat}
> The failure happened during the static setup of the test, when a 
> MiniSolrCloudCluster & several clients are initialized -- before any code 
> related to TolerantUpdateProcessor is ever used.
> I can't reproduce this, or really make sense of what I'm (not) seeing here in 
> the logs, so I'm filing this JIRA with my analysis in the hopes that someone 
> else can help make sense of it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+111-patched) - Build # 16385 - Failure!

2016-03-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16385/
Java: 32bit/jdk-9-ea+111-patched -server -XX:+UseG1GC -XX:-CompactStrings

All tests passed

Build Log:
[...truncated 28 lines...]
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
git://git.apache.org/lucene-solr.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:766)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1022)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1053)
at hudson.scm.SCM.checkout(SCM.java:485)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1269)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:607)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
at hudson.model.Run.execute(Run.java:1738)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:410)
Caused by: hudson.plugins.git.GitException: 
org.eclipse.jgit.api.errors.TransportException: Connection reset
at 
org.jenkinsci.plugins.gitclient.JGitAPIImpl$2.execute(JGitAPIImpl.java:639)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:764)
... 11 more
Caused by: org.eclipse.jgit.api.errors.TransportException: Connection reset
at org.eclipse.jgit.api.FetchCommand.call(FetchCommand.java:139)
at 
org.jenkinsci.plugins.gitclient.JGitAPIImpl$2.execute(JGitAPIImpl.java:637)
... 12 more
Caused by: org.eclipse.jgit.errors.TransportException: Connection reset
at 
org.eclipse.jgit.transport.BasePackConnection.readAdvertisedRefs(BasePackConnection.java:182)
at 
org.eclipse.jgit.transport.TransportGitAnon$TcpFetchConnection.(TransportGitAnon.java:194)
at 
org.eclipse.jgit.transport.TransportGitAnon.openFetch(TransportGitAnon.java:120)
at 
org.eclipse.jgit.transport.FetchProcess.executeImp(FetchProcess.java:136)
at 
org.eclipse.jgit.transport.FetchProcess.execute(FetchProcess.java:122)
at org.eclipse.jgit.transport.Transport.fetch(Transport.java:1138)
at org.eclipse.jgit.api.FetchCommand.call(FetchCommand.java:130)
... 13 more
Caused by: java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:196)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
at org.eclipse.jgit.util.IO.readFully(IO.java:246)
at 
org.eclipse.jgit.transport.PacketLineIn.readLength(PacketLineIn.java:186)
at 
org.eclipse.jgit.transport.PacketLineIn.readString(PacketLineIn.java:138)
at 
org.eclipse.jgit.transport.BasePackConnection.readAdvertisedRefsImpl(BasePackConnection.java:195)
at 
org.eclipse.jgit.transport.BasePackConnection.readAdvertisedRefs(BasePackConnection.java:176)
... 19 more
ERROR: null
Retrying after 10 seconds
Fetching changes from the remote Git repository
Cleaning workspace
Checking out Revision 94c04237cce44cac1e40e1b8b6ee6a6addc001a5 
(refs/remotes/origin/master)
No emails were triggered.
[description-setter] Description set: Java: 32bit/jdk-9-ea+111-patched -server 
-XX:+UseG1GC -XX:-CompactStrings
[Lucene-Solr-master-Linux] $ 
/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/bin/ant 
"-Dargs=-server -XX:+UseG1GC -XX:-CompactStrings" jenkins-hourly
Buildfile: /home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml

jenkins-hourly:

-print-java-info:
[java-info] java version "9-ea"
[java-info] Java(TM) SE Runtime Environment (9-ea+111, Oracle Corporation)
[java-info] Java HotSpot(TM) Server VM (9-ea+111, Oracle Corporation)
[java-info] Test args: [-server -XX:+UseG1GC -XX:-CompactStrings]

clean:

clean:

clean:

ivy-availability-check:

ivy-fail:

ivy-configure:
[ivy:configure] :: Apache Ivy 2.3.0 - 20130110142753 :: 
http://ant.apache.org/ivy/ ::
[ivy:configure] :: loading settings :: file = 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/ivy-settings.xml

resolve-groovy:
[ivy:cachepath] :: resolving dependencies :: 
org.codehaus.groovy#groovy-all-caller;working
[ivy:cachepath] confs: [default]
[ivy:cachepath] found org.codehaus.groovy#groovy-all;2.4.6 in public
[ivy:cachepath] :: resolution report :: resolve 95ms :: artifacts dl 2ms
-
|  |modules||   artifacts   |
|   conf   | number| search|dwnlded|evicted|| number|dwnlded|

[jira] [Resolved] (SOLR-8903) Move SolrJ DateUtil to Extraction module as ExtractionDateUtil

2016-03-30 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-8903.

Resolution: Fixed

Thanks for the review, Steve!  Maybe I should start generating patches from the 
command line.  FWIW, looking at the patch I do see the leading {{solr/}}, but I 
know many patches I've seen out there have some sort of nominal a/ or b/ prefix 
in front.

> Move SolrJ DateUtil to Extraction module as ExtractionDateUtil
> --
>
> Key: SOLR-8903
> URL: https://issues.apache.org/jira/browse/SOLR-8903
> Project: Solr
>  Issue Type: Task
>  Components: SolrJ
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 6.0
>
> Attachments: SOLR_8903.patch, SOLR_8903_DateUtil_deprecate.patch
>
>
> SolrJ doesn't need a DateUtil class, particularly since we're on Java 8 and 
> can simply use {{new Date(Instant.parse(d).toEpochMilli());}} for parsing and 
> {{DateTimeFormatter.ISO_INSTANT.format(d.toInstant())}} for formatting.  Yes, 
> they are threadsafe.  I propose that we deprecate DateUtil from SolrJ, or 
> perhaps outright remove it from SolrJ for Solr 6.  The only SolrJ calls into 
> this class are to essentially use it to format or parse in the ISO standard 
> format.
> I also think we should move it to the "extraction" (SolrCell) module and name 
> it something like ExtractionDateUtil.  See, this class has a parse method 
> taking a list of formats, and there's a static list of them taken from 
> HttpClient's DateUtil.  DateUtil's original commit was SOLR-284 to be used by 
> SolrCell, and SolrCell wants this feature.  So I think it should move there.
> There are a few other uses:
> * Morphlines uses it, but morphlines depends on the extraction module so it 
> could just as well access it if we move it there.
> * The ValueAugmenterFactory (a doc transformer).  I really doubt whoever 
> added it realized that DateUtil.parseDate would try a bunch of formats 
> instead of only supporting the ISO canonical format.  So I think we should 
> just remove this reference.
> * DateFormatUtil.parseMathLenient falls back on this, and this method is in 
> turn called by just one caller -- DateValueSourceParser, registered as 
> {{ms}}.  I don't think we need leniency in use of this function query; values 
> given to ms should be computer generated in the ISO format.
> 
> edit: added ms().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8851) ClassCastException in SearchHandler

2016-03-30 Thread Marius Grama (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218651#comment-15218651
 ] 

Marius Grama commented on SOLR-8851:


The issue can be reproduced (quite often) on the _techproducts_ example collection 
with the following query:

http://localhost:8983/solr/techproducts/select?q=*:*=json=true=true={!tag=q1}manufacturedate_dt:[2006-01-01T00:00:00Z%20TO%20NOW]={!tag=q1}price:[0%20TO%20100]%20=1

> ClassCastException in SearchHandler
> ---
>
> Key: SOLR-8851
> URL: https://issues.apache.org/jira/browse/SOLR-8851
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.4.1
>Reporter: Pascal Chollet
>
> When there is a query timeout in non-distrib mode, {{SearchHandler}} is 
> throwing a {{ClassCastException}}:
> {code}java.lang.ClassCastException: org.apache.solr.response.ResultContext 
> cannot be cast to org.apache.solr.common.SolrDocumentList
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:293)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
>   ...{code}
> The problem can be reproduced if any component running after 
> {{QueryComponent}} times out - in our case it is {{FacetComponent}} which 
> throws an {{ExitingReaderException}}.
> {{SearchHandler:293}} expects a {{SolrDocumentList}} in {{rsp.response}}, but 
> {{QueryComponent}} did add a {{ResultContext}} instead.
> It looks like this is not a problem, if the {{QueryComponent}} itself is 
> timing out, as rsp.response is null in that case. It's only a problem if a 
> component after {{QueryComponent}} is timing out.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8851) ClassCastException in SearchHandler

2016-03-30 Thread Marius Grama (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218651#comment-15218651
 ] 

Marius Grama edited comment on SOLR-8851 at 3/30/16 7:23 PM:
-

The issue can be reproduced (quite often) on the _techproducts_ example collection 
with the following query:

{noformat}
http://localhost:8983/solr/techproducts/select?q=*:*=json=true=true={!tag=q1}manufacturedate_dt:[2006-01-01T00:00:00Z%20TO%20NOW]={!tag=q1}price:[0%20TO%20100]%20=1
{noformat}


was (Author: mariusneo):
Issue can be reproduced (quite often) on the _techproducts_  example collection 
with the following query :

http://localhost:8983/solr/techproducts/select?q=*:*=json=true=true={!tag=q1}manufacturedate_dt:[2006-01-01T00:00:00Z%20TO%20NOW]={!tag=q1}price:[0%20TO%20100]%20=1

> ClassCastException in SearchHandler
> ---
>
> Key: SOLR-8851
> URL: https://issues.apache.org/jira/browse/SOLR-8851
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.4.1
>Reporter: Pascal Chollet
>
> When there is a query timeout in non-distrib mode, {{SearchHandler}} is 
> throwing a {{ClassCastException}}:
> {code}java.lang.ClassCastException: org.apache.solr.response.ResultContext 
> cannot be cast to org.apache.solr.common.SolrDocumentList
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:293)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
>   ...{code}
> The problem can be reproduced if any component running after 
> {{QueryComponent}} times out - in our case it is {{FacetComponent}} which 
> throws an {{ExitingReaderException}}.
> {{SearchHandler:293}} expects a {{SolrDocumentList}} in {{rsp.response}}, but 
> {{QueryComponent}} did add a {{ResultContext}} instead.
> It looks like this is not a problem, if the {{QueryComponent}} itself is 
> timing out, as rsp.response is null in that case. It's only a problem if a 
> component after {{QueryComponent}} is timing out.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-2774) broken cut/paste code for dealing with parsing/formatting dates

2016-03-30 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley closed SOLR-2774.
--
Resolution: Invalid

SOLR-8903 removes the obsolete methods in DateUtil (now ExtractionDateUtil) so 
I think this issue is obsolete/invalid or just plain "fixed".

> broken cut/paste code for dealing with parsing/formatting dates
> ---
>
> Key: SOLR-2774
> URL: https://issues.apache.org/jira/browse/SOLR-2774
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Hoss Man
>
> DateUtils has methods cut/paste from DateField and TestResponseWriter which 
> are (in both cases) broken and since fixed in other issues.  That code either 
> needs to be removed or refactored so there is only a single (correct) copy of 
> it.
> see parent issue for more details



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8903) Move SolrJ DateUtil to Extraction module as ExtractionDateUtil

2016-03-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218630#comment-15218630
 ] 

ASF subversion and git services commented on SOLR-8903:
---

Commit 479f0e06343df51a8076dca386cc902f85c3c8fc in lucene-solr's branch 
refs/heads/branch_6_0 from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=479f0e0 ]

SOLR-8903: Move SolrJ DateUtil to contrib/extraction as ExtractionDateUtil.
And removed obsolete methods.
(cherry picked from commit 44e0ac3)


> Move SolrJ DateUtil to Extraction module as ExtractionDateUtil
> --
>
> Key: SOLR-8903
> URL: https://issues.apache.org/jira/browse/SOLR-8903
> Project: Solr
>  Issue Type: Task
>  Components: SolrJ
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 6.0
>
> Attachments: SOLR_8903.patch, SOLR_8903_DateUtil_deprecate.patch
>
>
> SolrJ doesn't need a DateUtil class, particularly since we're on Java 8 and 
> can simply use {{new Date(Instant.parse(d).toEpochMilli());}} for parsing and 
> {{DateTimeFormatter.ISO_INSTANT.format(d.toInstant())}} for formatting.  Yes, 
> they are threadsafe.  I propose that we deprecate DateUtil from SolrJ, or 
> perhaps outright remove it from SolrJ for Solr 6.  The only SolrJ calls into 
> this class are to essentially use it to format or parse in the ISO standard 
> format.
> I also think we should move it to the "extraction" (SolrCell) module and name 
> it something like ExtractionDateUtil.  See, this class has a parse method 
> taking a list of formats, and there's a static list of them taken from 
> HttpClient's DateUtil.  DateUtil's original commit was SOLR-284 to be used by 
> SolrCell, and SolrCell wants this feature.  So I think it should move there.
> There are a few other uses:
> * Morphlines uses it, but morphlines depends on the extraction module so it 
> could just as well access it if we move it there.
> * The ValueAugmenterFactory (a doc transformer).  I really doubt whoever 
> added it realized that DateUtil.parseDate would try a bunch of formats 
> instead of only supporting the ISO canonical format.  So I think we should 
> just remove this reference.
> * DateFormatUtil.parseMathLenient falls back on this, and this method is in 
> turn called by just one caller -- DateValueSourceParser, registered as 
> {{ms}}.  I don't think we need leniency in use of this function query; values 
> given to ms should be computer generated in the ISO format.
> 
> edit: added ms().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8903) Move SolrJ DateUtil to Extraction module as ExtractionDateUtil

2016-03-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218626#comment-15218626
 ] 

ASF subversion and git services commented on SOLR-8903:
---

Commit 44e0ac38567e19465ebf74a160064b8a642ec6b6 in lucene-solr's branch 
refs/heads/branch_6x from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=44e0ac3 ]

SOLR-8903: Move SolrJ DateUtil to contrib/extraction as ExtractionDateUtil.
And removed obsolete methods.
(cherry picked from commit 5e5fd66)


> Move SolrJ DateUtil to Extraction module as ExtractionDateUtil
> --
>
> Key: SOLR-8903
> URL: https://issues.apache.org/jira/browse/SOLR-8903
> Project: Solr
>  Issue Type: Task
>  Components: SolrJ
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 6.0
>
> Attachments: SOLR_8903.patch, SOLR_8903_DateUtil_deprecate.patch
>
>
> SolrJ doesn't need a DateUtil class, particularly since we're on Java 8 and 
> can simply use {{new Date(Instant.parse(d).toEpochMilli());}} for parsing and 
> {{DateTimeFormatter.ISO_INSTANT.format(d.toInstant())}} for formatting.  Yes, 
> they are threadsafe.  I propose that we deprecate DateUtil from SolrJ, or 
> perhaps outright remove it from SolrJ for Solr 6.  The only SolrJ calls into 
> this class are to essentially use it to format or parse in the ISO standard 
> format.
> I also think we should move it to the "extraction" (SolrCell) module and name 
> it something like ExtractionDateUtil.  See, this class has a parse method 
> taking a list of formats, and there's a static list of them taken from 
> HttpClient's DateUtil.  DateUtil's original commit was SOLR-284 to be used by 
> SolrCell, and SolrCell wants this feature.  So I think it should move there.
> There are a few other uses:
> * Morphlines uses it, but morphlines depends on the extraction module so it 
> could just as well access it if we move it there.
> * The ValueAugmenterFactory (a doc transformer).  I really doubt whoever 
> added it realized that DateUtil.parseDate would try a bunch of formats 
> instead of only supporting the ISO canonical format.  So I think we should 
> just remove this reference.
> * DateFormatUtil.parseMathLenient falls back on this, and this method is in 
> turn called by just one caller -- DateValueSourceParser, registered as 
> {{ms}}.  I don't think we need leniency in use of this function query; values 
> given to ms should be computer generated in the ISO format.
> 
> edit: added ms().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-2773) DateField parsing/formatting issues of years prior to 0001

2016-03-30 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley closed SOLR-2773.
--
   Resolution: Fixed
Fix Version/s: 6.0

Closing; fixed by SOLR-8904 (using Java 8 time APIs).  There are tests now for 
these dates.

> DateField parsing/formatting issues of years prior to 0001
> --
>
> Key: SOLR-2773
> URL: https://issues.apache.org/jira/browse/SOLR-2773
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Hoss Man
>Assignee: David Smiley
> Fix For: 6.0
>
>
> There are currently issues with parsing/formatting dates prior to "Year 1".  
> These issues also extend to the fact that the XML Schema spec for "canonical" 
> dateTime values seems to actually be contradictory as to how to interpret 
> negative years, and whether there is a "year 0".
> see parent issue (SOLR-1899) for more details



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-1899) dates prior to 1000AD are not formatted properly in responses

2016-03-30 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley closed SOLR-1899.
--
   Resolution: Fixed
Fix Version/s: 6.0

Closing; fixed by SOLR-8904 (using Java 8 time APIs).  There are tests now for 
these dates.

> dates prior to 1000AD are not formatted properly in responses
> -
>
> Key: SOLR-1899
> URL: https://issues.apache.org/jira/browse/SOLR-1899
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 1.1.0, 1.2, 1.3, 1.4
>Reporter: Hoss Man
>Assignee: David Smiley
> Fix For: 6.0
>
> Attachments: SOLR-1899.patch
>
>
> As noted on the mailing list, if a document is added to Solr with a date 
> field such as "0001-01-01T00:00:00Z", then when that document is returned by a 
> search the year will be improperly formatted as "1-01-01T00:00:00Z".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-8904) Switch from SimpleDateFormat to Java 8 DateTimeFormatter.ISO_INSTANT

2016-03-30 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-8904.

Resolution: Fixed

> Switch from SimpleDateFormat to Java 8 DateTimeFormatter.ISO_INSTANT
> 
>
> Key: SOLR-8904
> URL: https://issues.apache.org/jira/browse/SOLR-8904
> Project: Solr
>  Issue Type: Task
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 6.0
>
> Attachments: SOLR_8904.patch, SOLR_8904.patch, 
> SOLR_8904_switch_from_SimpleDateFormat_to_Instant_parse_and_format.patch
>
>
> I'd like to move Solr away from SimpleDateFormat to Java 8's 
> java.time.formatter.DateTimeFormatter API, particularly using simply 
> ISO_INSTANT without any custom rules.  This especially involves our 
> DateFormatUtil class in Solr core, but also involves DateUtil (I filed 
> SOLR-8903 to deal with additional delete/move/deprecations for that one).
> In particular, there's {{new Date(Instant.parse(d).toEpochMilli())}} for 
> parsing and {{DateTimeFormatter.ISO_INSTANT.format(d.toInstant())}} for 
> formatting.  Simple & thread-safe!
> I want to simply cut over completely without having special custom rules.  
> There are differences in how ISO_INSTANT does things:
> * Formatting: Milliseconds are 0 padded to 3 digits if the milliseconds is 
> non-zero.  Thus 30 milliseconds will have ".030" added on.  Our current 
> formatting code emits ".03".
> * Dates with years after '9999' (i.e. 10000 and beyond, >= 5 digit years):  
> ISO_INSTANT strictly demands a leading '\+' -- it is formatted with a "\+" 
> and if such a year is parsed it *must* have a "\+" or there is an exception.  
> SimpleDateFormatter requires the opposite -- no '+', and if you tried to 
> give it one, it would throw an exception.  
> * Currently we don't support negative years (resulting in invisible errors 
> mostly!).  ISO_INSTANT supports this!
> In addition, DateFormatUtil.parseDate currently allows the trailing 'Z' to be 
> optional, but the only caller that could exploit this is the analytics 
> module.  I'd like to remove the optional-ness of 'Z' and inline this method 
> away to {{new Date(Instant.parse(d).toEpochMilli())}}.
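
For reference, the round trip described above boils down to a few lines (a
minimal, self-contained sketch; the example timestamp and class name are
arbitrary):

{code}
import java.time.Instant;
import java.time.format.DateTimeFormatter;
import java.util.Date;

public class IsoInstantDemo {
  public static void main(String[] args) {
    // Parse an ISO-8601 instant into a java.util.Date without SimpleDateFormat.
    Date d = new Date(Instant.parse("2016-03-30T20:55:00.030Z").toEpochMilli());

    // Format it back; non-zero milliseconds are zero-padded to three digits,
    // so 30 ms prints as ".030" rather than the old ".03".
    System.out.println(DateTimeFormatter.ISO_INSTANT.format(d.toInstant()));
    // -> 2016-03-30T20:55:00.030Z
  }
}
{code}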



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8904) Switch from SimpleDateFormat to Java 8 DateTimeFormatter.ISO_INSTANT

2016-03-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218575#comment-15218575
 ] 

ASF subversion and git services commented on SOLR-8904:
---

Commit a47a0baa829ee18b30cc391a229b96d85c865ea0 in lucene-solr's branch 
refs/heads/branch_6_0 from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a47a0ba ]

SOLR-8904: switch from SimpleDateFormat to Instant.parse and format.
[value] and ms() and contrib/analytics now call DateMathParser to parse.  
DateFormatUtil is now removed.
(cherry picked from commit 72f5eac)


> Switch from SimpleDateFormat to Java 8 DateTimeFormatter.ISO_INSTANT
> 
>
> Key: SOLR-8904
> URL: https://issues.apache.org/jira/browse/SOLR-8904
> Project: Solr
>  Issue Type: Task
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 6.0
>
> Attachments: SOLR_8904.patch, SOLR_8904.patch, 
> SOLR_8904_switch_from_SimpleDateFormat_to_Instant_parse_and_format.patch
>
>
> I'd like to move Solr away from SimpleDateFormat to Java 8's 
> java.time.formatter.DateTimeFormatter API, particularly using simply 
> ISO_INSTANT without any custom rules.  This especially involves our 
> DateFormatUtil class in Solr core, but also involves DateUtil (I filed 
> SOLR-8903 to deal with additional delete/move/deprecations for that one).
> In particular, there's {{new Date(Instant.parse(d).toEpochMilli())}} for 
> parsing and {{DateTimeFormatter.ISO_INSTANT.format(d.toInstant())}} for 
> formatting.  Simple & thread-safe!
> I want to simply cut over completely without having special custom rules.  
> There are differences in how ISO_INSTANT does things:
> * Formatting: Milliseconds are 0 padded to 3 digits if the milliseconds is 
> non-zero.  Thus 30 milliseconds will have ".030" added on.  Our current 
> formatting code emits ".03".
> * Dates with years after '9999' (i.e. 10000 and beyond, >= 5 digit years):  
> ISO_INSTANT strictly demands a leading '\+' -- it is formatted with a "\+" 
> and if such a year is parsed it *must* have a "\+" or there is an exception.  
> SimpleDateFormatter requires the opposite -- no '+', and if you tried to 
> give it one, it would throw an exception.  
> * Currently we don't support negative years (resulting in invisible errors 
> mostly!).  ISO_INSTANT supports this!
> In addition, DateFormatUtil.parseDate currently allows the trailing 'Z' to be 
> optional, but the only caller that could exploit this is the analytics 
> module.  I'd like to remove the optional-ness of 'Z' and inline this method 
> away to {{new Date(Instant.parse(d).toEpochMilli())}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8921) Potential NPE in pivot facet

2016-03-30 Thread Steve Molloy (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Molloy updated SOLR-8921:
---
Attachment: SOLR-8921.patch

Simplistic patch handling null queries without trying to guess why they're 
null. Seems to work but there might be better solutions.
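
To illustrate the shape of such a guard (the helper, its parameters and the
fallback value are hypothetical; the attached patch may well do something
different):

{code}
import java.io.IOException;
import org.apache.lucene.search.Query;
import org.apache.solr.search.DocSet;
import org.apache.solr.search.SolrIndexSearcher;

class PivotSubsetGuard {
  // Guard the cache lookup against a null subset query so the
  // ConcurrentHashMap.get(null) inside getPositiveDocSet is never reached.
  static int subsetSize(SolrIndexSearcher searcher, Query subsetQuery, DocSet baseDocs)
      throws IOException {
    if (subsetQuery == null) {
      // What null should mean here (empty subset vs. no extra constraint) is the
      // real question; this sketch just avoids the NPE by treating it as empty.
      return 0;
    }
    return searcher.numDocs(subsetQuery, baseDocs);
  }
}
{code}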

> Potential NPE in pivot facet
> 
>
> Key: SOLR-8921
> URL: https://issues.apache.org/jira/browse/SOLR-8921
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.4.1
>Reporter: Steve Molloy
> Attachments: SOLR-8921.patch
>
>
> For some queries distributed over multiple collections, I've hit an NPE when 
> SolrIndexSearcher tries to fetch results from cache. Basically, the query 
> generated to compute the pivot on a document subset is null, causing the NPE 
> on lookup.
> 2016-03-28 11:34:58.361 ERROR (qtp268141378-751) [c:otif_fr s:shard1 
> r:core_node1 x:otif_fr_shard1_replica1] o.a.s.h.RequestHandlerBase 
> java.lang.NullPointerException
>   at 
> java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:936)
>   at 
> org.apache.solr.util.ConcurrentLFUCache.get(ConcurrentLFUCache.java:92)
>   at org.apache.solr.search.LFUCache.get(LFUCache.java:153)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getPositiveDocSet(SolrIndexSearcher.java:940)
>   at 
> org.apache.solr.search.SolrIndexSearcher.numDocs(SolrIndexSearcher.java:2098)
>   at 
> org.apache.solr.handler.component.PivotFacetProcessor.getSubsetSize(PivotFacetProcessor.java:356)
>   at 
> org.apache.solr.handler.component.PivotFacetProcessor.processSingle(PivotFacetProcessor.java:219)
>   at 
> org.apache.solr.handler.component.PivotFacetProcessor.process(PivotFacetProcessor.java:167)
>   at 
> org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:263)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:273)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2073)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:658)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:457)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:223)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at org.eclipse.jetty.server.Server.handle(Server.java:499)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8921) Potential NPE in pivot facet

2016-03-30 Thread Steve Molloy (JIRA)
Steve Molloy created SOLR-8921:
--

 Summary: Potential NPE in pivot facet
 Key: SOLR-8921
 URL: https://issues.apache.org/jira/browse/SOLR-8921
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.4.1
Reporter: Steve Molloy


For some queries distributed over multiple collections, I've hit an NPE when 
SolrIndexSearcher tries to fetch results from cache. Basically, the query 
generated to compute the pivot on a document subset is null, causing the NPE on 
lookup.

2016-03-28 11:34:58.361 ERROR (qtp268141378-751) [c:otif_fr s:shard1 
r:core_node1 x:otif_fr_shard1_replica1] o.a.s.h.RequestHandlerBase 
java.lang.NullPointerException
at 
java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:936)
at 
org.apache.solr.util.ConcurrentLFUCache.get(ConcurrentLFUCache.java:92)
at org.apache.solr.search.LFUCache.get(LFUCache.java:153)
at 
org.apache.solr.search.SolrIndexSearcher.getPositiveDocSet(SolrIndexSearcher.java:940)
at 
org.apache.solr.search.SolrIndexSearcher.numDocs(SolrIndexSearcher.java:2098)
at 
org.apache.solr.handler.component.PivotFacetProcessor.getSubsetSize(PivotFacetProcessor.java:356)
at 
org.apache.solr.handler.component.PivotFacetProcessor.processSingle(PivotFacetProcessor.java:219)
at 
org.apache.solr.handler.component.PivotFacetProcessor.process(PivotFacetProcessor.java:167)
at 
org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:263)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:273)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2073)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:658)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:457)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:223)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.eclipse.jetty.server.Server.handle(Server.java:499)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
at 
org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:745)




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8904) Switch from SimpleDateFormat to Java 8 DateTimeFormatter.ISO_INSTANT

2016-03-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218565#comment-15218565
 ] 

ASF subversion and git services commented on SOLR-8904:
---

Commit 72f5eac2c5e7fb743f166fb3c1b25e73078ebdbe in lucene-solr's branch 
refs/heads/branch_6x from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=72f5eac ]

SOLR-8904: switch from SimpleDateFormat to Instant.parse and format.
[value] and ms() and contrib/analytics now call DateMathParser to parse.  
DateFormatUtil is now removed.
(cherry picked from commit 94c0423) (cherry picked from commit 39932f5)


> Switch from SimpleDateFormat to Java 8 DateTimeFormatter.ISO_INSTANT
> 
>
> Key: SOLR-8904
> URL: https://issues.apache.org/jira/browse/SOLR-8904
> Project: Solr
>  Issue Type: Task
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 6.0
>
> Attachments: SOLR_8904.patch, SOLR_8904.patch, 
> SOLR_8904_switch_from_SimpleDateFormat_to_Instant_parse_and_format.patch
>
>
> I'd like to move Solr away from SimpleDateFormat to Java 8's 
> java.time.formatter.DateTimeFormatter API, particularly using simply 
> ISO_INSTANT without any custom rules.  This especially involves our 
> DateFormatUtil class in Solr core, but also involves DateUtil (I filed 
> SOLR-8903 to deal with additional delete/move/deprecations for that one).
> In particular, there's {{new Date(Instant.parse(d).toEpochMilli())}} for 
> parsing and {{DateTimeFormatter.ISO_INSTANT.format(d.toInstant())}} for 
> formatting.  Simple & thread-safe!
> I want to simply cut over completely without having special custom rules.  
> There are differences in how ISO_INSTANT does things:
> * Formatting: Milliseconds are 0 padded to 3 digits if the milliseconds is 
> non-zero.  Thus 30 milliseconds will have ".030" added on.  Our current 
> formatting code emits ".03".
> * Dates with years after '9999' (i.e. 10000 and beyond, >= 5 digit years):  
> ISO_INSTANT strictly demands a leading '\+' -- it is formatted with a "\+" 
> and if such a year is parsed it *must* have a "\+" or there is an exception.  
> SimpleDateFormatter requires the opposite -- no '+', and if you tried to 
> give it one, it would throw an exception.  
> * Currently we don't support negative years (resulting in invisible errors 
> mostly!).  ISO_INSTANT supports this!
> In addition, DateFormatUtil.parseDate currently allows the trailing 'Z' to be 
> optional, but the only caller that could exploit this is the analytics 
> module.  I'd like to remove the optional-ness of 'Z' and inline this method 
> away to {{new Date(Instant.parse(d).toEpochMilli())}}.
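
For readers who want to see the cutover in miniature, here is a small self-contained sketch of the java.time calls described above (plain JDK usage, not Solr code; the class name is just for the example):

{code:java}
import java.time.Instant;
import java.time.format.DateTimeFormatter;
import java.util.Date;

public class IsoInstantDemo {
  public static void main(String[] args) {
    // Parsing: Instant.parse uses ISO_INSTANT and requires the trailing 'Z'.
    Date parsed = new Date(Instant.parse("2016-03-30T18:21:00.030Z").toEpochMilli());

    // Formatting: non-zero milliseconds are zero-padded to 3 digits,
    // so 30ms comes out as ".030" rather than the old ".03".
    String formatted = DateTimeFormatter.ISO_INSTANT.format(parsed.toInstant());
    System.out.println(formatted);  // prints 2016-03-30T18:21:00.030Z
  }
}
{code}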



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5944) Support updates of numeric DocValues

2016-03-30 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-5944:
-
Attachment: DUP.patch

> Support updates of numeric DocValues
> 
>
> Key: SOLR-5944
> URL: https://issues.apache.org/jira/browse/SOLR-5944
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Shalin Shekhar Mangar
> Attachments: DUP.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch
>
>
> LUCENE-5189 introduced support for updates to numeric docvalues. It would be 
> really nice to have Solr support this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5944) Support updates of numeric DocValues

2016-03-30 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-5944:
-
Attachment: (was: DUP.patch)

> Support updates of numeric DocValues
> 
>
> Key: SOLR-5944
> URL: https://issues.apache.org/jira/browse/SOLR-5944
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch
>
>
> LUCENE-5189 introduced support for updates to numeric docvalues. It would be 
> really nice to have Solr support this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8914) ZkStateReader's refreshLiveNodes(Watcher) is not thread safe

2016-03-30 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218529#comment-15218529
 ] 

Scott Blum commented on SOLR-8914:
--

Correct, the refreshCollectionListLock solves the same class of bug for the 
collections list as we needed to solve for live_nodes.  However, the 
implementation is simpler since we don't need to hold getUpdateLock() and 
therefore there's no deadlock potential.

The code to check this.legacyClusterStateVersion >= stat.getVersion() solves 
that class of bug for clusterstate.json; we already have version guarding code 
for stateformat 2 collections, and it was an oversight not to have it here as well.

Net-net: my intent (hope?) is that this patch should fix this category of race 
condition for all the major pieces of ZkStateReader.
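
For anyone skimming the patch discussion, the version-guard pattern being described looks roughly like the sketch below. This is illustrative only, not the actual ZkStateReader code; the field and method names are made up for the example:

{code:java}
/** Illustrative sketch of the version-guard pattern; not the real ZkStateReader. */
public class VersionGuardedState {
  private final Object refreshLock = new Object(); // plays the role of refreshCollectionListLock
  private int cachedZnodeVersion = -1;             // plays the role of legacyClusterStateVersion
  private volatile String cachedData;

  /** Apply a fetched znode payload only if it is newer than what we already hold. */
  public void applyFetched(String data, int statVersion) {
    synchronized (refreshLock) {
      if (cachedZnodeVersion >= statVersion) {
        return; // a newer (or equal) refresh already won the race; drop the stale read
      }
      cachedZnodeVersion = statVersion;
      cachedData = data;
    }
  }
}
{code}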

> ZkStateReader's refreshLiveNodes(Watcher) is not thread safe
> 
>
> Key: SOLR-8914
> URL: https://issues.apache.org/jira/browse/SOLR-8914
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
> Attachments: SOLR-8914.patch, SOLR-8914.patch, SOLR-8914.patch, 
> SOLR-8914.patch, jenkins.thetaphi.de_Lucene-Solr-6.x-Solaris_32.log.txt, 
> live_node_mentions_port56361_with_threadIds.log.txt, 
> live_nodes_mentions.log.txt
>
>
> Jenkin's encountered a failure in TestTolerantUpdateProcessorCloud over the 
> weekend
> {noformat}
> http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/32/consoleText
> Checking out Revision c46d7686643e7503304cb35dfe546bce9c6684e7 
> (refs/remotes/origin/branch_6x)
> Using Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC
> {noformat}
> The failure happened during the static setup of the test, when a 
> MiniSolrCloudCluster & several clients are initialized -- before any code 
> related to TolerantUpdateProcessor is ever used.
> I can't reproduce this, or really make sense of what i'm (not) seeing here in 
> the logs, so i'm filing this jira with my analysis in the hopes that someone 
> else can help make sense of it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8914) ZkStateReader's refreshLiveNodes(Watcher) is not thread safe

2016-03-30 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218518#comment-15218518
 ] 

Scott Blum commented on SOLR-8914:
--

That was actually my first formulation, but that causes deadlock, see previous 
comments.

> ZkStateReader's refreshLiveNodes(Watcher) is not thread safe
> 
>
> Key: SOLR-8914
> URL: https://issues.apache.org/jira/browse/SOLR-8914
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
> Attachments: SOLR-8914.patch, SOLR-8914.patch, SOLR-8914.patch, 
> SOLR-8914.patch, jenkins.thetaphi.de_Lucene-Solr-6.x-Solaris_32.log.txt, 
> live_node_mentions_port56361_with_threadIds.log.txt, 
> live_nodes_mentions.log.txt
>
>
> Jenkin's encountered a failure in TestTolerantUpdateProcessorCloud over the 
> weekend
> {noformat}
> http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/32/consoleText
> Checking out Revision c46d7686643e7503304cb35dfe546bce9c6684e7 
> (refs/remotes/origin/branch_6x)
> Using Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC
> {noformat}
> The failure happened during the static setup of the test, when a 
> MiniSolrCloudCluster & several clients are initialized -- before any code 
> related to TolerantUpdateProcessor is ever used.
> I can't reproduce this, or really make sense of what i'm (not) seeing here in 
> the logs, so i'm filing this jira with my analysis in the hopes that someone 
> else can help make sense of it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7362) TestReqParamsAPI failing in jenkins

2016-03-30 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218512#comment-15218512
 ] 

David Smiley commented on SOLR-7362:


This fails for me reproducibly, on the 6x branch (if that matters?):  
{{-Dtests.seed=D204858DC237526B}}

> TestReqParamsAPI failing in jenkins
> ---
>
> Key: SOLR-7362
> URL: https://issues.apache.org/jira/browse/SOLR-7362
> Project: Solr
>  Issue Type: Bug
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> {noformat}
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4645/
> Java: 32bit/jdk1.8.0_40 -server -XX:+UseSerialGC
> 1 tests failed.
> FAILED:  org.apache.solr.handler.TestReqParamsAPI.test
> Error Message:
> Could not get expected value  'null' for path 'response/params/y/p' full 
> output: {   "responseHeader":{ "status":0, "QTime":1},   "response":{ 
> "znodeVersion":3, "params":{   "x":{ "a":"A val", 
> "b":"B val", "":{"v":0}},   "y":{ "p":"P val", 
> "q":"Q val", "":{"v":0}
> Stack Trace:
> java.lang.AssertionError: Could not get expected value  'null' for path 
> 'response/params/y/p' full output: {
>   "responseHeader":{
> "status":0,
> "QTime":1},
>   "response":{
> "znodeVersion":3,
> "params":{
>   "x":{
> "a":"A val",
> "b":"B val",
> "":{"v":0}},
>   "y":{
> "p":"P val",
> "q":"Q val",
> "":{"v":0}
> at 
> __randomizedtesting.SeedInfo.seed([D0DB18ECE165C505:588F27364F99A8FD]:0)
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at 
> org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:405)
> at 
> org.apache.solr.handler.TestReqParamsAPI.testReqParams(TestReqParamsAPI.java:236)
> at 
> org.apache.solr.handler.TestReqParamsAPI.test(TestReqParamsAPI.java:71)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-7150) geo3d public APIs should match the 2D apis?

2016-03-30 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218503#comment-15218503
 ] 

Karl Wright edited comment on LUCENE-7150 at 3/30/16 6:21 PM:
--

[~mikemccand] Thanks!

FWIW, polygons do not have the same issues with approximation in geo3d as 
circles.  They are in fact exact.  Are the 2D implementations doing real 
great-circle polygons, or a 2D linear interpolation?



was (Author: kwri...@metacarta.com):
[~mikemccand] Thanks!

FWIW, polygons do not have the same issues with approximation in geo3d as 
circles.  They are in fact exact.  Are the 2D implementations doing real 
polygons, or a 2D linear interpolation?


> geo3d public APIs should match the 2D apis?
> ---
>
> Key: LUCENE-7150
> URL: https://issues.apache.org/jira/browse/LUCENE-7150
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Attachments: LUCENE-7150-sphere.patch, LUCENE-7150.patch, 
> LUCENE-7150.patch, LUCENE-7150.patch, LUCENE-7150.patch, LUCENE-7150.patch
>
>
> I'm struggling to benchmark the equivalent to 
> {{LatLonPoint.newDistanceQuery}} in the geo3d world.
> Ideally, I think we'd have a {{Geo3DPoint.newDistanceQuery}}?  And it would 
> take degrees, not radians, and radiusMeters, not an angle?
> And if I index and search using {{PlanetModel.SPHERE}} I think it should 
> ideally give the same results as {{LatLonPoint.newDistanceQuery}}, which uses 
> haversin.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8725) Cores, collections, and shards should accept hyphens in identifier name

2016-03-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-8725:
-
Priority: Major  (was: Blocker)

OK, Anshum is correct: this fix is in 6.0, so this shouldn't be a blocker; we just 
have to decide what to do about 5.5.1 etc.

And my comment about re-wording things was on the theory that the patch hadn't 
been committed yet, so feel free to ignore it.

> Cores, collections, and shards should accept hyphens in identifier name
> ---
>
> Key: SOLR-8725
> URL: https://issues.apache.org/jira/browse/SOLR-8725
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.5
>Reporter: Chris Beer
>Assignee: Anshum Gupta
> Fix For: master, 6.0, 5.5.1
>
> Attachments: SOLR-8725-5_5.patch, SOLR-8725-fix-regex.patch, 
> SOLR-8725.patch, SOLR-8725.patch
>
>
> In SOLR-8642, hyphens are no longer considered valid identifiers for cores 
> (and collections?). Our solr instance was successfully using hyphens in our 
> core names, and our affected cores now error with:
> marc-profiler_shard1_replica1: 
> org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: 
> Invalid name: 'marc-profiler_shard1_replica1' Identifiers must consist 
> entirely of periods, underscores and alphanumerics
> Before starting to rename all of our collections, I wonder if this decision 
> could be revisited to be backwards compatible with previously created 
> collections.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7150) geo3d public APIs should match the 2D apis?

2016-03-30 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218503#comment-15218503
 ] 

Karl Wright commented on LUCENE-7150:
-

[~mikemccand] Thanks!

FWIW, polygons do not have the same issues with approximation in geo3d as 
circles.  They are in fact exact.  Are the 2D implementations doing real 
polygons, or a 2D linear interpolation?


> geo3d public APIs should match the 2D apis?
> ---
>
> Key: LUCENE-7150
> URL: https://issues.apache.org/jira/browse/LUCENE-7150
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Attachments: LUCENE-7150-sphere.patch, LUCENE-7150.patch, 
> LUCENE-7150.patch, LUCENE-7150.patch, LUCENE-7150.patch, LUCENE-7150.patch
>
>
> I'm struggling to benchmark the equivalent to 
> {{LatLonPoint.newDistanceQuery}} in the geo3d world.
> Ideally, I think we'd have a {{Geo3DPoint.newDistanceQuery}}?  And it would 
> take degrees, not radians, and radiusMeters, not an angle?
> And if I index and search using {{PlanetModel.SPHERE}} I think it should 
> ideally give the same results as {{LatLonPoint.newDistanceQuery}}, which uses 
> haversin.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8914) ZkStateReader's refreshLiveNodes(Watcher) is not thread safe

2016-03-30 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-8914:
---
Attachment: SOLR-8914.patch

i've updated the patch to clean up the test a bit -- besides some cosmetic stuff 
it now does more iterations of smaller "bursts" with more variability in the 
number of threads used in each burst (which should increase the odds of it 
failing, eventually, on diff machines regardless of CPU count).

bq. I'm beasting your latest patch too, I'll report anything that comes up. 
Just to make sure, I should be beasting StressTestLiveNodes, right?

TestStressLiveNodes, but otherwise yes.

It would also be helpful to know if (and how quickly) you can get 
TestStressLiveNodes to fail on your machine when beasting w/o the rest of the 
patch (so far i'm the only one that's been able to confirm the bug in practice 
w/o Scott's patch - hopefully these changes increase those odds)

> ZkStateReader's refreshLiveNodes(Watcher) is not thread safe
> 
>
> Key: SOLR-8914
> URL: https://issues.apache.org/jira/browse/SOLR-8914
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
> Attachments: SOLR-8914.patch, SOLR-8914.patch, SOLR-8914.patch, 
> SOLR-8914.patch, jenkins.thetaphi.de_Lucene-Solr-6.x-Solaris_32.log.txt, 
> live_node_mentions_port56361_with_threadIds.log.txt, 
> live_nodes_mentions.log.txt
>
>
> Jenkin's encountered a failure in TestTolerantUpdateProcessorCloud over the 
> weekend
> {noformat}
> http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/32/consoleText
> Checking out Revision c46d7686643e7503304cb35dfe546bce9c6684e7 
> (refs/remotes/origin/branch_6x)
> Using Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC
> {noformat}
> The failure happened during the static setup of the test, when a 
> MiniSolrCloudCluster & several clients are initialized -- before any code 
> related to TolerantUpdateProcessor is ever used.
> I can't reproduce this, or really make sense of what i'm (not) seeing here in 
> the logs, so i'm filing this jira with my analysis in the hopes that someone 
> else can help make sense of it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7150) geo3d public APIs should match the 2D apis?

2016-03-30 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218469#comment-15218469
 ] 

Michael McCandless commented on LUCENE-7150:


Here're the results for querying 5-gons:

  - geo3d (WGS84): 30.3 QPS, 282,041,551 hits
  - points: 33.5 QPS, 281,983,277 hits
  - geopoint: 21.0 QPS, 281,983,217 hits

> geo3d public APIs should match the 2D apis?
> ---
>
> Key: LUCENE-7150
> URL: https://issues.apache.org/jira/browse/LUCENE-7150
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Attachments: LUCENE-7150-sphere.patch, LUCENE-7150.patch, 
> LUCENE-7150.patch, LUCENE-7150.patch, LUCENE-7150.patch, LUCENE-7150.patch
>
>
> I'm struggling to benchmark the equivalent to 
> {{LatLonPoint.newDistanceQuery}} in the geo3d world.
> Ideally, I think we'd have a {{Geo3DPoint.newDistanceQuery}}?  And it would 
> take degrees, not radians, and radiusMeters, not an angle?
> And if I index and search using {{PlanetModel.SPHERE}} I think it should 
> ideally give the same results as {{LatLonPoint.newDistanceQuery}}, which uses 
> haversin.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Access Request for Lucene and Solr Wiki

2016-03-30 Thread Nicholas Knize
Thanks Shawn!

On Wed, Mar 30, 2016 at 10:02 AM, Shawn Heisey  wrote:

> On 3/30/2016 8:46 AM, Nicholas Knize wrote:
> > In preparation for the 6.0 release I started (attempted) to create the
> > WIP release notes. Looks like those powers have not yet been granted.
> >
> > Lucene Wiki (https://wiki.apache.org/lucene-java)
> > Solr Wiki (https://wiki.apache.org/solr)
> > * add to ContributorsGroup
> > * Username: NickKnize
>
> I've actually added you here:
>
> https://wiki.apache.org/solr/AdminGroup
> https://wiki.apache.org/lucene-java/AdminGroup
>
> Thanks,
> Shawn
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Commented] (SOLR-8725) Cores, collections, and shards should accept hyphens in identifier name

2016-03-30 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218457#comment-15218457
 ] 

Anshum Gupta commented on SOLR-8725:


but a working version of this is in 6.0, so I don't think we need this to be a 
blocker. I made this a blocker thinking on similar lines but then decided 
against it.

It would be nice to add that information about hyphens not being allowed as the 
first character to the getIdentifiesMessage method, though.

> Cores, collections, and shards should accept hyphens in identifier name
> ---
>
> Key: SOLR-8725
> URL: https://issues.apache.org/jira/browse/SOLR-8725
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.5
>Reporter: Chris Beer
>Assignee: Anshum Gupta
>Priority: Blocker
> Fix For: master, 6.0, 5.5.1
>
> Attachments: SOLR-8725-5_5.patch, SOLR-8725-fix-regex.patch, 
> SOLR-8725.patch, SOLR-8725.patch
>
>
> In SOLR-8642, hyphens are no longer considered valid identifiers for cores 
> (and collections?). Our solr instance was successfully using hyphens in our 
> core names, and our affected cores now error with:
> marc-profiler_shard1_replica1: 
> org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: 
> Invalid name: 'marc-profiler_shard1_replica1' Identifiers must consist 
> entirely of periods, underscores and alphanumerics
> Before starting to rename all of our collections, I wonder if this decision 
> could be revisited to be backwards compatible with previously created 
> collections.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7150) geo3d public APIs should match the 2D apis?

2016-03-30 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218450#comment-15218450
 ] 

Michael McCandless commented on LUCENE-7150:


OK {{-poly 5 -geo3d}} now works!  Thanks.

> geo3d public APIs should match the 2D apis?
> ---
>
> Key: LUCENE-7150
> URL: https://issues.apache.org/jira/browse/LUCENE-7150
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Attachments: LUCENE-7150-sphere.patch, LUCENE-7150.patch, 
> LUCENE-7150.patch, LUCENE-7150.patch, LUCENE-7150.patch, LUCENE-7150.patch
>
>
> I'm struggling to benchmark the equivalent to 
> {{LatLonPoint.newDistanceQuery}} in the geo3d world.
> Ideally, I think we'd have a {{Geo3DPoint.newDistanceQuery}}?  And it would 
> take degrees, not radians, and radiusMeters, not an angle?
> And if I index and search using {{PlanetModel.SPHERE}} I think it should 
> ideally give the same results as {{LatLonPoint.newDistanceQuery}}, which uses 
> haversin.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7094) spatial-extras BBoxStrategy and (confusingly!) PointVectorStrategy use legacy numeric encoding

2016-03-30 Thread Nicholas Knize (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218449#comment-15218449
 ] 

Nicholas Knize commented on LUCENE-7094:


+1

> spatial-extras BBoxStrategy and (confusingly!) PointVectorStrategy use legacy 
> numeric encoding
> --
>
> Key: LUCENE-7094
> URL: https://issues.apache.org/jira/browse/LUCENE-7094
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>Assignee: Nicholas Knize
>Priority: Blocker
> Fix For: master, 6.0
>
> Attachments: LUCENE-7094.patch, LUCENE-7094.patch, LUCENE_7094.patch, 
> LUCENE_7094.patch, LUCENE_7094.patch
>
>
> We need to deprecate these since they work on the old encoding and provide 
> points based alternatives.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7152) Refactor lucene-spatial GeoUtils to core

2016-03-30 Thread Nicholas Knize (JIRA)
Nicholas Knize created LUCENE-7152:
--

 Summary: Refactor lucene-spatial GeoUtils to core
 Key: LUCENE-7152
 URL: https://issues.apache.org/jira/browse/LUCENE-7152
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Nicholas Knize


{{GeoUtils}} contains a lot of common spatial mathematics that can be reused 
across multiple packages. As discussed in LUCENE-7150 this issue will refactor 
GeoUtils to a new {{o.a.l.util.geo}} package in core that can be the home for 
other reusable spatial utility classes required by field and query 
implementations. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8914) ZkStateReader's refreshLiveNodes(Watcher) is not thread safe

2016-03-30 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218423#comment-15218423
 ] 

Erick Erickson commented on SOLR-8914:
--

I'm beasting your latest patch too, I'll report anything that comes up. Just to 
make sure, I should be beasting StressTestLiveNodes, right?

> ZkStateReader's refreshLiveNodes(Watcher) is not thread safe
> 
>
> Key: SOLR-8914
> URL: https://issues.apache.org/jira/browse/SOLR-8914
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
> Attachments: SOLR-8914.patch, SOLR-8914.patch, SOLR-8914.patch, 
> jenkins.thetaphi.de_Lucene-Solr-6.x-Solaris_32.log.txt, 
> live_node_mentions_port56361_with_threadIds.log.txt, 
> live_nodes_mentions.log.txt
>
>
> Jenkin's encountered a failure in TestTolerantUpdateProcessorCloud over the 
> weekend
> {noformat}
> http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/32/consoleText
> Checking out Revision c46d7686643e7503304cb35dfe546bce9c6684e7 
> (refs/remotes/origin/branch_6x)
> Using Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC
> {noformat}
> The failure happened during the static setup of the test, when a 
> MiniSolrCloudCluster & several clients are initialized -- before any code 
> related to TolerantUpdateProcessor is ever used.
> I can't reproduce this, or really make sense of what i'm (not) seeing here in 
> the logs, so i'm filing this jira with my analysis in the hopes that someone 
> else can help make sense of it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6938) Convert build to work with Git rather than SVN.

2016-03-30 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218422#comment-15218422
 ] 

Hoss Man commented on LUCENE-6938:
--

that sounds correct ... on the 5.5 branch LUCENE_5_5_1 should _not_ be 
deprecated ... but on all subsequent branches (6x, 6_1, master) it should be.

> Convert build to work with Git rather than SVN.
> ---
>
> Key: LUCENE-6938
> URL: https://issues.apache.org/jira/browse/LUCENE-6938
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: master, 5.x
>
> Attachments: LUCENE-6938-1.patch, LUCENE-6938-wc-checker.patch, 
> LUCENE-6938-wc-checker.patch, LUCENE-6938.patch, LUCENE-6938.patch, 
> LUCENE-6938.patch, LUCENE-6938.patch
>
>
> We assume an SVN checkout in parts of our build and will need to move to 
> assuming a Git checkout.
> Patches against https://github.com/dweiss/lucene-solr-svn2git from 
> LUCENE-6933.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8904) Switch from SimpleDateFormat to Java 8 DateTimeFormatter.ISO_INSTANT

2016-03-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218424#comment-15218424
 ] 

ASF subversion and git services commented on SOLR-8904:
---

Commit 94c04237cce44cac1e40e1b8b6ee6a6addc001a5 in lucene-solr's branch 
refs/heads/master from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=94c0423 ]

SOLR-8904: switch from SimpleDateFormat to Instant.parse and format.
[value] and ms() and contrib/analytics now call DateMathParser to parse.  
DateFormatUtil is now removed.


> Switch from SimpleDateFormat to Java 8 DateTimeFormatter.ISO_INSTANT
> 
>
> Key: SOLR-8904
> URL: https://issues.apache.org/jira/browse/SOLR-8904
> Project: Solr
>  Issue Type: Task
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 6.0
>
> Attachments: SOLR_8904.patch, SOLR_8904.patch, 
> SOLR_8904_switch_from_SimpleDateFormat_to_Instant_parse_and_format.patch
>
>
> I'd like to move Solr away from SimpleDateFormat to Java 8's 
> java.time.formatter.DateTimeFormatter API, particularly using simply 
> ISO_INSTANT without any custom rules.  This especially involves our 
> DateFormatUtil class in Solr core, but also involves DateUtil (I filed 
> SOLR-8903 to deal with additional delete/move/deprecations for that one).
> In particular, there's {{new Date(Instant.parse(d).toEpochMilli())}} for 
> parsing and {{DateTimeFormatter.ISO_INSTANT.format(d.toInstant())}} for 
> formatting.  Simple & thread-safe!
> I want to simply cut over completely without having special custom rules.  
> There are differences in how ISO_INSTANT does things:
> * Formatting: Milliseconds are 0 padded to 3 digits if the milliseconds is 
> non-zero.  Thus 30 milliseconds will have ".030" added on.  Our current 
> formatting code emits ".03".
> * Dates with years after '9999' (i.e. 10000 and beyond, >= 5 digit years):  
> ISO_INSTANT strictly demands a leading '\+' -- it is formatted with a "\+" 
> and if such a year is parsed it *must* have a "\+" or there is an exception.  
> SimpleDateFormat requires the opposite -- no '+', and if you tried to 
> give it one, it would throw an exception.  
> * Currently we don't support negative years (resulting in invisible errors 
> mostly!).  ISO_INSTANT supports this!
> In addition, DateFormatUtil.parseDate currently allows the trailing 'Z' to be 
> optional, but the only caller that could exploit this is the analytics 
> module.  I'd like to remove the optional-ness of 'Z' and inline this method 
> away to {{new Date(Instant.parse(d).toEpochMilli())}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8888) Add shortestPath Streaming Expression

2016-03-30 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-:
-
Attachment: SOLR-.patch

New patch which returns all of the shortest paths. Manual tests look good but 
more unit tests are needed with this algorithm.

> Add shortestPath Streaming Expression
> -
>
> Key: SOLR-
> URL: https://issues.apache.org/jira/browse/SOLR-
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
> Attachments: SOLR-.patch, SOLR-.patch, SOLR-.patch, 
> SOLR-.patch, SOLR-.patch, SOLR-.patch, SOLR-.patch
>
>
> This ticket is to implement a distributed shortest path graph traversal as a 
> Streaming Expression.
> Expression syntax:
> {code}
> shortestPath(collection, 
>  from="j...@company.com", 
>  to="j...@company.com",
>  edge="from=to",
>  threads="6",
>  partitionSize="300", 
>  fq="limiting query", 
>  maxDepth="4")
> {code}
> The expression above performs a *breadth first search* to find the shortest 
> path in an unweighted, directed graph. The search starts from the node 
> j...@company.com  and searches for the node j...@company.com, traversing the 
> *edges* by iteratively joining the *from* and *to* columns. Each level in the 
> traversal is implemented as a *parallel partitioned* nested loop join across 
> the entire *collection*. The *threads* parameter controls the number of 
> threads performing the join at each level. The *partitionSize* controls the 
> of number of nodes in each join partition. *maxDepth* controls the number of 
> levels to traverse. *fq* is a limiting query applied to each level in the 
> traversal.
> Future implementations can add more capabilities such as weighted traversals.
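
For context on the traversal described above, a single-path, in-memory breadth first search over a from->to adjacency map looks like the sketch below. It is illustrative only: the patch runs each level as partitioned joins across the collection, and (per this latest patch) it collects all of the shortest paths, not just one.

{code:java}
import java.util.*;

public class BfsShortestPathSketch {
  /** Returns one shortest path from 'from' to 'to' in an unweighted directed graph, or null. */
  static List<String> shortestPath(Map<String, List<String>> edges, String from, String to) {
    Map<String, String> parent = new HashMap<>(); // node -> node it was discovered from
    Deque<String> queue = new ArrayDeque<>();
    parent.put(from, null);
    queue.add(from);
    while (!queue.isEmpty()) {
      String node = queue.poll();
      if (node.equals(to)) {
        List<String> path = new LinkedList<>();
        for (String n = to; n != null; n = parent.get(n)) {
          path.add(0, n); // walk the parent links back to 'from'
        }
        return path;
      }
      for (String next : edges.getOrDefault(node, Collections.emptyList())) {
        if (!parent.containsKey(next)) { // expand each node only once
          parent.put(next, node);
          queue.add(next);
        }
      }
    }
    return null; // 'to' is not reachable from 'from'
  }
}
{code}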



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7150) geo3d public APIs should match the 2D apis?

2016-03-30 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218388#comment-15218388
 ] 

Michael McCandless commented on LUCENE-7150:


Thanks [~daddywri], I'll test!

> geo3d public APIs should match the 2D apis?
> ---
>
> Key: LUCENE-7150
> URL: https://issues.apache.org/jira/browse/LUCENE-7150
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Attachments: LUCENE-7150-sphere.patch, LUCENE-7150.patch, 
> LUCENE-7150.patch, LUCENE-7150.patch, LUCENE-7150.patch, LUCENE-7150.patch
>
>
> I'm struggling to benchmark the equivalent to 
> {{LatLonPoint.newDistanceQuery}} in the geo3d world.
> Ideally, I think we'd have a {{Geo3DPoint.newDistanceQuery}}?  And it would 
> take degrees, not radians, and radiusMeters, not an angle?
> And if I index and search using {{PlanetModel.SPHERE}} I think it should 
> ideally give the same results as {{LatLonPoint.newDistanceQuery}}, which uses 
> haversin.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-7150) geo3d public APIs should match the 2D apis?

2016-03-30 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218360#comment-15218360
 ] 

Karl Wright edited comment on LUCENE-7150 at 3/30/16 5:23 PM:
--

Use the same convention as 2D for polygons of repeating one point.  
[~mikemccand], this should get you unstuck.


was (Author: kwri...@metacarta.com):
Use the same convention as 2D for polygons of repeating one point.

> geo3d public APIs should match the 2D apis?
> ---
>
> Key: LUCENE-7150
> URL: https://issues.apache.org/jira/browse/LUCENE-7150
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Attachments: LUCENE-7150-sphere.patch, LUCENE-7150.patch, 
> LUCENE-7150.patch, LUCENE-7150.patch, LUCENE-7150.patch, LUCENE-7150.patch
>
>
> I'm struggling to benchmark the equivalent to 
> {{LatLonPoint.newDistanceQuery}} in the geo3d world.
> Ideally, I think we'd have a {{Geo3DPoint.newDistanceQuery}}?  And it would 
> take degrees, not radians, and radiusMeters, not an angle?
> And if I index and search using {{PlanetModel.SPHERE}} I think it should 
> ideally give the same results as {{LatLonPoint.newDistanceQuery}}, which uses 
> haversin.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7150) geo3d public APIs should match the 2D apis?

2016-03-30 Thread Karl Wright (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Wright updated LUCENE-7150:

Attachment: LUCENE-7150.patch

Use the same convention as 2D for polygons of repeating one point.

> geo3d public APIs should match the 2D apis?
> ---
>
> Key: LUCENE-7150
> URL: https://issues.apache.org/jira/browse/LUCENE-7150
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Attachments: LUCENE-7150-sphere.patch, LUCENE-7150.patch, 
> LUCENE-7150.patch, LUCENE-7150.patch, LUCENE-7150.patch, LUCENE-7150.patch
>
>
> I'm struggling to benchmark the equivalent to 
> {{LatLonPoint.newDistanceQuery}} in the geo3d world.
> Ideally, I think we'd have a {{Geo3DPoint.newDistanceQuery}}?  And it would 
> take degrees, not radians, and radiusMeters, not an angle?
> And if I index and search using {{PlanetModel.SPHERE}} I think it should 
> ideally give the same results as {{LatLonPoint.newDistanceQuery}}, which uses 
> haversin.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_72) - Build # 5746 - Still Failing!

2016-03-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/5746/
Java: 64bit/jdk1.8.0_72 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.analysis.pt.TestPortugueseAnalyzer

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\analysis\common\test\J0\temp\lucene.analysis.pt.TestPortugueseAnalyzer_F1867C3ABAF2F8B5-001\bttc-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\analysis\common\test\J0\temp\lucene.analysis.pt.TestPortugueseAnalyzer_F1867C3ABAF2F8B5-001\bttc-001

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\analysis\common\test\J0\temp\lucene.analysis.pt.TestPortugueseAnalyzer_F1867C3ABAF2F8B5-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\analysis\common\test\J0\temp\lucene.analysis.pt.TestPortugueseAnalyzer_F1867C3ABAF2F8B5-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\analysis\common\test\J0\temp\lucene.analysis.pt.TestPortugueseAnalyzer_F1867C3ABAF2F8B5-001\bttc-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\analysis\common\test\J0\temp\lucene.analysis.pt.TestPortugueseAnalyzer_F1867C3ABAF2F8B5-001\bttc-001
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\analysis\common\test\J0\temp\lucene.analysis.pt.TestPortugueseAnalyzer_F1867C3ABAF2F8B5-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\analysis\common\test\J0\temp\lucene.analysis.pt.TestPortugueseAnalyzer_F1867C3ABAF2F8B5-001

at __randomizedtesting.SeedInfo.seed([F1867C3ABAF2F8B5]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:323)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 2321 lines...]
   [junit4] Suite: org.apache.lucene.analysis.pt.TestPortugueseAnalyzer
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene60), 
sim=RandomSimilarity(queryNorm=false,coord=yes): {dummy=IB LL-D2}, locale=ru, 
timezone=Africa/Algiers
   [junit4]   2> NOTE: Windows 10 10.0 amd64/Oracle Corporation 1.8.0_72 
(64-bit)/cpus=3,threads=1,free=79803344,total=127401984
   [junit4]   2> NOTE: All tests run in this JVM: [TestCJKAnalyzer, 
TestMappingCharFilter, TestArabicNormalizationFilter, TestArmenianAnalyzer, 
TestWordlistLoader, TestItalianLightStemFilter, 
TestSwedishLightStemFilterFactory, TestLithuanianStemming, 
TestPrefixAwareTokenFilter, TestGermanMinimalStemFilterFactory, 
TestStemmerOverrideFilterFactory, TestGermanNormalizationFilter, 
TestPersianAnalyzer, TestScandinavianNormalizationFilterFactory, 
TestCommonGramsQueryFilterFactory, TestPortugueseStemFilterFactory, 
TestStopFilter, TestPatternReplaceCharFilter, TestMappingCharFilterFactory, 
TestPatternTokenizer, TestFrenchMinimalStemFilterFactory, 
TestRemoveDuplicatesTokenFilter, TestGalicianStemFilter, TestArabicAnalyzer, 
TestUnicodeWhitespaceTokenizer, TestLimitTokenCountAnalyzer, 
TestGalicianMinimalStemFilter, TestPrefixAndSuffixAwareTokenFilter, 
TestCharacterUtils, TestZeroAffix2, TestArabicFilters, WikipediaTokenizerTest, 
EdgeNGramTokenFilterTest, TestPatternCaptureGroupTokenFilter, 
TestCzechAnalyzer, TypeAsPayloadTokenFilterTest, TestRussianLightStemFilter, 
TestPortugueseLightStemFilter, TestFilesystemResourceLoader, 
TestWordnetSynonymParser, TestCondition, TestFinnishAnalyzer, 
TestPortugueseAnalyzer]
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestPortugueseAnalyzer -Dtests.seed=F1867C3ABAF2F8B5 
-Dtests.slow=true -Dtests.locale=ru 

[JENKINS] Lucene-Solr-NightlyTests-6.0 - Build # 1 - Failure

2016-03-30 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.0/1/

3 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Timeout occured while waiting response from server at: https://127.0.0.1:60245

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: https://127.0.0.1:60245
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:588)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.makeRequest(CollectionsAPIDistributedZkTest.java:382)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:498)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:169)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:996)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:971)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
  

[jira] [Commented] (LUCENE-7150) geo3d public APIs should match the 2D apis?

2016-03-30 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218349#comment-15218349
 ] 

Michael McCandless commented on LUCENE-7150:


Ahh OK phew :)

Yes, I think (?) the 2D APIs require this.  Can you fix the geo3d api to also 
require it (check it, then I guess remove it when forwarding to the geom impls)?
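
For reference, a minimal sketch of the closed-ring convention in question, assuming the 2D {{org.apache.lucene.geo.Polygon}} constructor that takes parallel latitude/longitude arrays (the geo3d side is omitted here):

{code:java}
import org.apache.lucene.geo.Polygon;

public class ClosedRingSketch {
  public static void main(String[] args) {
    // 2D convention: the ring is explicitly closed, i.e. the last vertex repeats the first.
    // (In geo3d the closing vertex is implied, so it would be left off.)
    double[] lats = {0.0, 0.0, 1.0, 1.0, 0.0};
    double[] lons = {0.0, 1.0, 1.0, 0.0, 0.0};
    Polygon box = new Polygon(lats, lons);
    System.out.println(box);
  }
}
{code}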

> geo3d public APIs should match the 2D apis?
> ---
>
> Key: LUCENE-7150
> URL: https://issues.apache.org/jira/browse/LUCENE-7150
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Attachments: LUCENE-7150-sphere.patch, LUCENE-7150.patch, 
> LUCENE-7150.patch, LUCENE-7150.patch, LUCENE-7150.patch
>
>
> I'm struggling to benchmark the equivalent to 
> {{LatLonPoint.newDistanceQuery}} in the geo3d world.
> Ideally, I think we'd have a {{Geo3DPoint.newDistanceQuery}}?  And it would 
> take degrees, not radians, and radiusMeters, not an angle?
> And if I index and search using {{PlanetModel.SPHERE}} I think it should 
> ideally give the same results as {{LatLonPoint.newDistanceQuery}}, which uses 
> haversin.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-7150) geo3d public APIs should match the 2D apis?

2016-03-30 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218313#comment-15218313
 ] 

Karl Wright edited comment on LUCENE-7150 at 3/30/16 5:12 PM:
--

bq. What does this mean...?

It means you have two adjacent points that are identical.

[~mikemccand] Looking at the values passed, it appears that the last point is 
the same as the first.  In geo3d it is unnecessary to specify the last point; 
it's implied.  The javadoc for polygons elsewhere did not make it clear that 
you needed to do this -- but is this how the API works for 2D?  If so it's easy 
to correct for.



was (Author: kwri...@metacarta.com):
bq. What does this mean...?

It means that your polygon has two adjacent sides that are essentially colinear 
with each other.  I'll have to think of a way to address this in a less ugly 
fashion.


> geo3d public APIs should match the 2D apis?
> ---
>
> Key: LUCENE-7150
> URL: https://issues.apache.org/jira/browse/LUCENE-7150
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Attachments: LUCENE-7150-sphere.patch, LUCENE-7150.patch, 
> LUCENE-7150.patch, LUCENE-7150.patch, LUCENE-7150.patch
>
>
> I'm struggling to benchmark the equivalent to 
> {{LatLonPoint.newDistanceQuery}} in the geo3d world.
> Ideally, I think we'd have a {{Geo3DPoint.newDistanceQuery}}?  And it would 
> take degrees, not radians, and radiusMeters, not an angle?
> And if I index and search using {{PlanetModel.SPHERE}} I think it should 
> ideally give the same results as {{LatLonPoint.newDistanceQuery}}, which uses 
> haversin.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6938) Convert build to work with Git rather than SVN.

2016-03-30 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218339#comment-15218339
 ] 

Shawn Heisey commented on LUCENE-6938:
--

Thanks, [~mikemccand].  That last commit made the script run with a "5.5.1" 
argument.

One problem, though: It added the new LUCENE_5_5_1 version as already 
deprecated, which caused {{ant test -Dtestcase=TestVersion}} to fail.  Removing 
the deprecation allowed the test to pass.

> Convert build to work with Git rather than SVN.
> ---
>
> Key: LUCENE-6938
> URL: https://issues.apache.org/jira/browse/LUCENE-6938
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: master, 5.x
>
> Attachments: LUCENE-6938-1.patch, LUCENE-6938-wc-checker.patch, 
> LUCENE-6938-wc-checker.patch, LUCENE-6938.patch, LUCENE-6938.patch, 
> LUCENE-6938.patch, LUCENE-6938.patch
>
>
> We assume an SVN checkout in parts of our build and will need to move to 
> assuming a Git checkout.
> Patches against https://github.com/dweiss/lucene-solr-svn2git from 
> LUCENE-6933.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8903) Move SolrJ DateUtil to Extraction module as ExtractionDateUtil

2016-03-30 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218327#comment-15218327
 ] 

Steve Rowe commented on SOLR-8903:
--

+1, LGTM

FYI I had to apply the patch using IntelliJ - {{git apply}} (git v2.7.4) 
doesn't grok the file rename/move syntax that is used in this IntelliJ-produced 
patch - here are the errors I got:

{noformat}
error: 
solr/contrib/extraction/src/test/org/apache/solr/handler/extraction/TestExtractionDateUtil.java:
 No such file or directory
error: 
solr/contrib/extraction/src/java/org/apache/solr/handler/extraction/ExtractionDateUtil.java:
 No such file or directory
{noformat}

Also, this patch (and the SOLR-8904 patch) don't apply at the top-level of the 
project - I had to run {{git apply}} under {{solr/}}.

> Move SolrJ DateUtil to Extraction module as ExtractionDateUtil
> --
>
> Key: SOLR-8903
> URL: https://issues.apache.org/jira/browse/SOLR-8903
> Project: Solr
>  Issue Type: Task
>  Components: SolrJ
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 6.0
>
> Attachments: SOLR_8903.patch, SOLR_8903_DateUtil_deprecate.patch
>
>
> SolrJ doesn't need a DateUtil class, particularly since we're on Java 8 and 
> can simply use {{new Date(Instant.parse(d).toEpochMilli());}} for parsing and 
> {{DateTimeFormatter.ISO_INSTANT.format(d.toInstant())}} for formatting.  Yes, 
> they are threadsafe.  I propose that we deprecate DateUtil from SolrJ, or 
> perhaps outright remove it from SolrJ for Solr 6.  The only SolrJ calls into 
> this class are to essentially use it to format or parse in the ISO standard 
> format.
> I also think we should move it to the "extraction" (SolrCell) module and name 
> it something like ExtractionDateUtil.  See, this class has a parse method 
> taking a list of formats, and there's a static list of them taken from 
> HttpClient's DateUtil.  DateUtil's original commit was SOLR-284 to be used by 
> SolrCell, and SolrCell wants this feature.  So I think it should move there.
> There are a few other uses:
> * Morphlines uses it, but morphlines depends on the extraction module so it 
> could just as well access it if we move it there.
> * The ValueAugmenterFactory (a doc transformer).  I really doubt whoever 
> added it realized that DateUtil.parseDate would try a bunch of formats 
> instead of only supporting the ISO canonical format.  So I think we should 
> just remove this reference.
> * DateFormatUtil.parseMathLenient falls back on this, and this method is in 
> turn called by just one caller -- DateValueSourceParser, registered as 
> {{ms}}.  I don't think we need leniency in use of this function query; values 
> given to ms should be computer generated in the ISO format.
> 
> edit: added ms().
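
For readers unfamiliar with the multi-format behavior being moved, the "parse method taking a list of formats" idea amounts to something like the sketch below. It is illustrative only (made-up class name and a tiny pattern list), not the real DateUtil, whose list is borrowed from HttpClient:

{code:java}
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Arrays;
import java.util.Date;
import java.util.List;
import java.util.Locale;
import java.util.TimeZone;

public class MultiFormatDateParseSketch {
  // Illustrative pattern list only; the real class keeps a longer static list.
  private static final List<String> PATTERNS = Arrays.asList(
      "EEE, dd MMM yyyy HH:mm:ss zzz",    // RFC 1123 style
      "yyyy-MM-dd'T'HH:mm:ss'Z'");        // ISO-like fallback

  static Date parseLenient(String value) throws ParseException {
    ParseException last = null;
    for (String pattern : PATTERNS) {
      SimpleDateFormat fmt = new SimpleDateFormat(pattern, Locale.ROOT);
      fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
      try {
        return fmt.parse(value);
      } catch (ParseException e) {
        last = e; // remember the failure and try the next pattern
      }
    }
    if (last != null) {
      throw last;
    }
    throw new ParseException("no patterns configured", 0);
  }
}
{code}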



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


