[jira] [Comment Edited] (SOLR-7925) Implement indexing from gzip format file

2015-09-10 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14738274#comment-14738274
 ] 

Song Hyonwoo edited comment on SOLR-7925 at 9/10/15 6:57 AM:
-

This patch helps save network bandwidth when you send update files to a remote 
Solr server.
If you frequently need to send large files to a remote Solr instance, this 
patch lets you upload them in gzipped form. 
It is especially useful when your network traffic is already busy.

You can test it like this:
$ cd solr/core
$ ant test -Dtestcase=GZipCompressedUpdateRequestHandlerTest
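
For illustration, here is a minimal sketch (not part of the patch) of a client 
that gzips a plain JSON update file on the fly and streams it to the handler 
described in the issue description below; the host, port, collection and file 
names are placeholders.

{code}
import java.io.FileInputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.zip.GZIPOutputStream;

public class GzipUpdateClient {
    public static void main(String[] args) throws Exception {
        // Path, header and parameters follow the handler described in the issue description.
        URL url = new URL("http://localhost:8080/solr/collection1/update/compress/gzip"
                + "?update.contentType=application/json&commit=true");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/gzip");
        // Compress the plain JSON file while streaming it to the server.
        try (InputStream in = new FileInputStream("data.json");
             OutputStream out = new GZIPOutputStream(conn.getOutputStream())) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}
{code}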

Thanks.



was (Author: song hyonwoo):
This patch will help to save network bandwidth when you update file to remote 
solr server.
If you need to update big file frequently to remote solr, you can update the 
file as gzipped format with this patch. 
If your system's network traffic is quite busy this patch is useful to save 
network bandwidth.

You can test it like this.
$ cd solr/core
$ ant test -Dtestcase=GZipCompressedUpdateRequestHandlerTest

Thanks.


> Implement indexing from gzip format file
> 
>
> Key: SOLR-7925
> URL: https://issues.apache.org/jira/browse/SOLR-7925
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Affects Versions: 5.2.1
>Reporter: Song Hyonwoo
>Priority: Minor
>  Labels: patch
> Attachments: SOLR-7925.patch
>
>
> This adds support for indexing gzipped JSON, XML and CSV update files.
> The request path is "update/compress/gzip" instead of "update", used together 
> with the "update.contentType" parameter and a "Content-Type: application/gzip" 
> header field.
> The following is a sample request using the curl command (note: use 
> --data-binary, not --data).
> curl "http://localhost:8080/solr/collection1/update/compress/gzip?update.contentType=application/json&commit=true" \
>   -H 'Content-Type: application/gzip' --data-binary @data.json.gz
> To activate this function you need to add the following request handler 
> definition to solrconfig.xml:
>class="org.apache.solr.handler.CompressedUpdateRequestHandler">
> 
>   application/gzip
> 
>   






[jira] [Updated] (SOLR-7833) Add new Solr book 'Solr Cookbook - Third Edition' to selection of Solr books and news.

2015-09-10 Thread Zico Fernandes (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zico Fernandes updated SOLR-7833:
-
Description: 
Rafał Kuć is proud to finally announce the book Solr Cookbook - Third Edition 
by Packt Publishing. This edition will specifically appeal to developers who 
wish to quickly get to grips with the changes and new features of Apache Solr 
5. 
Solr Cookbook - Third Edition has over 100 easy to follow recipes to solve 
real-time problems related to Apache Solr 4.x and 5.0 effectively. Starting 
with vital information on setting up Solr, the developer will quickly progress 
to analyzing their text data through querying and performance improvement. 
Finally, they will explore real-life situations, where Solr can be used to 
simplify daily collection handling.
With numerous practical chapters centered on important Solr techniques and 
methods Solr Cookbook - Third Edition will guide intermediate Solr Developers 
who are willing to learn and implement Pro-level practices, techniques, and 
solutions.
Click here to read more about the Solr Cookbook - Third Edition: 
http://bit.ly/1Q2AGS8

  was:
Rafał Kuć is proud to finally announce the book Solr Cookbook - Third Edition 
by Packt Publishing. This edition will specifically appeal to developers who 
wish to quickly get to grips with the changes and new features of Apache Solr 
5. 
Solr Cookbook - Third Edition has over 100 easy to follow recipes to solve 
real-time problems related to Apache Solr 4.x and 5.0 effectively. Starting 
with vital information on setting up Solr, the developer will quickly progress 
to analyzing their text data through querying and performance improvement. 
Finally, they will explore real-life situations, where Solr can be used to 
simplify daily collection handling.
With numerous practical chapters centered on important Solr techniques and 
methods Solr Cookbook - Third Edition will guide intermediate Solr Developers 
who are willing to learn and implement Pro-level practices, techniques, and 
solutions.
Click here to read more about the Solr Cookbook - Third Edition: 
https://www.packtpub.com/big-data-and-business-intelligence/solr-cookbook-third-edition


> Add new Solr book 'Solr Cookbook - Third Edition' to selection of Solr books 
> and news.
> --
>
> Key: SOLR-7833
> URL: https://issues.apache.org/jira/browse/SOLR-7833
> Project: Solr
>  Issue Type: Task
>Reporter: Zico Fernandes
> Attachments: SOLR-7833.patch, Solr Cookbook_Third Edition.jpg, 
> book_solr_cookbook_3ed.jpg
>
>
> Rafał Kuć is proud to finally announce the book Solr Cookbook - Third Edition 
> by Packt Publishing. This edition will specifically appeal to developers who 
> wish to quickly get to grips with the changes and new features of Apache Solr 
> 5. 
> Solr Cookbook - Third Edition has over 100 easy to follow recipes to solve 
> real-time problems related to Apache Solr 4.x and 5.0 effectively. Starting 
> with vital information on setting up Solr, the developer will quickly 
> progress to analyzing their text data through querying and performance 
> improvement. Finally, they will explore real-life situations, where Solr can 
> be used to simplify daily collection handling.
> With numerous practical chapters centered on important Solr techniques and 
> methods Solr Cookbook - Third Edition will guide intermediate Solr Developers 
> who are willing to learn and implement Pro-level practices, techniques, and 
> solutions.
> Click here to read more about the Solr Cookbook - Third Edition: 
> http://bit.ly/1Q2AGS8






[JENKINS] Lucene-Solr-5.x-Solaris (multiarch/jdk1.7.0) - Build # 39 - Failure!

2015-09-10 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Solaris/39/
Java: multiarch/jdk1.7.0 -d32 -client -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 11548 lines...]
   [junit4] ERROR: JVM J0 ended with an exception, command line: 
/usr/jdk/instances/jdk1.7.0/jre/bin/java -d32 -client -XX:+UseG1GC 
-XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/export/home/jenkins/workspace/Lucene-Solr-5.x-Solaris/heapdumps
 -XX:MaxPermSize=192m -ea -esa -Dtests.prefix=tests 
-Dtests.seed=77D96FB0945D13C1 -Xmx512M -Dtests.iters= -Dtests.verbose=false 
-Dtests.infostream=false -Dtests.codec=random -Dtests.postingsformat=random 
-Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random 
-Dtests.directory=random -Dtests.linedocsfile=europarl.lines.txt.gz 
-Dtests.luceneMatchVersion=5.4.0 -Dtests.cleanthreads=perClass 
-Djava.util.logging.config.file=/export/home/jenkins/workspace/Lucene-Solr-5.x-Solaris/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=1 -DtempDir=./temp 
-Djava.io.tmpdir=./temp 
-Djunit4.tempDir=/export/home/jenkins/workspace/Lucene-Solr-5.x-Solaris/solr/build/solr-core/test/temp
 -Dcommon.dir=/export/home/jenkins/workspace/Lucene-Solr-5.x-Solaris/lucene 
-Dclover.db.dir=/export/home/jenkins/workspace/Lucene-Solr-5.x-Solaris/lucene/build/clover/db
 
-Djava.security.policy=/export/home/jenkins/workspace/Lucene-Solr-5.x-Solaris/lucene/tools/junit4/solr-tests.policy
 -Dtests.LUCENE_VERSION=5.4.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Dtests.leaveTemporary=false -Dtests.filterstacks=true 
-Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Dfile.encoding=ISO-8859-1 -classpath 

[JENKINS-EA] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b78) - Build # 13893 - Still Failing!

2015-09-10 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/13893/
Java: 64bit/jdk1.9.0-ea-b78 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.lucene.index.TestIndexWriterExceptions.testNoLostDeletesOrUpdates

Error Message:
this IndexWriter is closed

Stack Trace:
org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed
at 
__randomizedtesting.SeedInfo.seed([3422F58316636777:5D59F708E0CDAD37]:0)
at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:719)
at org.apache.lucene.index.IndexWriter.getConfig(IndexWriter.java:1046)
at 
org.apache.lucene.index.RandomIndexWriter.getReader(RandomIndexWriter.java:300)
at 
org.apache.lucene.index.TestIndexWriterExceptions.testNoLostDeletesOrUpdates(TestIndexWriterExceptions.java:2079)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:504)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:746)
Caused by: org.apache.lucene.store.MockDirectoryWrapper$FakeIOException
at 
org.apache.lucene.index.TestIndexWriterExceptions$11.eval(TestIndexWriterExceptions.java:1923)
at 
org.apache.lucene.store.MockDirectoryWrapper.maybeThrowDeterministicException(MockDirectoryWrapper.java:958)
at 
org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:634)
at 

[jira] [Commented] (LUCENE-6778) Add GeoPointDistanceRangeQuery support for GeoPointField types

2015-09-10 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14738424#comment-14738424
 ] 

Michael McCandless commented on LUCENE-6778:


This looks nice [~nknize]!  I like how simple the query is, just rewriting to a 
BQ that excludes the {{minRadius}} distance points.

When the {{radius}} is too big, instead of doing the "whole world bbox query", 
couldn't we just do a {{MatchAllDocsQuery}}?  Should be the same thing but 
faster?

I know BQ effectively rewrites correctly, but instead of 
{{BooleanClause.Occur.SHOULD}} can you use {{MUST}}, for the outer radius 
query?  It just makes it clear that we have a {{MUST}} and a {{MUST_NOT}} 
clause.

In the randomized test, instead of using the bbox to derive a radius, why not 
just make a random radius to begin with (this is pre-existing)?

Can you please make this an if statement instead?

{noformat}
+    query = (rangeQuery) ? new GeoPointDistanceRangeQuery(FIELD_NAME, centerLon, centerLat, radius, radiusMax) :
+        new GeoPointDistanceQuery(FIELD_NAME, centerLon, centerLat, radius);
{noformat}
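
For example, a sketch of that if/else form, using the same names as the snippet 
above (assuming {{query}} and the other variables are declared as in the patch):

{code}
// Same logic as the ternary above, written as an explicit if/else.
if (rangeQuery) {
  query = new GeoPointDistanceRangeQuery(FIELD_NAME, centerLon, centerLat, radius, radiusMax);
} else {
  query = new GeoPointDistanceQuery(FIELD_NAME, centerLon, centerLat, radius);
}
{code}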

Thanks.

> Add GeoPointDistanceRangeQuery support for GeoPointField types
> --
>
> Key: LUCENE-6778
> URL: https://issues.apache.org/jira/browse/LUCENE-6778
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Nicholas Knize
> Attachments: LUCENE-6778.patch
>
>
> GeoPointDistanceQuery currently handles a single point distance. This 
> improvement adds a GeoPointDistanceRangeQuery for supporting use cases such 
> as: find all points between 10km and 20km of a known location. 






Re: [jira] [Commented] (SOLR-8027) Reference guide instructions for converting an existing install to SolrCloud

2015-09-10 Thread Varun Thacker
Hi Erick,

Your comment does not show up on the Jira.

I also updated the MERGEINDEXES documentation (
https://cwiki.apache.org/confluence/display/solr/Merging+Indexes ) to
reflect that the WAR is pre-extracted from Solr 5.3 onwards.
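
As an aside, here is a rough sketch of driving that merge plus the follow-up
commit (which, as Erick notes below, is easy to forget) against the plain HTTP
APIs; the host, core name and index path below are made up:

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class MergeIndexesExample {

    // Issue a GET, drain the body, and print the status code.
    static void get(String url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        try (InputStream in = conn.getInputStream()) {
            while (in.read() != -1) {
                // discard the response body; a real client would inspect it
            }
        }
        System.out.println(url + " -> HTTP " + conn.getResponseCode());
    }

    public static void main(String[] args) throws Exception {
        // Merge an existing on-disk index into the target core...
        get("http://localhost:8983/solr/admin/cores?action=MERGEINDEXES"
                + "&core=techproducts_shard1_replica1"
                + "&indexDir=/path/to/old/index");
        // ...then commit, or the merged documents will not be searchable.
        get("http://localhost:8983/solr/techproducts_shard1_replica1/update?commit=true");
    }
}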

On Thu, Sep 10, 2015 at 5:39 AM, Erick Erickson 
wrote:

> Hmmm.. It'd be great to have this documented!
>
> I gave this a quick shot just to see if it'd do what I'd expect and
> it's not actually that hard:
>
> 0> created the "techproducts" non-solr-cloud collection
> 1> shut down Solr
> 2> moved the entire directory "somewhere else", not in the Solr tree
> to simulate, say, bringing it over from some other machine.
> 3> brought up ZK and pushed the configuration file up
> 4> started SolrCloud (nothing is in it as you'd expect)
> 5> created a new collection with the config from step <3> (name irrelevant)
> 6> shut down the cloud
> 7> Copied just the _contents_ of the index directory from step <0> to
> the index directory created in <5>
> 8> restarted SolrCloud
>
> And all was well.
>
> I also tried just creating a new collection (1 shard) and using
> MERGEINDEXES with the indexDir option which also worked. I think I
> like that a little better, there are fewer places to mess things up,
> and it doesn't require bouncing SolrCloud. The first time I tried it I
> didn't manage to issue the commit, so that should be called out. Also
> should call out checking that the doc count is right since if a person
> gets nervous and issues the merge N times you have Nx the docs...
>
> You'd want ADDREPLICAs once you were satisfied you'd moved the index
> correctly of course. And hope that the config you pushed up was
> actually OK. Perhaps something here about just moving the relevant
> parts of schema.xml rather than the whole (old) config dir? Or maybe
> even proofing things out on 5x first?
>
> Of course, all this assuming you couldn't just re-index fresh ;).
>
> FWIW,
> Erick
>
>
>
> On Wed, Sep 9, 2015 at 4:31 PM, Shawn Heisey (JIRA) 
> wrote:
> >
> > [
> https://issues.apache.org/jira/browse/SOLR-8027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14737799#comment-14737799
> ]
> >
> > Shawn Heisey commented on SOLR-8027:
> > 
> >
> > I can try out some things tonight when I get home, assuming the honeydew
> list is not extreme.
> >
> >> Reference guide instructions for converting an existing install to
> SolrCloud
> >>
> 
> >>
> >> Key: SOLR-8027
> >> URL: https://issues.apache.org/jira/browse/SOLR-8027
> >> Project: Solr
> >>  Issue Type: Improvement
> >>  Components: documentation
> >>Reporter: Shawn Heisey
> >>
> >> I have absolutely no idea where to begin with this, but it's a definite
> hole in our documentation.  I'd like to have some instructions that will
> help somebody convert a non-cloud install to SolrCloud.  Ideally they would
> start with a typical directory structure with one or more cores and end
> with cores named foo_shardN_replicaM.
> >> As far as I know, Solr doesn't actually let non-cloud cores coexist
> with cloud cores.  I once tried to create a non-cloud core on a cloud
> install, and couldn't do it.
> >
> >
> >
> > --
> > This message was sent by Atlassian JIRA
> > (v6.3.4#6332)
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > For additional commands, e-mail: dev-h...@lucene.apache.org
> >
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


-- 


Regards,
Varun Thacker


Re: [JENKINS-EA] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b78) - Build # 13893 - Still Failing!

2015-09-10 Thread Michael McCandless
I'll dig ... I suspect it's caused by the recent change where IW now
considers any unexpected merge exception to be tragic.

Mike McCandless

http://blog.mikemccandless.com


On Thu, Sep 10, 2015 at 2:59 AM, Policeman Jenkins Server
 wrote:
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/13893/
> Java: 64bit/jdk1.9.0-ea-b78 -XX:-UseCompressedOops -XX:+UseG1GC
>
> 1 tests failed.
> FAILED:  
> org.apache.lucene.index.TestIndexWriterExceptions.testNoLostDeletesOrUpdates
>
> Error Message:
> this IndexWriter is closed
>
> Stack Trace:
> org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed
> at 
> __randomizedtesting.SeedInfo.seed([3422F58316636777:5D59F708E0CDAD37]:0)
> at 
> org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:719)
> at 
> org.apache.lucene.index.IndexWriter.getConfig(IndexWriter.java:1046)
> at 
> org.apache.lucene.index.RandomIndexWriter.getReader(RandomIndexWriter.java:300)
> at 
> org.apache.lucene.index.TestIndexWriterExceptions.testNoLostDeletesOrUpdates(TestIndexWriterExceptions.java:2079)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:504)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
> at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
> at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
> at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
> at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
> at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
> at 
> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
> at java.lang.Thread.run(Thread.java:746)
> Caused by: 

[jira] [Updated] (LUCENE-6698) Add BKDDistanceQuery

2015-09-10 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-6698:
---
Attachment: LUCENE-6698.patch

Work-in-progress patch, but the test fails, I *think* because 
{{GeoUtils.rectCrossesCircle}} is buggy.

I reduced it to this small test case, which I think should pass, unless I'm 
using the {{GeoUtils}} API incorrectly:

{noformat}
  public void testRectCrossesCircle() throws Exception {
    assertTrue(GeoUtils.rectCrossesCircle(-180, -90, 180, 0.0, 0.667, 0.0, 88000.0));
  }
{noformat}


> Add BKDDistanceQuery
> 
>
> Key: LUCENE-6698
> URL: https://issues.apache.org/jira/browse/LUCENE-6698
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Attachments: LUCENE-6698.patch
>
>
> Our BKD tree impl should be very fast at doing "distance from lat/lon center 
> point < X" query.
> I haven't started this ... [~nknize] expressed interest in working on it.






[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 953 - Still Failing

2015-09-10 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/953/

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.lucene.index.IndexSortingTest

Error Message:
access denied ("java.lang.RuntimePermission" "accessClassInPackage.sun.nio.ch")

Stack Trace:
java.security.AccessControlException: access denied 
("java.lang.RuntimePermission" "accessClassInPackage.sun.nio.ch")
at __randomizedtesting.SeedInfo.seed([4146977D8265D175]:0)
at 
java.security.AccessControlContext.checkPermission(AccessControlContext.java:372)
at 
java.security.AccessController.checkPermission(AccessController.java:559)
at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
at 
java.lang.SecurityManager.checkPackageAccess(SecurityManager.java:1525)
at java.lang.Class.checkPackageAccess(Class.java:2309)
at java.lang.Class.checkMemberAccess(Class.java:2289)
at java.lang.Class.getDeclaredFields(Class.java:1810)
at 
com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.createCacheEntry(RamUsageEstimator.java:573)
at 
com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.measureSizeOf(RamUsageEstimator.java:537)
at 
com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.sizeOfAll(RamUsageEstimator.java:385)
at 
com.carrotsearch.randomizedtesting.rules.StaticFieldsInvariantRule$1.afterAlways(StaticFieldsInvariantRule.java:108)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 8052 lines...]
   [junit4] Suite: org.apache.lucene.index.IndexSortingTest
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene53): 
{term_vectors=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
id=PostingsFormat(name=MockRandom), positions=FSTOrd50, 
docs=PostingsFormat(name=MockRandom), norm=PostingsFormat(name=Memory 
doPackFST= true)}, docValues:{sorted_set=DocValuesFormat(name=Memory), 
numeric=DocValuesFormat(name=Lucene50), binary=DocValuesFormat(name=Lucene50), 
sorted_numeric=DocValuesFormat(name=Asserting), 
sorted=DocValuesFormat(name=Lucene50)}, sim=DefaultSimilarity, locale=be_BY, 
timezone=America/Los_Angeles
   [junit4]   2> NOTE: Linux 3.13.0-52-generic amd64/Oracle Corporation 
1.7.0_72 (64-bit)/cpus=4,threads=1,free=188195696,total=326107136
   [junit4]   2> NOTE: All tests run in this JVM: [TestFieldCacheReopen, 
TestFieldCacheWithThreads, TestFieldCacheSortRandom, TestNumericTerms32, 
TestIndexSplitter, TestPKIndexSplitter, TestMultiPassIndexSplitter, 
TestHighFreqTerms, SweetSpotSimilarityTest, TestLazyDocument, IndexSortingTest]
   [junit4]   2> NOTE: download the large Jenkins line-docs file by running 
'ant get-jenkins-line-docs' in the lucene directory.
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=IndexSortingTest 
-Dtests.seed=4146977D8265D175 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/x1/jenkins/lucene-data/enwiki.random.lines.txt 
-Dtests.locale=be_BY -Dtests.timezone=America/Los_Angeles -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1
   [junit4] ERROR   0.00s J1 | IndexSortingTest (suite) <<<
   [junit4]> Throwable #1: java.security.AccessControlException: access 
denied ("java.lang.RuntimePermission" "accessClassInPackage.sun.nio.ch")
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([4146977D8265D175]:0)
   [junit4]>at 
java.security.AccessControlContext.checkPermission(AccessControlContext.java:372)
   [junit4]>at 
java.security.AccessController.checkPermission(AccessController.java:559)
   [junit4]>at 
java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
   [junit4]>at 
java.lang.SecurityManager.checkPackageAccess(SecurityManager.java:1525)
   [junit4]>at java.lang.Class.checkPackageAccess(Class.java:2309)
   [junit4]>at java.lang.Class.checkMemberAccess(Class.java:2289)
   [junit4]>at java.lang.Class.getDeclaredFields(Class.java:1810)
   

Re: [JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 953 - Still Failing

2015-09-10 Thread Dawid Weiss
RamUsageEstimator tries to measure something that it doesn't have
access to, huh?

java.security.AccessControlException: access denied
("java.lang.RuntimePermission" "accessClassInPackage.sun.nio.ch")
at __randomizedtesting.SeedInfo.seed([4146977D8265D175]:0)
at 
java.security.AccessControlContext.checkPermission(AccessControlContext.java:372)
at 
java.security.AccessController.checkPermission(AccessController.java:559)
at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
at 
java.lang.SecurityManager.checkPackageAccess(SecurityManager.java:1525)
at java.lang.Class.checkPackageAccess(Class.java:2309)
at java.lang.Class.checkMemberAccess(Class.java:2289)
at java.lang.Class.getDeclaredFields(Class.java:1810)
at 
com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.createCacheEntry(RamUsageEstimator.java:573)

On Thu, Sep 10, 2015 at 11:49 AM, Apache Jenkins Server
 wrote:
> Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/953/
>
> 1 tests failed.
> FAILED:  junit.framework.TestSuite.org.apache.lucene.index.IndexSortingTest
>
> Error Message:
> access denied ("java.lang.RuntimePermission" 
> "accessClassInPackage.sun.nio.ch")
>
> Stack Trace:
> java.security.AccessControlException: access denied 
> ("java.lang.RuntimePermission" "accessClassInPackage.sun.nio.ch")
> at __randomizedtesting.SeedInfo.seed([4146977D8265D175]:0)
> at 
> java.security.AccessControlContext.checkPermission(AccessControlContext.java:372)
> at 
> java.security.AccessController.checkPermission(AccessController.java:559)
> at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
> at 
> java.lang.SecurityManager.checkPackageAccess(SecurityManager.java:1525)
> at java.lang.Class.checkPackageAccess(Class.java:2309)
> at java.lang.Class.checkMemberAccess(Class.java:2289)
> at java.lang.Class.getDeclaredFields(Class.java:1810)
> at 
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.createCacheEntry(RamUsageEstimator.java:573)
> at 
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.measureSizeOf(RamUsageEstimator.java:537)
> at 
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.sizeOfAll(RamUsageEstimator.java:385)
> at 
> com.carrotsearch.randomizedtesting.rules.StaticFieldsInvariantRule$1.afterAlways(StaticFieldsInvariantRule.java:108)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
> at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
> at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
> at 
> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
> at java.lang.Thread.run(Thread.java:745)
>
>
>
>
> Build Log:
> [...truncated 8052 lines...]
>[junit4] Suite: org.apache.lucene.index.IndexSortingTest
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene53): 
> {term_vectors=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
> id=PostingsFormat(name=MockRandom), positions=FSTOrd50, 
> docs=PostingsFormat(name=MockRandom), norm=PostingsFormat(name=Memory 
> doPackFST= true)}, docValues:{sorted_set=DocValuesFormat(name=Memory), 
> numeric=DocValuesFormat(name=Lucene50), 
> binary=DocValuesFormat(name=Lucene50), 
> sorted_numeric=DocValuesFormat(name=Asserting), 
> sorted=DocValuesFormat(name=Lucene50)}, sim=DefaultSimilarity, locale=be_BY, 
> timezone=America/Los_Angeles
>[junit4]   2> NOTE: Linux 3.13.0-52-generic amd64/Oracle Corporation 
> 1.7.0_72 (64-bit)/cpus=4,threads=1,free=188195696,total=326107136
>[junit4]   2> NOTE: All tests run in this JVM: [TestFieldCacheReopen, 
> TestFieldCacheWithThreads, TestFieldCacheSortRandom, TestNumericTerms32, 
> TestIndexSplitter, TestPKIndexSplitter, TestMultiPassIndexSplitter, 
> TestHighFreqTerms, SweetSpotSimilarityTest, TestLazyDocument, 
> IndexSortingTest]
>[junit4]   2> NOTE: download the large Jenkins line-docs file by running 
> 'ant get-jenkins-line-docs' in the lucene directory.
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=IndexSortingTest 
> -Dtests.seed=4146977D8265D175 -Dtests.multiplier=2 

[jira] [Assigned] (SOLR-8026) DistribJoinFromCollectionTest test failures

2015-09-10 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev reassigned SOLR-8026:
--

Assignee: Mikhail Khludnev

> DistribJoinFromCollectionTest test failures
> ---
>
> Key: SOLR-8026
> URL: https://issues.apache.org/jira/browse/SOLR-8026
> Project: Solr
>  Issue Type: Bug
>Affects Versions: Trunk
>Reporter: Joel Bernstein
>Assignee: Mikhail Khludnev
> Attachments: SOLR-8026.patch
>
>
> Trunk DistribJoinFromCollectionTest is failing for me locally and appears to be 
> failing on jenkins as well. Here is the error from my local machine.
> [junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=DistribJoinFromCollectionTest -Dtests.method=test 
> -Dtests.seed=5C8C1B007BE0841E -Dtests.slow=true -Dtests.locale=it 
> -Dtests.timezone=Australia/Melbourne -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] FAILURE 44.4s | DistribJoinFromCollectionTest.test <<<
>[junit4]> Throwable #1: java.lang.AssertionError: 
>[junit4]> Expected: not "1.0"
>[junit4]>  got: "1.0"
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([5C8C1B007BE0841E:D4D824DAD51CE9E6]:0)
>[junit4]>  at 
> org.apache.solr.cloud.DistribJoinFromCollectionTest.assertScore(DistribJoinFromCollectionTest.java:170)
>[junit4]>  at 
> org.apache.solr.cloud.DistribJoinFromCollectionTest.testJoins(DistribJoinFromCollectionTest.java:132)
>[junit4]>  at 
> org.apache.solr.cloud.DistribJoinFromCollectionTest.test(DistribJoinFromCollectionTest.java:100)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
>[junit4]   2> 44372 INFO  
> (SUITE-DistribJoinFromCollectionTest-seed#[5C8C1B007BE0841E]-worker) 
> [n:127.0.0.1:59399_ c:from_1x2 s:shard1 r:core_node2 
> x:from_1x2_shard1_replica2] o.a.s.SolrTestCaseJ4 ###deleteCore






[jira] [Commented] (SOLR-7819) ZkController.ensureReplicaInLeaderInitiatedRecovery does not respect retryOnConnLoss

2015-09-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14738572#comment-14738572
 ] 

ASF subversion and git services commented on SOLR-7819:
---

Commit 1702213 from sha...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1702213 ]

SOLR-7819: ZK connection loss or session timeout do not stall indexing threads 
anymore and LIR activity is moved to a background thread

> ZkController.ensureReplicaInLeaderInitiatedRecovery does not respect 
> retryOnConnLoss
> 
>
> Key: SOLR-7819
> URL: https://issues.apache.org/jira/browse/SOLR-7819
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.2, 5.2.1
>Reporter: Shalin Shekhar Mangar
>  Labels: Jepsen
> Fix For: Trunk, 5.4
>
> Attachments: SOLR-7819.patch, SOLR-7819.patch, SOLR-7819.patch, 
> SOLR-7819.patch, SOLR-7819.patch, SOLR-7819.patch
>
>
> SOLR-7245 added a retryOnConnLoss parameter to 
> ZkController.ensureReplicaInLeaderInitiatedRecovery so that indexing threads 
> do not hang during a partition on ZK operations. However, some of those 
> changes were unintentionally reverted by SOLR-7336 in 5.2.
> I found this while running Jepsen tests on 5.2.1 where a hung update managed 
> to put a leader into a 'down' state (I'm still investigating and will open a 
> separate issue about this problem).






[jira] [Resolved] (SOLR-7819) ZkController.ensureReplicaInLeaderInitiatedRecovery does not respect retryOnConnLoss

2015-09-10 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-7819.
-
Resolution: Fixed
  Assignee: Shalin Shekhar Mangar

Thanks Ramkumar for the review.

> ZkController.ensureReplicaInLeaderInitiatedRecovery does not respect 
> retryOnConnLoss
> 
>
> Key: SOLR-7819
> URL: https://issues.apache.org/jira/browse/SOLR-7819
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.2, 5.2.1
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>  Labels: Jepsen
> Fix For: Trunk, 5.4
>
> Attachments: SOLR-7819.patch, SOLR-7819.patch, SOLR-7819.patch, 
> SOLR-7819.patch, SOLR-7819.patch, SOLR-7819.patch
>
>
> SOLR-7245 added a retryOnConnLoss parameter to 
> ZkController.ensureReplicaInLeaderInitiatedRecovery so that indexing threads 
> do not hang during a partition on ZK operations. However, some of those 
> changes were unintentionally reverted by SOLR-7336 in 5.2.
> I found this while running Jepsen tests on 5.2.1 where a hung update managed 
> to put a leader into a 'down' state (I'm still investigating and will open a 
> separate issue about this problem).






Re: 5.3.1 bug fix release

2015-09-10 Thread Noble Paul
I'm cutting a tag by tomorrow

On Wed, Sep 9, 2015 at 11:13 PM, Shawn Heisey  wrote:
> On 9/8/2015 7:19 AM, Noble Paul wrote:
>>
>> I would like to start the process ASAP.  I volunteer to be the RM.
>> Please let me know the list of tickets you would like to include in
>> the release and we can coordinate the rest
>>
>
> I committed SOLR-6188, then had unexpected test failures, so I reverted
> it.  I have no more issues for inclusion in 5.3.1.
>
> Thanks,
> Shawn
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>



-- 
-
Noble Paul




[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2015-09-10 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14738711#comment-14738711
 ] 

Noble Paul commented on SOLR-8029:
--

[~elyograg] We could deploy Solr at the root context {{/}}, which means {{/solr}} 
and {{/solr2}} would become paths controlled by Solr. 

The UI could live at a separate path, however.

> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 6.0
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 3 types of requests in the new API 
> * {{/solr2/<collection>/*}} : Operations on specific collections 
> * {{/solr2/_cluster/*}} : Cluster-wide operations which are not specific to 
> any collection. 
> * {{/solr2/_node/*}} : Operations on the node receiving the request. This is 
> the counterpart of the core admin API
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]






[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.7.0_80) - Build # 5112 - Failure!

2015-09-10 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/5112/
Java: 32bit/jdk1.7.0_80 -client -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=, name=collection0, 
state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=, name=collection0, state=RUNNABLE, 
group=TGRP-CollectionsAPIDistributedZkTest]
at 
__randomizedtesting.SeedInfo.seed([9400346C348B7ECE:1C540BB69A771336]:0)
Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:59017: Could not find collection : 
awholynewstresscollection_collection0_0
at __randomizedtesting.SeedInfo.seed([9400346C348B7ECE]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:560)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:372)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:325)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1099)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:895)




Build Log:
[...truncated 9808 lines...]
   [junit4] Suite: org.apache.solr.cloud.CollectionsAPIDistributedZkTest
   [junit4]   2> Creating dataDir: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.CollectionsAPIDistributedZkTest_9400346C348B7ECE-001\init-core-data-001
   [junit4]   2> 347078 INFO  
(SUITE-CollectionsAPIDistributedZkTest-seed#[9400346C348B7ECE]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false)
   [junit4]   2> 347078 INFO  
(SUITE-CollectionsAPIDistributedZkTest-seed#[9400346C348B7ECE]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 347084 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[9400346C348B7ECE]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 347085 INFO  (Thread-846) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 347085 INFO  (Thread-846) [] o.a.s.c.ZkTestServer Starting 
server
   [junit4]   2> 347181 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[9400346C348B7ECE]) [] 
o.a.s.c.ZkTestServer start zk server on port:58989
   [junit4]   2> 347181 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[9400346C348B7ECE]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 347183 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[9400346C348B7ECE]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 347187 INFO  (zkCallback-703-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@d376ad name:ZooKeeperConnection 
Watcher:127.0.0.1:58989 got event WatchedEvent state:SyncConnected type:None 
path:null path:null type:None
   [junit4]   2> 347188 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[9400346C348B7ECE]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 347188 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[9400346C348B7ECE]) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2> 347188 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[9400346C348B7ECE]) [] 
o.a.s.c.c.SolrZkClient makePath: /solr
   [junit4]   2> 347191 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[9400346C348B7ECE]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 347195 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[9400346C348B7ECE]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 347197 INFO  (zkCallback-704-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@17f name:ZooKeeperConnection 
Watcher:127.0.0.1:58989/solr got event WatchedEvent state:SyncConnected 
type:None path:null path:null type:None
   [junit4]   2> 347197 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[9400346C348B7ECE]) [] 
o.a.s.c.c.ConnectionManager 

[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2015-09-10 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14738705#comment-14738705
 ] 

Shawn Heisey commented on SOLR-8029:


I have no real wish to derail your plan, but I wondered about a possible 
wrinkle:  In order to have a /solr2 URL work, doesn't that require a completely 
separate context, and therefore a separate application from Jetty's point of 
view?  If this is true, are there any problems in getting the two to work 
together?  They would be in the same JVM, but for general security concerns I 
would hope that the servlet API keeps them pretty separate.

Something to think about ... I wonder if maybe paths under /CONTEXT/api (where 
CONTEXT is defined in the context fragment for the container and is normally 
"solr") would be a better way to separate this out.  At that point, you could 
put the new angular UI on /CONTEXT/ui.  Having separate and clear URLs for ui 
and api would make it a lot easier for a user to know that they can't put a ui 
URL into a program that expects to talk to the api.

> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 6.0
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 3 types of requests in the new API 
> * {{/solr2/<collection>/*}} : Operations on specific collections 
> * {{/solr2/_cluster/*}} : Cluster-wide operations which are not specific to 
> any collection. 
> * {{/solr2/_node/*}} : Operations on the node receiving the request. This is 
> the counterpart of the core admin API
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]






[jira] [Updated] (LUCENE-6793) NumericRangeQuery.hashCode() produces frequent collisions

2015-09-10 Thread J.B. Langston (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.B. Langston updated LUCENE-6793:
--
Description: 
We have a user who is developing a Solr plugin and needs to store 
NumericRangeQuery objects in a hash table.  They found that 
NumericRangeQuery.hashCode() produces extremely frequent collisions.  I 
understand that the contract for hashCode doesn't (and can't) guarantee unique 
hash codes for every value, but the distribution of this method seems 
particularly bad with an affinity for the hash value 897548010. Out of a set of 
31 ranges, hashCode returned 897548010 14 times. This is going to result in 
very inefficient distribution of the objects in the hash table. The standard 
"times 31" hash function recommended by Effective Java fares quite a bit 
better, although it still produces quite a few collisions.  Here's a test case 
that compares the results of the current hashCode function with the times 31 
method.  An even better method, like Murmur3 might be found here: 
http://floodyberry.com/noncryptohashzoo/

{code}
package com.company;

import org.apache.lucene.search.NumericRangeQuery;

public class Main {

public static int betterHash(NumericRangeQuery query) {
    // I can't subclass NumericRangeQuery since its constructor is private, so I can't call super and
    // had to copy and paste from the hashCode method for both MultiTermQuery and NumericRangeQuery

    // MultiTermQuery.hashCode (copied verbatim)
    final int prime = 31;
    int result = 1;
    result = prime * result + Float.floatToIntBits(query.getBoost());
    result = prime * result + query.getRewriteMethod().hashCode();
    if (query.getField() != null) result = prime * result + query.getField().hashCode();

    // NumericRangeQuery.hashCode (changed XOR with random constant to times 31)
    result = result * prime + query.getPrecisionStep();
    if (query.getMin() != null) result = result * prime + query.getMin().hashCode();
    if (query.getMax() != null) result = result * prime + query.getMax().hashCode();
    result = result * prime + Boolean.valueOf(query.includesMin()).hashCode();
    result = result * prime + Boolean.valueOf(query.includesMax()).hashCode();
    return result;
}

public static void main(String[] args) {
long previous = Long.MIN_VALUE;
long[] list = {
-9223372036854775798L,
-8608480567731124078L,
-7993589098607472357L,
-7378697629483820637L,
-6763806160360168916L,
-6148914691236517196L,
-5534023222112865475L,
-4919131752989213755L,
-4304240283865562034L,
-3689348814741910314L,
-3074457345618258593L,
-2459565876494606873L,
-1844674407370955152L,
-1229782938247303432L,
-614891469123651711L,
10L,
614891469123651730L,
1229782938247303451L,
1844674407370955171L,
2459565876494606892L,
3074457345618258612L,
3689348814741910333L,
4304240283865562053L,
4919131752989213774L,
5534023222112865494L,
6148914691236517215L,
6763806160360168935L,
7378697629483820656L,
7993589098607472376L,
8608480567731124097L,
Long.MAX_VALUE
};

for (long current : list) {
NumericRangeQuery query =  
NumericRangeQuery.newLongRange("_token_long", 8, previous, current, true, true);
System.out.println("[" + previous + " TO " + current + "]: " + 
query.hashCode() + " / " + betterHash(query));
previous = current + 1;
}
}
}
{code}

  was:
We have a user who is developing a Solr plugin and needs to store 
NumericRangeQuery objects in a hash table.  They found that 
NumericRangeQuery.hashCode() produces extremely frequent collisions.  I 
understand that the contract for hashCode doesn't (and can't) guarantee unique 
hash codes for every value, but the distribution of this method seems 
particularly bad with an affinity for the hash value 897548010. Out of a set of 
31 ranges, hashCode returned 897548010 14 times. This is going to result in 
very inefficient distribution of the objects in the hash table. The standard 
"times 31" hash function recommended by Effective Java fares quite a bit 
better, although it still produces quite a few collisions.  Here's a test case 
that compares the results of the current hashCode function with the times 31 
method.  An even better method, like Murmur3 might be found here: 
http://floodyberry.com/noncryptohashzoo/

{code}
package com.company;

import 

Re: [JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 953 - Still Failing

2015-09-10 Thread Robert Muir
It's a static leak in a Nightly test. I will fix it.

On Thu, Sep 10, 2015 at 5:59 AM, Dawid Weiss  wrote:
> RamUsageEstimator tries to measure something that is doesn't have
> access to, huh?
>
> java.security.AccessControlException: access denied
> ("java.lang.RuntimePermission" "accessClassInPackage.sun.nio.ch")
> at __randomizedtesting.SeedInfo.seed([4146977D8265D175]:0)
> at 
> java.security.AccessControlContext.checkPermission(AccessControlContext.java:372)
> at 
> java.security.AccessController.checkPermission(AccessController.java:559)
> at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
> at 
> java.lang.SecurityManager.checkPackageAccess(SecurityManager.java:1525)
> at java.lang.Class.checkPackageAccess(Class.java:2309)
> at java.lang.Class.checkMemberAccess(Class.java:2289)
> at java.lang.Class.getDeclaredFields(Class.java:1810)
> at 
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.createCacheEntry(RamUsageEstimator.java:573)
>
> On Thu, Sep 10, 2015 at 11:49 AM, Apache Jenkins Server
>  wrote:
>> Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/953/
>>
>> 1 tests failed.
>> FAILED:  junit.framework.TestSuite.org.apache.lucene.index.IndexSortingTest
>>
>> Error Message:
>> access denied ("java.lang.RuntimePermission" 
>> "accessClassInPackage.sun.nio.ch")
>>
>> Stack Trace:
>> java.security.AccessControlException: access denied 
>> ("java.lang.RuntimePermission" "accessClassInPackage.sun.nio.ch")
>> at __randomizedtesting.SeedInfo.seed([4146977D8265D175]:0)
>> at 
>> java.security.AccessControlContext.checkPermission(AccessControlContext.java:372)
>> at 
>> java.security.AccessController.checkPermission(AccessController.java:559)
>> at 
>> java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
>> at 
>> java.lang.SecurityManager.checkPackageAccess(SecurityManager.java:1525)
>> at java.lang.Class.checkPackageAccess(Class.java:2309)
>> at java.lang.Class.checkMemberAccess(Class.java:2289)
>> at java.lang.Class.getDeclaredFields(Class.java:1810)
>> at 
>> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.createCacheEntry(RamUsageEstimator.java:573)
>> at 
>> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.measureSizeOf(RamUsageEstimator.java:537)
>> at 
>> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.sizeOfAll(RamUsageEstimator.java:385)
>> at 
>> com.carrotsearch.randomizedtesting.rules.StaticFieldsInvariantRule$1.afterAlways(StaticFieldsInvariantRule.java:108)
>> at 
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
>> at 
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>> at 
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>> at 
>> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
>> at 
>> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
>> at 
>> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
>> at 
>> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
>> at 
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>> at 
>> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
>> at java.lang.Thread.run(Thread.java:745)
>>
>>
>>
>>
>> Build Log:
>> [...truncated 8052 lines...]
>>[junit4] Suite: org.apache.lucene.index.IndexSortingTest
>>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene53): 
>> {term_vectors=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
>> id=PostingsFormat(name=MockRandom), positions=FSTOrd50, 
>> docs=PostingsFormat(name=MockRandom), norm=PostingsFormat(name=Memory 
>> doPackFST= true)}, docValues:{sorted_set=DocValuesFormat(name=Memory), 
>> numeric=DocValuesFormat(name=Lucene50), 
>> binary=DocValuesFormat(name=Lucene50), 
>> sorted_numeric=DocValuesFormat(name=Asserting), 
>> sorted=DocValuesFormat(name=Lucene50)}, sim=DefaultSimilarity, locale=be_BY, 
>> timezone=America/Los_Angeles
>>[junit4]   2> NOTE: Linux 3.13.0-52-generic amd64/Oracle Corporation 
>> 1.7.0_72 (64-bit)/cpus=4,threads=1,free=188195696,total=326107136
>>[junit4]   2> NOTE: All tests run in this JVM: [TestFieldCacheReopen, 
>> TestFieldCacheWithThreads, TestFieldCacheSortRandom, TestNumericTerms32, 
>> TestIndexSplitter, TestPKIndexSplitter, TestMultiPassIndexSplitter, 
>> TestHighFreqTerms, SweetSpotSimilarityTest, TestLazyDocument, 
>> IndexSortingTest]
>>[junit4]   2> NOTE: 

Introducing Alba, a small framework to simplify Solr plugins development

2015-09-10 Thread Leonardo Foderaro
Hi everyone,

this is my first post on this list and my first open-source project, so
please don't expect too much from either of them.

I've spent the last few weeks trying to understand how to create Solr
plugins, so I started a simple project (a plugin itself) which evolved into
a small framework named Alba (the Italian word for 'sunrise'), aimed at
simplifying their development. To summarize it, each plugin is just an
annotated method:

@AlbaPlugin(name="myPluginsLibrary")
public class MyPlugins {

    @DocTransformer(name="helloworld")
    public void hello(SolrDocument doc) {
        doc.setField("message", "Hello, World!");
    }

    @FunctionQuery(name="len", description="returns the length of a string")
    public Integer len(@Param(name="string", description="the string to measure") String s) {
        return s.length();
    }
}

and this is how you call it, assuming 'author' is a valid field in your
schema:

fl=[alba name="helloworld"],alba(len,string=author),message

Plugins currently supported are:

- FunctionQuery
- ResponseWriter
- RequestHandler
- SearchComponent
- DocTransformer

Of course it's far from complete and I still have to learn a lot of
things about Solr (not to mention Java itself!); nonetheless, working on it
is a terrific learning experience for me, and I think it could evolve into
something useful.

At http://github.com/leonardofoderaro/ you can find the project with a
(still-in-progress) tutorial in the wiki and some related repos, e.g. the
plugins built in the tutorial or the script used to generate the sample
dataset.

I still have many questions about Solr, but first I'd like to ask you if
you think it's a good idea. Any feedback is very welcome.

Kind regards,
Leonardo


[jira] [Created] (SOLR-8030) Transaction log does not store the update chain used for updates

2015-09-10 Thread ludovic Boutros (JIRA)
ludovic Boutros created SOLR-8030:
-

 Summary: Transaction log does not store the update chain used for 
updates
 Key: SOLR-8030
 URL: https://issues.apache.org/jira/browse/SOLR-8030
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 5.3
Reporter: ludovic Boutros


Transaction Log does not store the update chain used during updates.

Therefore tLog uses the default update chain during log replay.

If we implement custom update logic with multiple update chains, the log replay 
could break this logic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8029) Modernize and standardize Solr APIs

2015-09-10 Thread Noble Paul (JIRA)
Noble Paul created SOLR-8029:


 Summary: Modernize and standardize Solr APIs
 Key: SOLR-8029
 URL: https://issues.apache.org/jira/browse/SOLR-8029
 Project: Solr
  Issue Type: Improvement
Affects Versions: 6.0
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 6.0


Solr APIs have organically evolved and they are sometimes inconsistent with 
each other or not in sync with the widely followed conventions of the HTTP 
protocol. Trying to make incremental changes to make them modern is like 
applying a band-aid. So, we have done a complete rethink of what the APIs should 
be. The most notable aspects of the API are as follows:
The new set of APIs will be placed under a new path {{/solr2}}. The legacy APIs 
will continue to work under the {{/solr}} path as they used to and they will be 
eventually deprecated.
There are 3 types of requests in the new API:
* {{/solr2/<collection>/*}} : Operations on a specific collection 
* {{/solr2/_cluster/*}} : Cluster-wide operations which are not specific to any 
collection 
* {{/solr2/_node/*}} : Operations on the node receiving the request. This is 
the counterpart of the core admin API

This will be released as part of a major release. Check the link given below 
for the full specification. Your comments are welcome.
[Solr API version 2 Specification | http://bit.ly/1JYsBMQ]
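
To make the new layout concrete, a hypothetical sketch of the same query issued against the 
legacy path and the proposed collection-scoped path (host, collection and parameters are made 
up; the exact new routes are defined in the linked specification):

{code}
import java.net.HttpURLConnection;
import java.net.URL;

public class ApiPathSketch {
  public static void main(String[] args) throws Exception {
    // Legacy path (continues to work) and the proposed collection-scoped path.
    String legacy   = "http://localhost:8983/solr/collection1/select?q=*:*&wt=json";
    String proposed = "http://localhost:8983/solr2/collection1/select?q=*:*&wt=json";
    for (String u : new String[] { legacy, proposed }) {
      HttpURLConnection con = (HttpURLConnection) new URL(u).openConnection();
      System.out.println(u + " -> HTTP " + con.getResponseCode());
      con.disconnect();
    }
  }
}
{code}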




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6793) NumericRangeQuery.hashCode() produces frequent collisions

2015-09-10 Thread J.B. Langston (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.B. Langston updated LUCENE-6793:
--
Description: 
We have a user who is developing a Solr plugin and needs to store 
NumericRangeQuery objects in a hash table.  They found that 
NumericRangeQuery.hashCode() produces extremely frequent collisions.  I 
understand that the contract for hashCode doesn't (and can't) guarantee unique 
hash codes for every value, but the distribution of this method seems 
particularly bad with an affinity for the hash value 897548010. Out of a set of 
31 ranges, hashCode returned 897548010 14 times. This is going to result in 
very inefficient distribution of the objects in the hash table. The standard 
"times 31" hash function recommended by Effective Java fares quite a bit 
better, although it still produces quite a few collisions.  Here's a test case 
that compares the results of the current hashCode function with the times 31 
method.  An even better method, like Murmur3 might be found here: 
http://floodyberry.com/noncryptohashzoo/

{code}
package com.company;

import org.apache.lucene.search.NumericRangeQuery;

public class Main {

    public static int betterHash(NumericRangeQuery query) {
        final int prime = 31;
        int result = 1;
        result = prime * result + Float.floatToIntBits(query.getBoost());
        result = prime * result + query.getRewriteMethod().hashCode();
        if (query.getField() != null) result = prime * result + query.getField().hashCode();

        result = result * prime + query.getPrecisionStep();
        if (query.getMin() != null) result = result * prime + query.getMin().hashCode();
        if (query.getMax() != null) result = result * prime + query.getMax().hashCode();
        result = result * prime + Boolean.valueOf(query.includesMin()).hashCode();
        result = result * prime + Boolean.valueOf(query.includesMax()).hashCode();
        return result;
    }

    public static void main(String[] args) {
        long previous = Long.MIN_VALUE;
        long[] list = {
            -9223372036854775798L,
            -8608480567731124078L,
            -7993589098607472357L,
            -7378697629483820637L,
            -6763806160360168916L,
            -6148914691236517196L,
            -5534023222112865475L,
            -4919131752989213755L,
            -4304240283865562034L,
            -3689348814741910314L,
            -3074457345618258593L,
            -2459565876494606873L,
            -1844674407370955152L,
            -1229782938247303432L,
            -614891469123651711L,
            10L,
            614891469123651730L,
            1229782938247303451L,
            1844674407370955171L,
            2459565876494606892L,
            3074457345618258612L,
            3689348814741910333L,
            4304240283865562053L,
            4919131752989213774L,
            5534023222112865494L,
            6148914691236517215L,
            6763806160360168935L,
            7378697629483820656L,
            7993589098607472376L,
            8608480567731124097L,
            Long.MAX_VALUE
        };

        for (long current : list) {
            NumericRangeQuery query =
                NumericRangeQuery.newLongRange("_token_long", 8, previous, current, true, true);
            System.out.println("[" + previous + " TO " + current + "]: " +
                query.hashCode() + " / " + betterHash(query));
            previous = current + 1;
        }
    }
}
{code}

  was:
We have a user who is developing a Solr plugin and needs to store 
NumericRangeQuery objects in a hash table.  They found that 
NumericRangeQuery.hashCode() produces extremely frequent collisions.  I 
understand that the contract for hashCode doesn't (and can't) guarantee unique 
hash codes for every value, but the distribution of this method seems 
particularly bad with an affinity for the hash value 897548010. Out of a set of 
31 ranges, hashCode returned 897548010 14 times. This is going to result in 
very inefficient distribution of the objects in the hash table. The standard 
"times 31" hash function recommended by Effective Java fares quite a bit 
better, although it still produces quite a few collisions.  Here's a test case 
that compares the results of the current hashCode function with the times 31 
method.  An even better method, like Murmur3 might be found here: 
http://floodyberry.com/noncryptohashzoo/

{code}
package com.company;

import org.apache.lucene.search.NumericRangeQuery;

public class Main {

public static int betterHash(NumericRangeQuery query) {
final int prime = 31;
int result = 1;
result = prime * result + Float.floatToIntBits(query.getBoost());
result = prime * result + query.getRewriteMethod().hashCode();
if (query.getField() 

[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_60) - Build # 14177 - Failure!

2015-09-10 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14177/
Java: 64bit/jdk1.8.0_60 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.CdcrReplicationHandlerTest.doTest

Error Message:
Captured an uncaught exception in thread: Thread[id=779, 
name=RecoveryThread-source_collection_shard1_replica1, state=RUNNABLE, 
group=TGRP-CdcrReplicationHandlerTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=779, 
name=RecoveryThread-source_collection_shard1_replica1, state=RUNNABLE, 
group=TGRP-CdcrReplicationHandlerTest]
Caused by: org.apache.solr.common.cloud.ZooKeeperException: 
at __randomizedtesting.SeedInfo.seed([6062827B2BA3024C]:0)
at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:234)
Caused by: org.apache.solr.common.SolrException: java.io.FileNotFoundException: 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CdcrReplicationHandlerTest_6062827B2BA3024C-001/jetty-002/cores/source_collection_shard1_replica1/data/tlog/tlog.008.1511930382114619392
 (No such file or directory)
at 
org.apache.solr.update.CdcrTransactionLog.reopenOutputStream(CdcrTransactionLog.java:244)
at 
org.apache.solr.update.CdcrTransactionLog.incref(CdcrTransactionLog.java:173)
at 
org.apache.solr.update.UpdateLog.getRecentUpdates(UpdateLog.java:1079)
at 
org.apache.solr.update.UpdateLog.seedBucketsWithHighestVersion(UpdateLog.java:1579)
at 
org.apache.solr.update.UpdateLog.seedBucketsWithHighestVersion(UpdateLog.java:1610)
at org.apache.solr.core.SolrCore.seedVersionBuckets(SolrCore.java:877)
at 
org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:526)
at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:227)
Caused by: java.io.FileNotFoundException: 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CdcrReplicationHandlerTest_6062827B2BA3024C-001/jetty-002/cores/source_collection_shard1_replica1/data/tlog/tlog.008.1511930382114619392
 (No such file or directory)
at java.io.RandomAccessFile.open0(Native Method)
at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
at 
org.apache.solr.update.CdcrTransactionLog.reopenOutputStream(CdcrTransactionLog.java:236)
... 7 more




Build Log:
[...truncated 9670 lines...]
   [junit4] Suite: org.apache.solr.cloud.CdcrReplicationHandlerTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.CdcrReplicationHandlerTest_6062827B2BA3024C-001/init-core-data-001
   [junit4]   2> 29546 INFO  
(SUITE-CdcrReplicationHandlerTest-seed#[6062827B2BA3024C]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false)
   [junit4]   2> 29546 INFO  
(SUITE-CdcrReplicationHandlerTest-seed#[6062827B2BA3024C]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /_/nd
   [junit4]   2> 29550 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[6062827B2BA3024C]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 29550 INFO  (Thread-92) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 29550 INFO  (Thread-92) [] o.a.s.c.ZkTestServer Starting 
server
   [junit4]   2> 29650 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[6062827B2BA3024C]) [] 
o.a.s.c.ZkTestServer start zk server on port:33024
   [junit4]   2> 29651 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[6062827B2BA3024C]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 29652 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[6062827B2BA3024C]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 29654 INFO  (zkCallback-46-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@39f10c15 
name:ZooKeeperConnection Watcher:127.0.0.1:33024 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 29654 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[6062827B2BA3024C]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 29655 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[6062827B2BA3024C]) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2> 29655 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[6062827B2BA3024C]) [] 
o.a.s.c.c.SolrZkClient makePath: /solr
   [junit4]   2> 29664 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[6062827B2BA3024C]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 29664 INFO  

[jira] [Commented] (LUCENE-6793) NumericRangeQuery.hashCode() produces frequent collisions

2015-09-10 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14738736#comment-14738736
 ] 

Adrien Grand commented on LUCENE-6793:
--

+1 to switching to the "times 31" method
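
For reference, a minimal sketch of what a times-31 based override might look like inside 
NumericRangeQuery, mirroring the reporter's betterHash(); the field names below are assumptions 
and this is not the committed patch:

{code}
@Override
public int hashCode() {
  final int prime = 31;
  int result = super.hashCode();  // boost and rewrite method, already combined with the times-31 scheme
  result = prime * result + precisionStep;
  result = prime * result + (min == null ? 0 : min.hashCode());
  result = prime * result + (max == null ? 0 : max.hashCode());
  result = prime * result + (minInclusive ? 1231 : 1237);  // Boolean.hashCode() values
  result = prime * result + (maxInclusive ? 1231 : 1237);
  return result;
}
{code}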

> NumericRangeQuery.hashCode() produces frequent collisions
> -
>
> Key: LUCENE-6793
> URL: https://issues.apache.org/jira/browse/LUCENE-6793
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.6, 5.3
>Reporter: J.B. Langston
>
> We have a user who is developing a Solr plugin and needs to store 
> NumericRangeQuery objects in a hash table.  They found that 
> NumericRangeQuery.hashCode() produces extremely frequent collisions.  I 
> understand that the contract for hashCode doesn't (and can't) guarantee 
> unique hash codes for every value, but the distribution of this method seems 
> particularly bad with an affinity for the hash value 897548010. Out of a set 
> of 31 ranges, hashCode returned 897548010 14 times. This is going to result 
> in very inefficient distribution of the objects in the hash table. The 
> standard "times 31" hash function recommended by Effective Java fares quite a 
> bit better, although it still produces quite a few collisions.  Here's a test 
> case that compares the results of the current hashCode function with the 
> times 31 method.  An even better method, like Murmur3 might be found here: 
> http://floodyberry.com/noncryptohashzoo/
> {code}
> package com.company;
> import org.apache.lucene.search.NumericRangeQuery;
> public class Main {
> public static int betterHash(NumericRangeQuery query) {
> // I can't subclass NumericRangeQuery since it's constructor is 
> private, so I can't call super and
> // had to copy and paste from the hashCode method for both 
> MultiTermQuery and NumericRangeQuery
> // MultiTermQuery.hashCode (copied verbatim)
> final int prime = 31;
> int result = 1;
> result = prime * result + Float.floatToIntBits(query.getBoost());
> result = prime * result + query.getRewriteMethod().hashCode();
> if (query.getField() != null) result = prime * result + 
> query.getField().hashCode();
> // NumericRangeQuery.hashCode (changed XOR with random constant to 
> times 31)
> result = result * prime + query.getPrecisionStep();
> if (query.getMin() != null) result = result * prime + 
> query.getMin().hashCode();
> if (query.getMax() != null) result = result * prime + 
> query.getMax().hashCode();
> result = result * prime + 
> Boolean.valueOf(query.includesMin()).hashCode();
> result = result * prime + 
> Boolean.valueOf(query.includesMax()).hashCode();
> return result;
> }
> public static void main(String[] args) {
> long previous = Long.MIN_VALUE;
> long[] list = {
> -9223372036854775798L,
> -8608480567731124078L,
> -7993589098607472357L,
> -7378697629483820637L,
> -6763806160360168916L,
> -6148914691236517196L,
> -5534023222112865475L,
> -4919131752989213755L,
> -4304240283865562034L,
> -3689348814741910314L,
> -3074457345618258593L,
> -2459565876494606873L,
> -1844674407370955152L,
> -1229782938247303432L,
> -614891469123651711L,
> 10L,
> 614891469123651730L,
> 1229782938247303451L,
> 1844674407370955171L,
> 2459565876494606892L,
> 3074457345618258612L,
> 3689348814741910333L,
> 4304240283865562053L,
> 4919131752989213774L,
> 5534023222112865494L,
> 6148914691236517215L,
> 6763806160360168935L,
> 7378697629483820656L,
> 7993589098607472376L,
> 8608480567731124097L,
> Long.MAX_VALUE
> };
> for (long current : list) {
> NumericRangeQuery query =  
> NumericRangeQuery.newLongRange("_token_long", 8, previous, current, true, 
> true);
> System.out.println("[" + previous + " TO " + current + "]: " + 
> query.hashCode() + " / " + betterHash(query));
> previous = current + 1;
> }
> }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: [JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 953 - Still Failing

2015-09-10 Thread Uwe Schindler
Yes,

I think, because the error message is very confusing, RamUsageEstimator 
should maybe catch this exception and then complain with "Class leaks a static 
instance of <class> with unknown size."
This would make it easier for developers to figure out what's wrong.
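
Roughly something like this, just as a sketch of the idea (not the actual RamUsageEstimator
code; the helper name and message text are made up):

    // Turn the confusing AccessControlException (a SecurityException) into an explicit hint.
    static java.lang.reflect.Field[] declaredFieldsOrComplain(Class<?> clazz) {
        try {
            return clazz.getDeclaredFields();
        } catch (SecurityException e) {
            throw new RuntimeException("Class " + clazz.getName()
                + " leaks a static instance of an inaccessible class (unknown size).", e);
        }
    }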

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


> -Original Message-
> From: Dawid Weiss [mailto:dawid.we...@gmail.com]
> Sent: Thursday, September 10, 2015 12:00 PM
> To: dev@lucene.apache.org
> Subject: Re: [JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 953 - Still 
> Failing
> 
> RamUsageEstimator tries to measure something that it doesn't have access
> to, huh?
> 
> java.security.AccessControlException: access denied
> ("java.lang.RuntimePermission" "accessClassInPackage.sun.nio.ch")
> at __randomizedtesting.SeedInfo.seed([4146977D8265D175]:0)
> at
> java.security.AccessControlContext.checkPermission(AccessControlContext.j
> ava:372)
> at
> java.security.AccessController.checkPermission(AccessController.java:559)
> at
> java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
> at
> java.lang.SecurityManager.checkPackageAccess(SecurityManager.java:1525)
> at java.lang.Class.checkPackageAccess(Class.java:2309)
> at java.lang.Class.checkMemberAccess(Class.java:2289)
> at java.lang.Class.getDeclaredFields(Class.java:1810)
> at
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.createCache
> Entry(RamUsageEstimator.java:573)
> 
> On Thu, Sep 10, 2015 at 11:49 AM, Apache Jenkins Server
>  wrote:
> > Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/953/
> >
> > 1 tests failed.
> > FAILED:
> > junit.framework.TestSuite.org.apache.lucene.index.IndexSortingTest
> >
> > Error Message:
> > access denied ("java.lang.RuntimePermission"
> > "accessClassInPackage.sun.nio.ch")
> >
> > Stack Trace:
> > java.security.AccessControlException: access denied
> ("java.lang.RuntimePermission" "accessClassInPackage.sun.nio.ch")
> > at __randomizedtesting.SeedInfo.seed([4146977D8265D175]:0)
> > at
> java.security.AccessControlContext.checkPermission(AccessControlContext.j
> ava:372)
> > at
> java.security.AccessController.checkPermission(AccessController.java:559)
> > at
> java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
> > at
> java.lang.SecurityManager.checkPackageAccess(SecurityManager.java:1525)
> > at java.lang.Class.checkPackageAccess(Class.java:2309)
> > at java.lang.Class.checkMemberAccess(Class.java:2289)
> > at java.lang.Class.getDeclaredFields(Class.java:1810)
> > at
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.createCache
> Entry(RamUsageEstimator.java:573)
> > at
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.measureSize
> Of(RamUsageEstimator.java:537)
> > at
> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.sizeOfAll(Ra
> mUsageEstimator.java:385)
> > at
> com.carrotsearch.randomizedtesting.rules.StaticFieldsInvariantRule$1.afterA
> lways(StaticFieldsInvariantRule.java:108)
> > at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(Stat
> ementAdapter.java:43)
> > at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(Stat
> ementAdapter.java:36)
> > at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(Stat
> ementAdapter.java:36)
> > at
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAss
> ertionsRequired.java:54)
> > at
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure
> .java:48)
> > at
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRule
> IgnoreAfterMaxFailures.java:65)
> > at
> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnore
> TestSuites.java:55)
> > at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(Stat
> ementAdapter.java:36)
> > at
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.
> run(ThreadLeakControl.java:365)
> > at java.lang.Thread.run(Thread.java:745)
> >
> >
> >
> >
> > Build Log:
> > [...truncated 8052 lines...]
> >[junit4] Suite: org.apache.lucene.index.IndexSortingTest
> >[junit4]   2> NOTE: test params are: codec=Asserting(Lucene53):
> {term_vectors=PostingsFormat(name=LuceneVarGapDocFreqInterval),
> id=PostingsFormat(name=MockRandom), positions=FSTOrd50,
> docs=PostingsFormat(name=MockRandom),
> norm=PostingsFormat(name=Memory doPackFST= true)},
> docValues:{sorted_set=DocValuesFormat(name=Memory),
> numeric=DocValuesFormat(name=Lucene50),
> binary=DocValuesFormat(name=Lucene50),
> sorted_numeric=DocValuesFormat(name=Asserting),
> sorted=DocValuesFormat(name=Lucene50)}, sim=DefaultSimilarity,

[jira] [Updated] (SOLR-7986) JDBC Driver for SQL Interface

2015-09-10 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-7986:
-
Description: 
This ticket is to create a JDBC Driver (thin client) for the new SQL interface 
(SOLR-7560). As part of this ticket, a driver will be added to the Solrj library 
under the package: *org.apache.solr.client.solrj.io.sql*

Initial implementation will include basic *Driver*, *Connection*, *Statement* 
and *ResultSet* implementations.

Future releases can build on this implementation to support a wide range of 
JDBC clients and tools.


 

  was:
This ticket is to create a JDBC Driver (thin client) for the new SQL interface 
(SOLR-7560). As part of this ticket a driver will be added to the Solrj libary 
under the package: *org.apache.solr.client.solrj.io.jdbc*

Initial implementation will include basic *Driver*, *Connection*, *Statement* 
and *ResultSet* implementations.

Future releases can build on this implementation to support a wide range of 
JDBC clients and tools.


 


> JDBC Driver for SQL Interface
> -
>
> Key: SOLR-7986
> URL: https://issues.apache.org/jira/browse/SOLR-7986
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java
>Affects Versions: Trunk
>Reporter: Joel Bernstein
> Attachments: SOLR-7986.patch
>
>
> This ticket is to create a JDBC Driver (thin client) for the new SQL 
> interface (SOLR-7560). As part of this ticket a driver will be added to the 
> Solrj libary under the package: *org.apache.solr.client.solrj.io.sql*
> Initial implementation will include basic *Driver*, *Connection*, *Statement* 
> and *ResultSet* implementations.
> Future releases can build on this implementation to support a wide range of 
> JDBC clients and tools.
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7986) JDBC Driver for SQL Interface

2015-09-10 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-7986:
-
Attachment: SOLR-7986.patch

Patch with initial classes and skeleton methods.

Very few of these methods will actually be implemented in this ticket.
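
For a sense of the intended usage, here is a minimal sketch of how a client might exercise 
the driver once it lands (the driver class name, the JDBC URL format and the collection/field 
names below are assumptions for illustration, not part of the patch):

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SolrJdbcSketch {
  public static void main(String[] args) throws Exception {
    // Hypothetical driver class and URL; the real ones are defined by the SOLR-7986 implementation.
    Class.forName("org.apache.solr.client.solrj.io.sql.DriverImpl");
    try (Connection con = DriverManager.getConnection("jdbc:solr://localhost:9983?collection=collection1");
         Statement stmt = con.createStatement();
         ResultSet rs = stmt.executeQuery("select fieldA, fieldB from collection1 limit 10")) {
      while (rs.next()) {
        System.out.println(rs.getString("fieldA") + " / " + rs.getString("fieldB"));
      }
    }
  }
}
{code}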

> JDBC Driver for SQL Interface
> -
>
> Key: SOLR-7986
> URL: https://issues.apache.org/jira/browse/SOLR-7986
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java
>Affects Versions: Trunk
>Reporter: Joel Bernstein
> Attachments: SOLR-7986.patch
>
>
> This ticket is to create a JDBC Driver (thin client) for the new SQL 
> interface (SOLR-7560). As part of this ticket a driver will be added to the 
> Solrj libary under the package: *org.apache.solr.client.solrj.io.jdbc*
> Initial implementation will include basic *Driver*, *Connection*, *Statement* 
> and *ResultSet* implementations.
> Future releases can build on this implementation to support a wide range of 
> JDBC clients and tools.
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8029) Modernize and standardize Solr APIs

2015-09-10 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-8029:
-
Labels: API EaseOfUse  (was: )

> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 3 types of requests in the new API 
> * {{/solr2//*}} : Operations on specific collections 
> * {{/solr2/_cluster/*}} : Cluster-wide operations which are not specific to 
> any collections. 
> * {{/solr2/_node/*}} : Operations on the node receiving the request. This is 
> the counter part of the core admin API
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8029) Modernize and standardize Solr APIs

2015-09-10 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14738831#comment-14738831
 ] 

Noble Paul edited comment on SOLR-8029 at 9/10/15 2:37 PM:
---

bq. if we use "/solr2" it might look like we are quickly fixing a major oops 
with a temporary URL path that will disappear in a future version.

Yes, we are doing a quick fix, because anything else will also look like a 
quick fix, and {{api}} becomes a reserved name which cannot conflict with a 
collection name. We should not make the new API look like a second-class 
citizen where I need to append an extra path to access it, like 
{{solr/api/_cluster}}.

Eventually, when we deprecate the legacy API, we should be able to get rid of 
the prefix altogether. 

At this point let us not discuss the "how" part. Let's define what an ideal 
solution should look like and fix that first. 


was (Author: noble.paul):
bq. if we use "/solr2" it might look like we are quickly fixing a major oops 
with a temporary URL path that will disappear in a future version.

Yes, we are doing a quick fix. Because anything else will also will look like a 
quick fix and {{api}} becomes a reserved name which cannot conflict with  a 
collection name. We should not make the new API look like  a second class 
citizen where I need to append an extra path to access that like 
{{solr/api/_cluster}}

Eventually , when we deprecate the legacy API, we should be able to get rid of 
the prefix altogether. 

> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 3 types of requests in the new API 
> * {{/solr2//*}} : Operations on specific collections 
> * {{/solr2/_cluster/*}} : Cluster-wide operations which are not specific to 
> any collections. 
> * {{/solr2/_node/*}} : Operations on the node receiving the request. This is 
> the counter part of the core admin API
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2015-09-10 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14738849#comment-14738849
 ] 

Noble Paul commented on SOLR-8029:
--

bq.The old API could be made into v1, pointing the root /solr to v2 by default, 
with option to configure it to v1 for people needing to support backward 
compatibility with absolutely no impact on their existing client applications.

Changing stuff abruptly will infuriate users. All the existing apps should work 
when they move to the new Solr. If we fail to do that, we will hamper adoption. We 
should give users a painless migration path. 



> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 3 types of requests in the new API 
> * {{/solr2//*}} : Operations on specific collections 
> * {{/solr2/_cluster/*}} : Cluster-wide operations which are not specific to 
> any collections. 
> * {{/solr2/_node/*}} : Operations on the node receiving the request. This is 
> the counter part of the core admin API
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2015-09-10 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14738882#comment-14738882
 ] 

Noble Paul commented on SOLR-8029:
--

not a bad idea. 

> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 3 types of requests in the new API 
> * {{/solr2//*}} : Operations on specific collections 
> * {{/solr2/_cluster/*}} : Cluster-wide operations which are not specific to 
> any collections. 
> * {{/solr2/_node/*}} : Operations on the node receiving the request. This is 
> the counter part of the core admin API
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8031) /bin/solr on Solaris and clones (two sub issues)

2015-09-10 Thread Uwe Reh (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Reh updated SOLR-8031:
--
Description: 
1.) The default implementation of 'ps' in Solaris can't handle "ps auxww".
Fortunately you can call "/usr/ucb/ps auxww" instead. Maybe one can add 
something like ...
> PS=ps
> if [ "$THIS_OS" == "SunOS" ]; then
>   PS=/usr/ucb/ps
> fi
and replace all "ps aux" with "$PS aux".

2.) Some implementations of 'sleep' support integers only. The function 
'spinner()' is using 0.5 as a parameter. A delay of one second does not look that 
nice, but would be more portable.

  was:
1.) The default implementation fo 'ps' in Solaris can't handle "ps auxww".
Fortunatly you can call "/use/ucb/bin/ps auxww" instead. Maybe one can add 
something like ...
> PS=ps
> if [ "${THIS_OS}" == "SunOS" ]; then
>   PS=/usr/ucb/ps
> fi
and replace all "ps aux" with "$PS aux" 

2.) Some implementations of 'sleep' support integers only. The function 
'spinner()' is using 0.5 as parameter. A delay of one second does not look that 
nice, but would be more portable.


> /bin/solr on Solaris and clones (two sub issues)
> 
>
> Key: SOLR-8031
> URL: https://issues.apache.org/jira/browse/SOLR-8031
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 5.2.1, 5.3
> Environment: Solaris 5.10, OmniOs
>Reporter: Uwe Reh
>Priority: Minor
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> 1.) The default implementation fo 'ps' in Solaris can't handle "ps auxww".
> Fortunatly you can call "/use/ucb/ps auxww" instead. Maybe one can add 
> something like ...
> > PS=ps
> > if [ "$THIS_OS" == "SunOS" ]; then
> >   PS=/usr/ucb/ps
> > fi
> and replace all "ps aux" with "$PS aux" 
> 2.) Some implementations of 'sleep' support integers only. The function 
> 'spinner()' is using 0.5 as parameter. A delay of one second does not look 
> that nice, but would be more portable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2015-09-10 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14738833#comment-14738833
 ] 

Noble Paul commented on SOLR-8029:
--

bq.I know that we might be starting from scratch with supporting a configurable 
path when we shed the webapp and become a standalone application, so that part 
of my thoughts might be moot.

Did we not already get rid of the concept that "Solr is a webapp"?

> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 3 types of requests in the new API 
> * {{/solr2//*}} : Operations on specific collections 
> * {{/solr2/_cluster/*}} : Cluster-wide operations which are not specific to 
> any collections. 
> * {{/solr2/_node/*}} : Operations on the node receiving the request. This is 
> the counter part of the core admin API
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2015-09-10 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14738852#comment-14738852
 ] 

Upayavira commented on SOLR-8029:
-

I don't think that's what Steve means.

http://HOST:8983/v1/blah redirects to http://HOST:8983/solr/blah
http://HOST:8983/v2/blah does new clever things
http://HOST:8983/solr/blah does what it ever did

Decent, versioned API, and backwards compatibility.

> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 3 types of requests in the new API 
> * {{/solr2//*}} : Operations on specific collections 
> * {{/solr2/_cluster/*}} : Cluster-wide operations which are not specific to 
> any collections. 
> * {{/solr2/_node/*}} : Operations on the node receiving the request. This is 
> the counter part of the core admin API
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2015-09-10 Thread Steve Molloy (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14738875#comment-14738875
 ] 

Steve Molloy commented on SOLR-8029:


Yes, make the current version:
/v1/{api}

Make the new version:
/v2/{api}

And have /solr point to a configurable version, probably /v1 by default at 
first:
/solr/collection/select => /v1/collection/select
/v1/collection/select => Same as current /solr/collection/select
/v2/collection/select => New API for collection operations.

This way, existing clients get the existing behaviour. Clients that wish to 
migrate progressively can use both /solr (pointing to /v1) and /v2 in new calls. 
Completely new clients can either use /v2 or configure Solr so /solr points to 
/v2 and use that, meaning:

/solr/collection/select => /v2/collection/select
/v1/collection/select => current API
/v2/collection/select => new API.

> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 3 types of requests in the new API 
> * {{/solr2//*}} : Operations on specific collections 
> * {{/solr2/_cluster/*}} : Cluster-wide operations which are not specific to 
> any collections. 
> * {{/solr2/_node/*}} : Operations on the node receiving the request. This is 
> the counter part of the core admin API
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2015-09-10 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14738960#comment-14738960
 ] 

Shawn Heisey commented on SOLR-8029:


I like my idea better, but /v2 would work.  I think users aren't going to like 
it, and I think the only way you can make it really work is to deploy at the 
root context.  The root context means it will be up to Solr to make sure that 
/solr, /v1, and /v2 are all functioning correctly, and I have concerns that we 
will have a release that's less than stable because of it.

I've voiced my concerns and elaborated at length on my own ideas, so I'm done 
for now.  Good luck with implementation, and I look forward to seeing it!

> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 3 types of requests in the new API 
> * {{/solr2//*}} : Operations on specific collections 
> * {{/solr2/_cluster/*}} : Cluster-wide operations which are not specific to 
> any collections. 
> * {{/solr2/_node/*}} : Operations on the node receiving the request. This is 
> the counter part of the core admin API
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8030) Transaction log does not store the update chain used for updates

2015-09-10 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14738795#comment-14738795
 ] 

Mark Miller commented on SOLR-8030:
---

But aren't the docs in the tlog stored post update chain anyway? 

> Transaction log does not store the update chain used for updates
> 
>
> Key: SOLR-8030
> URL: https://issues.apache.org/jira/browse/SOLR-8030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: ludovic Boutros
>
> Transaction Log does not store the update chain used during updates.
> Therefore tLog uses the default update chain during log replay.
> If we implement custom update logic with multiple update chains, the log 
> replay could break this logic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8030) Transaction log does not store the update chain used for updates

2015-09-10 Thread ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14738819#comment-14738819
 ] 

ludovic Boutros commented on SOLR-8030:
---

Seems to be here:

{code:title=TransactionLog.java|borderStyle=solid}
  public long writeDeleteByQuery(DeleteUpdateCommand cmd, int flags) {
    LogCodec codec = new LogCodec(resolver);
    try {
      checkWriteHeader(codec, null);

      MemOutputStream out = new MemOutputStream(new byte[20 + (cmd.query.length())]);
      codec.init(out);
      codec.writeTag(JavaBinCodec.ARR, 3);
      codec.writeInt(UpdateLog.DELETE_BY_QUERY | flags);  // should just take one byte
      codec.writeLong(cmd.getVersion());
      codec.writeStr(cmd.query);

      synchronized (this) {
        long pos = fos.size();   // if we had flushed, this should be equal to channel.position()
        out.writeAll(fos);
        endRecord(pos);
        // fos.flushBuffer();  // flush later
        return pos;
      }
    } catch (IOException e) {
      throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, e);
    }
  }
{code}
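
One way this could be addressed, purely as a sketch (the {{chainName}} variable below is 
hypothetical and would have to be resolved from the request that carried the command; this 
is not a committed fix):

{code}
// Sketch only: write the processor chain name as one extra entry in the record,
// so that log replay can run the same chain instead of the default one.
codec.writeTag(JavaBinCodec.ARR, 4);                 // one more slot than today
codec.writeInt(UpdateLog.DELETE_BY_QUERY | flags);
codec.writeLong(cmd.getVersion());
codec.writeStr(cmd.query);
codec.writeStr(chainName);                           // hypothetical: name of the update chain in use
{code}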

> Transaction log does not store the update chain used for updates
> 
>
> Key: SOLR-8030
> URL: https://issues.apache.org/jira/browse/SOLR-8030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: ludovic Boutros
>
> Transaction Log does not store the update chain used during updates.
> Therefore tLog uses the default update chain during log replay.
> If we implement custom update logic with multiple update chains, the log 
> replay could break this logic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2015-09-10 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14738818#comment-14738818
 ] 

Upayavira commented on SOLR-8029:
-

If we are going to go this way, and it will require a lot of consensus for us 
to do so, we should not be thinking about implementation issues right now.

I'd ask why Solr2? There never was a solr2. What might make more sense would be 
http://$host:8983/v2/blah as that would allow us to do future iterations on the 
API should we decide to (or even http://$host:8983/solr/v2/blah)


> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 3 types of requests in the new API 
> * {{/solr2//*}} : Operations on specific collections 
> * {{/solr2/_cluster/*}} : Cluster-wide operations which are not specific to 
> any collections. 
> * {{/solr2/_node/*}} : Operations on the node receiving the request. This is 
> the counter part of the core admin API
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2015-09-10 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14738862#comment-14738862
 ] 

Noble Paul commented on SOLR-8029:
--

So what you are saying is: instead of the prefix {{/solr2}}, use the {{/v2}} 
prefix. 

> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 3 types of requests in the new API 
> * {{/solr2//*}} : Operations on specific collections 
> * {{/solr2/_cluster/*}} : Cluster-wide operations which are not specific to 
> any collections. 
> * {{/solr2/_node/*}} : Operations on the node receiving the request. This is 
> the counter part of the core admin API
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2015-09-10 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14738867#comment-14738867
 ] 

Noble Paul commented on SOLR-8029:
--

bq.Agreed, this is why I propose to have the current API URL point to the URL 
with /v1 or /v2 in it. 

I need more clarity.

I make the following assumptions in the new design:
* Every path that exists today should work exactly the same in 6.0
* Using the new API should not require an extra-long URI which would make it look like 
a second-class citizen 

> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 3 types of requests in the new API 
> * {{/solr2//*}} : Operations on specific collections 
> * {{/solr2/_cluster/*}} : Cluster-wide operations which are not specific to 
> any collections. 
> * {{/solr2/_node/*}} : Operations on the node receiving the request. This is 
> the counter part of the core admin API
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8030) Transaction log does not store the update chain used for updates

2015-09-10 Thread ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14738802#comment-14738802
 ] 

ludovic Boutros commented on SOLR-8030:
---

Not for delete by query for instance, it seems.

> Transaction log does not store the update chain used for updates
> 
>
> Key: SOLR-8030
> URL: https://issues.apache.org/jira/browse/SOLR-8030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: ludovic Boutros
>
> Transaction Log does not store the update chain used during updates.
> Therefore tLog uses the default update chain during log replay.
> If we implement custom update logic with multiple update chains, the log 
> replay could break this logic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2015-09-10 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14738807#comment-14738807
 ] 

Shawn Heisey commented on SOLR-8029:


The path "/solr2" rubs me the wrong way.  It implies to a user that we didn't 
think it through originally, had to change it, and just tacked on a number.  
We'll be stuck with it forever to avoid future compatibility problems.  Users 
may start to wonder when version 3 will come out and force them to change all 
their software *again*.  Using something like /solr/api, with all the old paths 
working until explicitly disabled in the config or the next major version comes 
out, will look like a well-planned and permanent change to users.  If we use 
"/solr2" it might look like we are quickly fixing a major oops with a temporary 
URL path that will disappear in a future version.

I don't like the idea of deploying the context at the root, but it's not a BAD 
solution if we do it right.  If we do that, the URL path should remain 
configurable, so a user can use /fahrbot if they want to.  One problem with 
this is that suddenly it becomes Solr's responsibility to make sure that path 
works correctly throughout the application.  Jetty has had a very long time to 
work out any bugs with custom context paths ... we would be starting from 
scratch.

I know that we might be starting from scratch with supporting a configurable 
path when we shed the webapp and become a standalone application, so that part 
of my thoughts might be moot.


> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 6.0
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 3 types of requests in the new API 
> * {{/solr2//*}} : Operations on specific collections 
> * {{/solr2/_cluster/*}} : Cluster-wide operations which are not specific to 
> any collections. 
> * {{/solr2/_node/*}} : Operations on the node receiving the request. This is 
> the counter part of the core admin API
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8030) Transaction log does not store the update chain used for updates

2015-09-10 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14738813#comment-14738813
 ] 

Ishan Chattopadhyaya commented on SOLR-8030:


Interesting.. Is there a test / steps to reproduce?

> Transaction log does not store the update chain used for updates
> 
>
> Key: SOLR-8030
> URL: https://issues.apache.org/jira/browse/SOLR-8030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: ludovic Boutros
>
> Transaction Log does not store the update chain used during updates.
> Therefore tLog uses the default update chain during log replay.
> If we implement custom update logic with multiple update chains, the log 
> replay could break this logic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6590) Explore different ways to apply boosts

2015-09-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14738817#comment-14738817
 ] 

ASF subversion and git services commented on LUCENE-6590:
-

Commit 1702263 from [~jpountz] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1702263 ]

LUCENE-6590: Make sure the fast-vector-highlighter also handles boosts set via 
Query.setBoost.

> Explore different ways to apply boosts
> --
>
> Key: LUCENE-6590
> URL: https://issues.apache.org/jira/browse/LUCENE-6590
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: 5.4
>
> Attachments: LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch, 
> LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch
>
>
> Follow-up from LUCENE-6570: the fact that all queries are mutable in order to 
> allow for applying a boost raises issues since it makes queries bad cache 
> keys since their hashcode can change anytime. We could just document that 
> queries should never be modified after they have gone through IndexSearcher 
> but it would be even better if the API made queries impossible to mutate at 
> all.
> I think there are two main options:
>  - either replace "void setBoost(boost)" with something like "Query 
> withBoost(boost)" which would return a clone that has a different boost
>  - or move boost handling outside of Query, for instance we could have a 
> (immutable) query impl that would be dedicated to applying boosts, that 
> queries that need to change boosts at rewrite time (such as BooleanQuery) 
> would use as a wrapper.
> The latter idea is from Robert and I like it a lot given how often I either 
> introduced or found a bug which was due to the boost parameter being ignored. 
> Maybe there are other options, but I think this is worth exploring.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2015-09-10 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14738831#comment-14738831
 ] 

Noble Paul commented on SOLR-8029:
--

bq. if we use "/solr2" it might look like we are quickly fixing a major oops 
with a temporary URL path that will disappear in a future version.

Yes, we are doing a quick fix, because anything else will also look like a
quick fix, and {{api}} would become a reserved name that must not conflict with
a collection name. We should not make the new API look like a second-class
citizen where I need to append an extra path segment to access it, like
{{solr/api/_cluster}}.

Eventually, when we deprecate the legacy API, we should be able to get rid of
the prefix altogether.

> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 3 types of requests in the new API 
> * {{/solr2//*}} : Operations on specific collections 
> * {{/solr2/_cluster/*}} : Cluster-wide operations which are not specific to 
> any collections. 
> * {{/solr2/_node/*}} : Operations on the node receiving the request. This is 
> the counter part of the core admin API
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2015-09-10 Thread Steve Molloy (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14738840#comment-14738840
 ] 

Steve Molloy commented on SOLR-8029:


Having the version in the URL is pretty common and makes sense to me. The old
API could be made into v1, pointing the root /solr to v2 by default, with an
option to configure it to v1 for people who need backward compatibility with
absolutely no impact on their existing client applications.
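
To make the versioned-prefix idea concrete, a purely hypothetical illustration
(the collection name and exact endpoints are examples, not a finalized design):

{code}
/solr/gettingstarted/select?q=*:*   -> legacy API, served as the configurable default (v1)
/v2/gettingstarted/query            -> the same search addressed through the new API prefix
{code}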

> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 3 types of requests in the new API 
> * {{/solr2//*}} : Operations on specific collections 
> * {{/solr2/_cluster/*}} : Cluster-wide operations which are not specific to 
> any collections. 
> * {{/solr2/_node/*}} : Operations on the node receiving the request. This is 
> the counter part of the core admin API
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2015-09-10 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14738893#comment-14738893
 ] 

Shawn Heisey commented on SOLR-8029:


I had not considered the idea of a conflict with a collection or core named 
api.  That is a potential problem.

Forgetting about implementation for the discussion is a good idea, but I think 
the URL path is important even if we don't consider how we're going to do it.  
I think that stepping away from the default /solr prefix, especially if that is 
a temporary change that we then change back in the next major version's example,
is going to really irritate users.  I believe that if we are going to change the URL
path, it should remain under /solr (or whatever the user chose for the 
context), and be a permanent move.

I wonder if you could have the implementation work in such a way that 
/solr/api/select (and friends) would still work for a collection named api, and 
/solr/api/api/select (or however we arrange the lower bits of a new structure) 
would ALSO work.  We could also declare (and document) that if the new APIs are 
enabled, a core named api will no longer be accessible.

/solr/v2 is another idea, but I do not want anyone to get tied to a specific 
version in the base URL, and there is still the possibility that a user has a 
core with a conflicting name.

bq. Did we not already get rid of the concept that "solr is a webapp"

We got rid of the concept from the user perspective, but it is still a crucial 
detail of our implementation.  We have talked about changing that, but it is 
our reality for the moment, and once we get to the implementation, it will have 
to be factored in.

I don't want anyone to think that any of my ideas or criticisms are an 
indication of an automatic -1 vote.  I think the general idea here is VERY 
good, but that the proposed plan could be improved.  If everyone disagrees with 
me, then I will adapt ... and try not to be mean if my concerns become real.

> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 3 types of requests in the new API 
> * {{/solr2//*}} : Operations on specific collections 
> * {{/solr2/_cluster/*}} : Cluster-wide operations which are not specific to 
> any collections. 
> * {{/solr2/_node/*}} : Operations on the node receiving the request. This is 
> the counter part of the core admin API
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6785) Consider merging Query.rewrite() into Query.createWeight()

2015-09-10 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14738913#comment-14738913
 ] 

Alan Woodward commented on LUCENE-6785:
---

I'm travelling at the moment, will put up a larger patch changing all the 
modules + solr when I get back (including Terry's fix, thank you!).  I still 
have some tests failing around highlighting multiterm queries.

The bits keeping the QueryCache happy are a bit hacky, but I think it's worth 
the pain of that to make the API nicer.  Maybe in another issue we could look 
at using the Weights themselves as cache keys, rather than their parent queries?

bq. dropping weights could be problematic since they can be expensive to create 
due to statistics collection

One thought I had was that term statistics could be collected and cached by an 
object that's passed to createWeight().  That way we only collect stats for 
each term once per top-level query.  This would also be a nicer solution than 
the searcher term cache I proposed in LUCENE-6561.
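
To make that thought concrete, here is a rough sketch of what such a
per-top-level-query statistics cache might look like (the class name and wiring
are hypothetical and not part of any attached patch; it only relies on the
existing TermContext / IndexSearcher.termStatistics APIs):

{code}
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermContext;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TermStatistics;

// Hypothetical helper that could be passed down through createWeight():
// each term's statistics are collected at most once per top-level query.
class TermStatsCache {
  private final IndexSearcher searcher;
  private final Map<Term, TermStatistics> cache = new HashMap<>();

  TermStatsCache(IndexSearcher searcher) {
    this.searcher = searcher;
  }

  TermStatistics get(Term term) throws IOException {
    TermStatistics stats = cache.get(term);
    if (stats == null) {
      TermContext context = TermContext.build(searcher.getTopReaderContext(), term);
      stats = searcher.termStatistics(term, context);
      cache.put(term, stats);
    }
    return stats;
  }
}
{code}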

> Consider merging Query.rewrite() into Query.createWeight()
> --
>
> Key: LUCENE-6785
> URL: https://issues.apache.org/jira/browse/LUCENE-6785
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
> Attachments: LUCENE-6785.patch
>
>
> Prompted by the discussion on LUCENE-6590.
> Query.rewrite() is a bit of an oddity.  You call it to create a query for a 
> specific IndexSearcher, and to ensure that you get a query implementation 
> that has a working createWeight() method.  However, Weight itself already 
> encapsulates the notion of a per-searcher query.
> You also need to repeatedly call rewrite() until the query has stopped 
> rewriting itself, which is a bit trappy - there are a few places (in 
> highlighting code for example) that just call rewrite() once, rather than 
> looping round as IndexSearcher.rewrite() does.  Most queries don't need to be 
> called multiple times, however, so this seems a bit redundant.  And the ones 
> that do currently return un-rewritten queries can be changed simply enough to 
> rewrite them.
> Finally, in pretty much every case I can find in the codebase, rewrite() is 
> called purely as a prelude to createWeight().  This means, in the case of for 
> example large BooleanQueries, we end up cloning the whole query structure, 
> only to throw it away immediately.
> I'd like to try removing rewrite() entirely, and merging the logic into 
> createWeight(), simplifying the API and removing the trap where code only 
> calls rewrite once.  What do people think?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8031) /bin/solr on Solaris and clones (two sub issues)

2015-09-10 Thread Uwe Reh (JIRA)
Uwe Reh created SOLR-8031:
-

 Summary: /bin/solr on Solaris and clones (two sub issues)
 Key: SOLR-8031
 URL: https://issues.apache.org/jira/browse/SOLR-8031
 Project: Solr
  Issue Type: Improvement
  Components: scripts and tools
Affects Versions: 5.3, 5.2.1
 Environment: Solaris 5.10, OmniOs
Reporter: Uwe Reh
Priority: Minor


1.) The default implementation of 'ps' in Solaris can't handle "ps auxww".
Fortunately you can call "/usr/ucb/ps auxww" instead. Maybe one can add
something like ...
> # on Solaris, use the BSD-compatible ps, which understands "ps auxww"
> PS=ps
> if [ "${THIS_OS}" == "SunOS" ]; then
>   PS=/usr/ucb/ps
> fi
and replace all occurrences of "ps aux" with "$PS aux".

2.) Some implementations of 'sleep' support integers only, but the function
'spinner()' uses 0.5 as its parameter. A delay of one second would not look as
nice, but it would be more portable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2015-09-10 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14738907#comment-14738907
 ] 

Noble Paul commented on SOLR-8029:
--

bq.solr/v2 is another idea, but I do not want anyone to get tied to a specific 
version in the base URL, and there is still the possibility that a user has a 
core with a conflicting name.

[~smolloy] suggests that we have only a {{/v2}} or {{/v1}} prefix instead of the
{{/solr}} prefix. However, the {{/solr}} prefix would keep working as if it were
equivalent to {{/v1}}. Having a {{/v1}} or {{/v2}} prefix is extremely common
among API designers now. I give a +1 to [~smolloy]'s suggestion. We don't need
to remind the user that they are using Solr in every API call.

> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 3 types of requests in the new API 
> * {{/solr2//*}} : Operations on specific collections 
> * {{/solr2/_cluster/*}} : Cluster-wide operations which are not specific to 
> any collections. 
> * {{/solr2/_node/*}} : Operations on the node receiving the request. This is 
> the counter part of the core admin API
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8030) Transaction log does not store the update chain used for updates

2015-09-10 Thread ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14738859#comment-14738859
 ] 

ludovic Boutros commented on SOLR-8030:
---

[~ichattopadhyaya],

you could create an update processor which forbids deleteByQuery updates,
then put it in the default update chain.
You can create another update chain without this processor.
Add some documents and delete them with queries, using the update chain that
allows this operation.
Next, play with the famous Monkey ;)

Perhaps there are easier ways to reproduce?

I can try to reproduce this, I like the Monkey :p.
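
For anyone who wants to try it, here is a minimal sketch of such a processor
(the class name is hypothetical, and the two update chains would be declared in
solrconfig.xml as usual):

{code}
import java.io.IOException;

import org.apache.solr.common.SolrException;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.response.SolrQueryResponse;
import org.apache.solr.update.DeleteUpdateCommand;
import org.apache.solr.update.processor.UpdateRequestProcessor;
import org.apache.solr.update.processor.UpdateRequestProcessorFactory;

// Hypothetical factory for the default chain: it rejects deleteByQuery but lets
// deleteById (and everything else) pass through to the next processor.
public class ForbidDeleteByQueryProcessorFactory extends UpdateRequestProcessorFactory {

  @Override
  public UpdateRequestProcessor getInstance(SolrQueryRequest req, SolrQueryResponse rsp,
                                            UpdateRequestProcessor next) {
    return new UpdateRequestProcessor(next) {
      @Override
      public void processDelete(DeleteUpdateCommand cmd) throws IOException {
        if (!cmd.isDeleteById()) {
          throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
              "deleteByQuery is forbidden in this update chain");
        }
        super.processDelete(cmd);
      }
    };
  }
}
{code}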



> Transaction log does not store the update chain used for updates
> 
>
> Key: SOLR-8030
> URL: https://issues.apache.org/jira/browse/SOLR-8030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: ludovic Boutros
>
> Transaction Log does not store the update chain used during updates.
> Therefore tLog uses the default update chain during log replay.
> If we implement custom update logic with multiple update chains, the log 
> replay could break this logic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2015-09-10 Thread Steve Molloy (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14738857#comment-14738857
 ] 

Steve Molloy commented on SOLR-8029:


bq. Changing stuff abruptly will infuriate users. All the existing apps should 
work when they move to new Solr. If we fail to do that we will hamper adoption. 
We should give users a painless migration path.

Agreed, this is why I propose to have the current API URL point to the URL with
/v1 or /v2 in it. Making the choice of default version configurable would allow
people to keep using the API they want, exactly as they used it in the previous
version, then start migrating slowly, at their own pace, to the new version by
using the /v2 URL in client code that uses the new API. Once everything is
updated, they could change the configured default version and not have to change
their client code. With this approach, the same would apply if in some years we
decide to have a v3 API for whatever reason.

> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 3 types of requests in the new API 
> * {{/solr2//*}} : Operations on specific collections 
> * {{/solr2/_cluster/*}} : Cluster-wide operations which are not specific to 
> any collections. 
> * {{/solr2/_node/*}} : Operations on the node receiving the request. This is 
> the counter part of the core admin API
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2015-09-10 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14738826#comment-14738826
 ] 

Ishan Chattopadhyaya commented on SOLR-8029:


If we are going to have this released for 6.0, can't we use the /solr context
for the new API, and something like /solr-old (or similar) for the legacy API,
for backcompat reasons?

> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 3 types of requests in the new API 
> * {{/solr2//*}} : Operations on specific collections 
> * {{/solr2/_cluster/*}} : Cluster-wide operations which are not specific to 
> any collections. 
> * {{/solr2/_node/*}} : Operations on the node receiving the request. This is 
> the counter part of the core admin API
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8029) Modernize and standardize Solr APIs

2015-09-10 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14738862#comment-14738862
 ] 

Noble Paul edited comment on SOLR-8029 at 9/10/15 2:50 PM:
---

[~upayavira] So what you are saying is: instead of the prefix {{/solr2}}, use
the {{/v2}} prefix.


was (Author: noble.paul):
So what you are saying is: instead of the prefix {{/solr2}}, use the {{/v2}}
prefix.

> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 3 types of requests in the new API 
> * {{/solr2//*}} : Operations on specific collections 
> * {{/solr2/_cluster/*}} : Cluster-wide operations which are not specific to 
> any collections. 
> * {{/solr2/_node/*}} : Operations on the node receiving the request. This is 
> the counter part of the core admin API
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7986) JDBC Driver for SQL Interface

2015-09-10 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-7986:
-
Attachment: SOLR-7986.patch

New patch that registers the driver properly.

> JDBC Driver for SQL Interface
> -
>
> Key: SOLR-7986
> URL: https://issues.apache.org/jira/browse/SOLR-7986
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java
>Affects Versions: Trunk
>Reporter: Joel Bernstein
> Attachments: SOLR-7986.patch, SOLR-7986.patch, SOLR-7986.patch, 
> SOLR-7986.patch
>
>
> This ticket is to create a JDBC Driver (thin client) for the new SQL 
> interface (SOLR-7560). As part of this ticket a driver will be added to the 
> Solrj libary under the package: *org.apache.solr.client.solrj.io.sql*
> Initial implementation will include basic *Driver*, *Connection*, *Statement* 
> and *ResultSet* implementations.
> Future releases can build on this implementation to support a wide range of 
> JDBC clients and tools.
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7986) JDBC Driver for SQL Interface

2015-09-10 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-7986:
-
Description: 
This ticket is to create a JDBC Driver (thin client) for the new SQL interface 
(SOLR-7560). As part of this ticket a driver will be added to the Solrj libary 
under the package: *org.apache.solr.client.solrj.io.sql*

Initial implementation will include basic *Driver*, *Connection*, *Statement* 
and *ResultSet* implementations.

Future releases can build on this implementation to support a wide range of 
JDBC clients and tools.

*Syntax using parallel Map/Reduce for aggregations*:
{code}
Properties props = new Properties();
props.put("aggregatioMode", "map_reduce");
props.put("numWorkers", "10");
Class.forName("org.apache.solr.client.solrj.io.sql.DriverImpl").newInstance();
Connection con = 
DriverManager.getConnection("jdbc:solr:?collection=", 
props);
Statement stmt = con.createStatement();
ResultSet rs = stmt.executeQuery("select a, sum(b) from tablex group by a 
having sum(b) > 100");
while(rs.next()) {
  String a = rs.getString("a");
  double sumB = rs.getDouble("sum(b)");
}
{code} 

*Syntax using JSON facet API for aggregations*:

{code}
Properties props = new Properties();
props.put("aggregationMode", "facet");
Class.forName("org.apache.solr.client.solrj.io.sql.DriverImpl").newInstance();
Connection con = 
DriverManager.getConnection("jdbc:solr:?collection=", 
props);
Statement stmt = con.createStatement();
ResultSet rs = stmt.executeQuery("select a, sum(b) from tablex group by a 
having sum(b) > 100");
while(rs.next()) {
  String a = rs.getString("a");
  double sumB = rs.getDouble("sum(b)");
}
{code}


 

  was:
This ticket is to create a JDBC Driver (thin client) for the new SQL interface 
(SOLR-7560). As part of this ticket a driver will be added to the Solrj libary 
under the package: *org.apache.solr.client.solrj.io.sql*

Initial implementation will include basic *Driver*, *Connection*, *Statement* 
and *ResultSet* implementations.

Future releases can build on this implementation to support a wide range of 
JDBC clients and tools.

*Syntax using parallel Map/Reduce for aggregations*:
{code}
Properties props = new Properties();
props.put("aggregatioMode", "map_reduce");
props.put("numWorkers", "10");
Class.forName("org.apache.solr.client.solrj.io.sql.DriverImpl").newInstance();
Connection con = 
DriverManager.getConnection("jdbc:solr:?collection=", 
props);
Statement stmt = con.createStatement();
ResultSet rs = stmt.executeQuery("select ");
while(rs.next()) {

}
{code} 

*Syntax using JSON facet API for aggregations*:

{code}
Properties props = new Properties();
props.put("aggregationMode", "facet");
Class.forName("org.apache.solr.client.solrj.io.sql.DriverImpl").newInstance();
Connection con = 
DriverManager.getConnection("jdbc:solr:?collection=", 
props);
Statement stmt = con.createStatement();
ResultSet rs = stmt.executeQuery("select ");
while(rs.next()) {

}

{code}


 


> JDBC Driver for SQL Interface
> -
>
> Key: SOLR-7986
> URL: https://issues.apache.org/jira/browse/SOLR-7986
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java
>Affects Versions: Trunk
>Reporter: Joel Bernstein
> Attachments: SOLR-7986.patch, SOLR-7986.patch, SOLR-7986.patch, 
> SOLR-7986.patch
>
>
> This ticket is to create a JDBC Driver (thin client) for the new SQL 
> interface (SOLR-7560). As part of this ticket a driver will be added to the 
> Solrj libary under the package: *org.apache.solr.client.solrj.io.sql*
> Initial implementation will include basic *Driver*, *Connection*, *Statement* 
> and *ResultSet* implementations.
> Future releases can build on this implementation to support a wide range of 
> JDBC clients and tools.
> *Syntax using parallel Map/Reduce for aggregations*:
> {code}
> Properties props = new Properties();
> props.put("aggregatioMode", "map_reduce");
> props.put("numWorkers", "10");
> Class.forName("org.apache.solr.client.solrj.io.sql.DriverImpl").newInstance();
> Connection con = 
> DriverManager.getConnection("jdbc:solr:?collection=",
>  props);
> Statement stmt = con.createStatement();
> ResultSet rs = stmt.executeQuery("select a, sum(b) from tablex group by a 
> having sum(b) > 100");
> while(rs.next()) {
> String a = rs.getString("a");
> double sumB = rs.getString("sum(b)");
> }
> {code} 
> *Syntax using JSON facet API for aggregations*:
> {code}
> Properties props = new Properties();
> props.put("aggregationMode", "facet");
> Class.forName("org.apache.solr.client.solrj.io.sql.DriverImpl").newInstance();
> Connection con = 
> DriverManager.getConnection("jdbc:solr:?collection=",
>  props);
> Statement stmt = con.createStatement();
> ResultSet rs = stmt.executeQuery("select a, sum(b) from tablex group by a 
> having sum(b) > 

[jira] [Updated] (LUCENE-6796) Some terms incorrectly highlighted in complex SpanQuery

2015-09-10 Thread Tim Allison (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Allison updated LUCENE-6796:

Attachment: LUCENE-6796-testcase.patch

Test case showing the issue.

> Some terms incorrectly highlighted in complex SpanQuery
> ---
>
> Key: LUCENE-6796
> URL: https://issues.apache.org/jira/browse/LUCENE-6796
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Affects Versions: 5.3
>Reporter: Tim Allison
>Priority: Trivial
> Attachments: LUCENE-6796-testcase.patch
>
>
> [~modassar] initially raised this on LUCENE-5205.  I'm opening this as a 
> separate issue.
> If a SpanNear is within a SpanOr, it looks like the child terms within the 
> SpanNear query are getting highlighted even if there is no match on that 
> SpanNear query in some cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6780) GeoPointDistanceQuery doesn't work with a large radius?

2015-09-10 Thread Nicholas Knize (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Knize updated LUCENE-6780:
---
Attachment: LUCENE-6780.patch

There's a small change to GeoPointTermsEnum in LUCENE-6777 that this depends 
upon. 

Apply LUCENE-6777.patch then LUCENE-6780.patch

This patch also fixes an issue noted in LUCENE-6698 (a side effect of
simplifying the cellCrossesCircle logic).


> GeoPointDistanceQuery doesn't work with a large radius?
> ---
>
> Key: LUCENE-6780
> URL: https://issues.apache.org/jira/browse/LUCENE-6780
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
> Attachments: LUCENE-6780.patch, LUCENE-6780.patch
>
>
> I'm working on LUCENE-6698 but struggling with test failures ...
> Then I noticed that TestGeoPointQuery's test never tests on large distances, 
> so I modified the test to sometimes do so (like TestBKDTree) and hit test 
> failures.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7986) JDBC Driver for SQL Interface

2015-09-10 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14739192#comment-14739192
 ] 

Uwe Schindler commented on SOLR-7986:
-

See http://docs.oracle.com/javase/7/docs/api/java/sql/DriverManager.html 
introduction.

> JDBC Driver for SQL Interface
> -
>
> Key: SOLR-7986
> URL: https://issues.apache.org/jira/browse/SOLR-7986
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java
>Affects Versions: Trunk
>Reporter: Joel Bernstein
> Attachments: SOLR-7986.patch, SOLR-7986.patch, SOLR-7986.patch, 
> SOLR-7986.patch
>
>
> This ticket is to create a JDBC Driver (thin client) for the new SQL 
> interface (SOLR-7560). As part of this ticket a driver will be added to the 
> Solrj libary under the package: *org.apache.solr.client.solrj.io.sql*
> Initial implementation will include basic *Driver*, *Connection*, *Statement* 
> and *ResultSet* implementations.
> Future releases can build on this implementation to support a wide range of 
> JDBC clients and tools.
> *Syntax using parallel Map/Reduce for aggregations*:
> {code}
> Properties props = new Properties();
> props.put("aggregatioMode", "map_reduce");
> props.put("numWorkers", "10");
> Class.forName("org.apache.solr.client.solrj.io.sql.DriverImpl").newInstance();
> Connection con = 
> DriverManager.getConnection("jdbc:solr:?collection=",
>  props);
> Statement stmt = con.createStatement();
> ResultSet rs = stmt.executeQuery("select ");
> while(rs.next()) {
> }
> {code} 
> *Syntax using JSON facet API for aggregations*:
> {code}
> Properties props = new Properties();
> props.put("aggregationMode", "facet");
> Class.forName("org.apache.solr.client.solrj.io.sql.DriverImpl").newInstance();
> Connection con = 
> DriverManager.getConnection("jdbc:solr:?collection=",
>  props);
> Statement stmt = con.createStatement();
> ResultSet rs = stmt.executeQuery("select ");
> while(rs.next()) {
> }
> {code}
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7833) Add new Solr book 'Solr Cookbook - Third Edition' to selection of Solr books and news.

2015-09-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-7833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rafał Kuć updated SOLR-7833:

Attachment: SOLR-7833_version_2.patch

Attaching a second version of the patch with a shortened link.

> Add new Solr book 'Solr Cookbook - Third Edition' to selection of Solr books 
> and news.
> --
>
> Key: SOLR-7833
> URL: https://issues.apache.org/jira/browse/SOLR-7833
> Project: Solr
>  Issue Type: Task
>Reporter: Zico Fernandes
> Attachments: SOLR-7833.patch, SOLR-7833_version_2.patch, Solr 
> Cookbook_Third Edition.jpg, book_solr_cookbook_3ed.jpg
>
>
> Rafał Kuć is proud to finally announce the book Solr Cookbook - Third Edition 
> by Packt Publishing. This edition will specifically appeal to developers who 
> wish to quickly get to grips with the changes and new features of Apache Solr 
> 5. 
> Solr Cookbook - Third Edition has over 100 easy to follow recipes to solve 
> real-time problems related to Apache Solr 4.x and 5.0 effectively. Starting 
> with vital information on setting up Solr, the developer will quickly 
> progress to analyzing their text data through querying and performance 
> improvement. Finally, they will explore real-life situations, where Solr can 
> be used to simplify daily collection handling.
> With numerous practical chapters centered on important Solr techniques and 
> methods Solr Cookbook - Third Edition will guide intermediate Solr Developers 
> who are willing to learn and implement Pro-level practices, techniques, and 
> solutions.
> Click here to read more about the Solr Cookbook - Third Edition: 
> http://bit.ly/1Q2AGS8



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2015-09-10 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14739065#comment-14739065
 ] 

Upayavira commented on SOLR-8029:
-

I note that there's a lot more to this proposal than just the URL - we've missed
a heap of content in your proposal document. Can we break it out into JIRAs so
we can explore each part?

e.g. you suggest that a GET to /solr2//query would execute a search. I'd
suggest that the 'query' part is unneeded. The point is that, from a REST point
of view, the  is the resource we are interacting with, not a 'query'.
I'd love to see a venue for discussing these details in, well, detail.

> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 3 types of requests in the new API 
> * {{/solr2//*}} : Operations on specific collections 
> * {{/solr2/_cluster/*}} : Cluster-wide operations which are not specific to 
> any collections. 
> * {{/solr2/_node/*}} : Operations on the node receiving the request. This is 
> the counter part of the core admin API
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7986) JDBC Driver for SQL Interface

2015-09-10 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-7986:
-
Attachment: SOLR-7986.patch

New patch handles client caching properly.

> JDBC Driver for SQL Interface
> -
>
> Key: SOLR-7986
> URL: https://issues.apache.org/jira/browse/SOLR-7986
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java
>Affects Versions: Trunk
>Reporter: Joel Bernstein
> Attachments: SOLR-7986.patch, SOLR-7986.patch, SOLR-7986.patch
>
>
> This ticket is to create a JDBC Driver (thin client) for the new SQL 
> interface (SOLR-7560). As part of this ticket a driver will be added to the 
> Solrj libary under the package: *org.apache.solr.client.solrj.io.sql*
> Initial implementation will include basic *Driver*, *Connection*, *Statement* 
> and *ResultSet* implementations.
> Future releases can build on this implementation to support a wide range of 
> JDBC clients and tools.
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-5.3 - Build # 17 - Still Failing

2015-09-10 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.3/17/

No tests ran.

Build Log:
[...truncated 52788 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/lucene/build/smokeTestRelease/dist
 [copy] Copying 461 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.7 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.7
   [smoker] Java 1.8 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.1 MB in 0.01 sec (13.0 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-5.3.1-src.tgz...
   [smoker] 28.5 MB in 0.04 sec (802.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.3.1.tgz...
   [smoker] 65.6 MB in 0.09 sec (742.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.3.1.zip...
   [smoker] 75.9 MB in 0.10 sec (745.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-5.3.1.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 6059 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] test demo with 1.8...
   [smoker]   got 6059 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.3.1.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 6059 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] test demo with 1.8...
   [smoker]   got 6059 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.3.1-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 7 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.7...
   [smoker]   got 213 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] generate javadocs w/ Java 7...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 213 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] Releases that don't seem to be tested:
   [smoker]   5.3.0
   [smoker] Traceback (most recent call last):
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/dev-tools/scripts/smokeTestRelease.py",
 line 1449, in 
   [smoker] main()
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/dev-tools/scripts/smokeTestRelease.py",
 line 1394, in main
   [smoker] smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, 
c.is_signed, ' '.join(c.test_args))
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/dev-tools/scripts/smokeTestRelease.py",
 line 1432, in smokeTest
   [smoker] unpackAndVerify(java, 'lucene', tmpDir, 'lucene-%s-src.tgz' % 
version, svnRevision, version, testArgs, baseURL)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/dev-tools/scripts/smokeTestRelease.py",
 line 583, in unpackAndVerify
   [smoker] verifyUnpacked(java, project, artifact, unpackPath, 
svnRevision, version, testArgs, tmpDir, baseURL)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/dev-tools/scripts/smokeTestRelease.py",
 line 762, in verifyUnpacked
   [smoker] confirmAllReleasesAreTestedForBackCompat(unpackPath)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/dev-tools/scripts/smokeTestRelease.py",
 line 1387, in confirmAllReleasesAreTestedForBackCompat
   [smoker] raise RuntimeError('some releases are not tested by 
TestBackwardsCompatibility?')
   [smoker] RuntimeError: some releases are not tested by 

[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2015-09-10 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14739160#comment-14739160
 ] 

Noble Paul commented on SOLR-8029:
--

bq. we've missed a heap of content in your proposal document. Can we break it 
out into JIRAs so we can explore each part?

I'll eventually create more sub-tasks for implementing specific things. But if I
do it now, you would not get the complete picture that the doc provides.

bq. I'd suggest that the 'query' is unneeded.

I beg to differ. It will be rather awkward to make a request like
{{/v2/gettingstarted?q=*:*}}. The {{}} means a lot of things, not just the
contents of the index.

> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 3 types of requests in the new API 
> * {{/solr2//*}} : Operations on specific collections 
> * {{/solr2/_cluster/*}} : Cluster-wide operations which are not specific to 
> any collections. 
> * {{/solr2/_node/*}} : Operations on the node receiving the request. This is 
> the counter part of the core admin API
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5205) SpanQueryParser with recursion, analysis and syntax very similar to classic QueryParser

2015-09-10 Thread Tim Allison (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14739159#comment-14739159
 ] 

Tim Allison commented on LUCENE-5205:
-

I just opened LUCENE-6796 for this.  Thank you for raising it!

> SpanQueryParser with recursion, analysis and syntax very similar to classic 
> QueryParser
> ---
>
> Key: LUCENE-5205
> URL: https://issues.apache.org/jira/browse/LUCENE-5205
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/queryparser
>Reporter: Tim Allison
>  Labels: patch
> Attachments: LUCENE-5205-cleanup-tests.patch, 
> LUCENE-5205-date-pkg-prvt.patch, LUCENE-5205.patch.gz, LUCENE-5205.patch.gz, 
> LUCENE-5205_dateTestReInitPkgPrvt.patch, 
> LUCENE-5205_improve_stop_word_handling.patch, 
> LUCENE-5205_smallTestMods.patch, LUCENE_5205.patch, 
> SpanQueryParser_v1.patch.gz, patch.txt
>
>
> This parser extends QueryParserBase and includes functionality from:
> * Classic QueryParser: most of its syntax
> * SurroundQueryParser: recursive parsing for "near" and "not" clauses.
> * ComplexPhraseQueryParser: can handle "near" queries that include multiterms 
> (wildcard, fuzzy, regex, prefix),
> * AnalyzingQueryParser: has an option to analyze multiterms.
> At a high level, there's a first pass BooleanQuery/field parser and then a 
> span query parser handles all terminal nodes and phrases.
> Same as classic syntax:
> * term: test 
> * fuzzy: roam~0.8, roam~2
> * wildcard: te?t, test*, t*st
> * regex: /\[mb\]oat/
> * phrase: "jakarta apache"
> * phrase with slop: "jakarta apache"~3
> * default "or" clause: jakarta apache
> * grouping "or" clause: (jakarta apache)
> * boolean and +/-: (lucene OR apache) NOT jakarta; +lucene +apache -jakarta
> * multiple fields: title:lucene author:hatcher
>  
> Main additions in SpanQueryParser syntax vs. classic syntax:
> * Can require "in order" for phrases with slop with the \~> operator: 
> "jakarta apache"\~>3
> * Can specify "not near": "fever bieber"!\~3,10 ::
> find "fever" but not if "bieber" appears within 3 words before or 10 
> words after it.
> * Fully recursive phrasal queries with \[ and \]; as in: \[\[jakarta 
> apache\]~3 lucene\]\~>4 :: 
> find "jakarta" within 3 words of "apache", and that hit has to be within 
> four words before "lucene"
> * Can also use \[\] for single level phrasal queries instead of " as in: 
> \[jakarta apache\]
> * Can use "or grouping" clauses in phrasal queries: "apache (lucene solr)"\~3 
> :: find "apache" and then either "lucene" or "solr" within three words.
> * Can use multiterms in phrasal queries: "jakarta\~1 ap*che"\~2
> * Did I mention full recursion: \[\[jakarta\~1 ap*che\]\~2 (solr~ 
> /l\[ou\]\+\[cs\]\[en\]\+/)]\~10 :: Find something like "jakarta" within two 
> words of "ap*che" and that hit has to be within ten words of something like 
> "solr" or that "lucene" regex.
> * Can require at least x number of hits at boolean level: "apache AND (lucene 
> solr tika)~2
> * Can use negative only query: -jakarta :: Find all docs that don't contain 
> "jakarta"
> * Can use an edit distance > 2 for fuzzy query via SlowFuzzyQuery (beware of 
> potential performance issues!).
> Trivial additions:
> * Can specify prefix length in fuzzy queries: jakarta~1,2 (edit distance =1, 
> prefix =2)
> * Can specify Optimal String Alignment (OSA) vs Levenshtein for distance 
> <=2: jakarta~1 (OSA) vs jakarta~>1 (Levenshtein)
> This parser can be very useful for concordance tasks (see also LUCENE-5317 
> and LUCENE-5318) and for analytical search.  
> Until LUCENE-2878 is closed, this might have a use for fans of SpanQuery.
> Most of the documentation is in the javadoc for SpanQueryParser.
> Any and all feedback is welcome.  Thank you.
> Until this is added to the Lucene project, I've added a standalone 
> lucene-addons repo (with jars compiled for the latest stable build of Lucene) 
>  on [github|https://github.com/tballison/lucene-addons].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7986) JDBC Driver for SQL Interface

2015-09-10 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-7986:
-
Description: 
This ticket is to create a JDBC Driver (thin client) for the new SQL interface 
(SOLR-7560). As part of this ticket a driver will be added to the Solrj library 
under the package: *org.apache.solr.client.solrj.io.sql*

Initial implementation will include basic *Driver*, *Connection*, *Statement* 
and *ResultSet* implementations.

Future releases can build on this implementation to support a wide range of 
JDBC clients and tools.

*Syntax using parallel Map/Reduce for aggregations*:
{code}
Properties props = new Properties();
props.put("aggregatioMode", "map_reduce");
props.put("numWorkers", "10");
Class.forName("org.apache.solr.client.solrj.io.sql.DriverImpl").newInstance();
Connection con = 
DriverManager.getConnection("jdbc:solr:?collection=", 
props);
Statement stmt = con.createStatement();
ResultSet rs = stmt.executeQuery("select ");
while(rs.next()) {

}
{code} 

*Syntax using JSON facet API for aggregations*:

{code}
Properties props = new Properties();
props.put("aggregationMode", "facet");
Class.forName("org.apache.solr.client.solrj.io.sql.DriverImpl").newInstance();
Connection con = 
DriverManager.getConnection("jdbc:solr:?collection=", 
props);
Statement stmt = con.createStatement();
ResultSet rs = stmt.executeQuery("select ");
while(rs.next()) {

}

{code}


 

  was:
This ticket is to create a JDBC Driver (thin client) for the new SQL interface 
(SOLR-7560). As part of this ticket a driver will be added to the Solrj library 
under the package: *org.apache.solr.client.solrj.io.sql*

Initial implementation will include basic *Driver*, *Connection*, *Statement* 
and *ResultSet* implementations.

Future releases can build on this implementation to support a wide range of 
JDBC clients and tools.


 


> JDBC Driver for SQL Interface
> -
>
> Key: SOLR-7986
> URL: https://issues.apache.org/jira/browse/SOLR-7986
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java
>Affects Versions: Trunk
>Reporter: Joel Bernstein
> Attachments: SOLR-7986.patch, SOLR-7986.patch, SOLR-7986.patch, 
> SOLR-7986.patch
>
>
> This ticket is to create a JDBC Driver (thin client) for the new SQL 
> interface (SOLR-7560). As part of this ticket a driver will be added to the 
> Solrj library under the package: *org.apache.solr.client.solrj.io.sql*
> Initial implementation will include basic *Driver*, *Connection*, *Statement* 
> and *ResultSet* implementations.
> Future releases can build on this implementation to support a wide range of 
> JDBC clients and tools.
> *Syntax using parallel Map/Reduce for aggregations*:
> {code}
> Properties props = new Properties();
> props.put("aggregatioMode", "map_reduce");
> props.put("numWorkers", "10");
> Class.forName("org.apache.solr.client.solrj.io.sql.DriverImpl").newInstance();
> Connection con = 
> DriverManager.getConnection("jdbc:solr:?collection=",
>  props);
> Statement stmt = con.createStatement();
> ResultSet rs = stmt.executeQuery("select ");
> while(rs.next()) {
> }
> {code} 
> *Syntax using JSON facet API for aggregations*:
> {code}
> Properties props = new Properties();
> props.put("aggregationMode", "facet");
> Class.forName("org.apache.solr.client.solrj.io.sql.DriverImpl").newInstance();
> Connection con = 
> DriverManager.getConnection("jdbc:solr:?collection=",
>  props);
> Statement stmt = con.createStatement();
> ResultSet rs = stmt.executeQuery("select ");
> while(rs.next()) {
> }
> {code}
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b78) - Build # 14178 - Still Failing!

2015-09-10 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14178/
Java: 64bit/jdk1.9.0-ea-b78 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithKerberosAlt

Error Message:
5 threads leaked from SUITE scope at 
org.apache.solr.cloud.TestSolrCloudWithKerberosAlt: 1) Thread[id=1347, 
name=kdcReplayCache.data, state=TIMED_WAITING, 
group=TGRP-TestSolrCloudWithKerberosAlt] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:746)2) Thread[id=1349, 
name=groupCache.data, state=TIMED_WAITING, 
group=TGRP-TestSolrCloudWithKerberosAlt] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:746)3) Thread[id=1346, 
name=apacheds, state=WAITING, group=TGRP-TestSolrCloudWithKerberosAlt] 
at java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:516) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)4) Thread[id=1350, 
name=ou=system.data, state=TIMED_WAITING, 
group=TGRP-TestSolrCloudWithKerberosAlt] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:746)5) Thread[id=1348, 
name=changePwdReplayCache.data, state=TIMED_WAITING, 
group=TGRP-TestSolrCloudWithKerberosAlt] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:746)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.TestSolrCloudWithKerberosAlt: 
   1) Thread[id=1347, name=kdcReplayCache.data, state=TIMED_WAITING, 
group=TGRP-TestSolrCloudWithKerberosAlt]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at 

[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.8.0) - Build # 2663 - Failure!

2015-09-10 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2663/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.CloudExitableDirectoryReaderTest.test

Error Message:
No live SolrServers available to handle this request

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request
at 
__randomizedtesting.SeedInfo.seed([96AC41B1EF045A95:1EF87E6B41F8376D]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:350)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1099)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.queryServer(AbstractFullDistribZkTestBase.java:1378)
at 
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.assertPartialResults(CloudExitableDirectoryReaderTest.java:103)
at 
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.doTimeoutTests(CloudExitableDirectoryReaderTest.java:75)
at 
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.test(CloudExitableDirectoryReaderTest.java:54)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 

[jira] [Created] (LUCENE-6796) Some terms incorrectly highlighted in complex SpanQuery

2015-09-10 Thread Tim Allison (JIRA)
Tim Allison created LUCENE-6796:
---

 Summary: Some terms incorrectly highlighted in complex SpanQuery
 Key: LUCENE-6796
 URL: https://issues.apache.org/jira/browse/LUCENE-6796
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/highlighter
Affects Versions: 5.3
Reporter: Tim Allison
Priority: Trivial


[~modassar] initially raised this on LUCENE-5205.  I'm opening this as a 
separate issue.

If a SpanNear is within a SpanOr, it looks like the child terms within the 
SpanNear query are getting highlighted even if there is no match on that 
SpanNear query in some cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5205) SpanQueryParser with recursion, analysis and syntax very similar to classic QueryParser

2015-09-10 Thread Tim Allison (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14739156#comment-14739156
 ] 

Tim Allison commented on LUCENE-5205:
-

Y, I added it in a test case over on LUCENE-6796.  I'd expect: {{b c 
d}}, not {{b c d}}.


> SpanQueryParser with recursion, analysis and syntax very similar to classic 
> QueryParser
> ---
>
> Key: LUCENE-5205
> URL: https://issues.apache.org/jira/browse/LUCENE-5205
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/queryparser
>Reporter: Tim Allison
>  Labels: patch
> Attachments: LUCENE-5205-cleanup-tests.patch, 
> LUCENE-5205-date-pkg-prvt.patch, LUCENE-5205.patch.gz, LUCENE-5205.patch.gz, 
> LUCENE-5205_dateTestReInitPkgPrvt.patch, 
> LUCENE-5205_improve_stop_word_handling.patch, 
> LUCENE-5205_smallTestMods.patch, LUCENE_5205.patch, 
> SpanQueryParser_v1.patch.gz, patch.txt
>
>
> This parser extends QueryParserBase and includes functionality from:
> * Classic QueryParser: most of its syntax
> * SurroundQueryParser: recursive parsing for "near" and "not" clauses.
> * ComplexPhraseQueryParser: can handle "near" queries that include multiterms 
> (wildcard, fuzzy, regex, prefix),
> * AnalyzingQueryParser: has an option to analyze multiterms.
> At a high level, there's a first pass BooleanQuery/field parser and then a 
> span query parser handles all terminal nodes and phrases.
> Same as classic syntax:
> * term: test 
> * fuzzy: roam~0.8, roam~2
> * wildcard: te?t, test*, t*st
> * regex: /\[mb\]oat/
> * phrase: "jakarta apache"
> * phrase with slop: "jakarta apache"~3
> * default "or" clause: jakarta apache
> * grouping "or" clause: (jakarta apache)
> * boolean and +/-: (lucene OR apache) NOT jakarta; +lucene +apache -jakarta
> * multiple fields: title:lucene author:hatcher
>  
> Main additions in SpanQueryParser syntax vs. classic syntax:
> * Can require "in order" for phrases with slop with the \~> operator: 
> "jakarta apache"\~>3
> * Can specify "not near": "fever bieber"!\~3,10 ::
> find "fever" but not if "bieber" appears within 3 words before or 10 
> words after it.
> * Fully recursive phrasal queries with \[ and \]; as in: \[\[jakarta 
> apache\]~3 lucene\]\~>4 :: 
> find "jakarta" within 3 words of "apache", and that hit has to be within 
> four words before "lucene"
> * Can also use \[\] for single level phrasal queries instead of " as in: 
> \[jakarta apache\]
> * Can use "or grouping" clauses in phrasal queries: "apache (lucene solr)"\~3 
> :: find "apache" and then either "lucene" or "solr" within three words.
> * Can use multiterms in phrasal queries: "jakarta\~1 ap*che"\~2
> * Did I mention full recursion: \[\[jakarta\~1 ap*che\]\~2 (solr~ 
> /l\[ou\]\+\[cs\]\[en\]\+/)]\~10 :: Find something like "jakarta" within two 
> words of "ap*che" and that hit has to be within ten words of something like 
> "solr" or that "lucene" regex.
> * Can require at least x number of hits at boolean level: "apache AND (lucene 
> solr tika)~2
> * Can use negative only query: -jakarta :: Find all docs that don't contain 
> "jakarta"
> * Can use an edit distance > 2 for fuzzy query via SlowFuzzyQuery (beware of 
> potential performance issues!).
> Trivial additions:
> * Can specify prefix length in fuzzy queries: jakarta~1,2 (edit distance =1, 
> prefix =2)
> * Can specify Optimal String Alignment (OSA) vs Levenshtein for distance 
> <=2: jakarta~1 (OSA) vs jakarta~>1 (Levenshtein)
> This parser can be very useful for concordance tasks (see also LUCENE-5317 
> and LUCENE-5318) and for analytical search.  
> Until LUCENE-2878 is closed, this might have a use for fans of SpanQuery.
> Most of the documentation is in the javadoc for SpanQueryParser.
> Any and all feedback is welcome.  Thank you.
> Until this is added to the Lucene project, I've added a standalone 
> lucene-addons repo (with jars compiled for the latest stable build of Lucene) 
>  on [github|https://github.com/tballison/lucene-addons].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5503) Trivial fixes to WeightedSpanTermExtractor

2015-09-10 Thread Tim Allison (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Allison updated LUCENE-5503:

Attachment: LUCENE-5503v2.patch

Updated patch that works with current trunk.

> Trivial fixes to WeightedSpanTermExtractor
> --
>
> Key: LUCENE-5503
> URL: https://issues.apache.org/jira/browse/LUCENE-5503
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Affects Versions: 4.7
>Reporter: Tim Allison
>Assignee: David Smiley
>Priority: Minor
> Attachments: LUCENE-5503.patch, LUCENE-5503v2.patch
>
>
> The conversion of PhraseQuery to SpanNearQuery miscalculates the slop if 
> there are stop words in some cases.  The issue only really appears if there 
> is more than one intervening run of stop words: ab the cd the the ef.
> I also noticed that the inOrder determination is based on the newly 
> calculated slop, and it should probably be based on the original 
> phraseQuery.getSlop()
> Patch and unit tests are on the way.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7986) JDBC Driver for SQL Interface

2015-09-10 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14739189#comment-14739189
 ] 

Uwe Schindler commented on SOLR-7986:
-

For a fully compliant JDBC driver you should also add a META-INF resource, so it 
can be loaded by DriverManager without doing the forName() on the driver class. 
That way you can use the driver without any initialization of the Solr-specific 
driver.
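
A minimal sketch of what that service resource could look like; the resource 
path and file name are fixed by the JDBC 4 / ServiceLoader contract, and the 
driver class name is taken from the snippets in the description:

{code}
# contents of META-INF/services/java.sql.Driver inside the solrj jar
org.apache.solr.client.solrj.io.sql.DriverImpl
{code}

With that resource on the classpath, the Class.forName() line in the examples 
can simply be dropped and DriverManager will discover the driver on its own 
(zkHost and myCollection below are placeholder values):

{code}
Properties props = new Properties();
props.put("aggregationMode", "facet");
// no Class.forName() needed -- DriverManager locates DriverImpl via ServiceLoader
Connection con = DriverManager.getConnection("jdbc:solr:zkHost?collection=myCollection", props);
{code}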

> JDBC Driver for SQL Interface
> -
>
> Key: SOLR-7986
> URL: https://issues.apache.org/jira/browse/SOLR-7986
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java
>Affects Versions: Trunk
>Reporter: Joel Bernstein
> Attachments: SOLR-7986.patch, SOLR-7986.patch, SOLR-7986.patch, 
> SOLR-7986.patch
>
>
> This ticket is to create a JDBC Driver (thin client) for the new SQL 
> interface (SOLR-7560). As part of this ticket a driver will be added to the 
> Solrj library under the package: *org.apache.solr.client.solrj.io.sql*
> Initial implementation will include basic *Driver*, *Connection*, *Statement* 
> and *ResultSet* implementations.
> Future releases can build on this implementation to support a wide range of 
> JDBC clients and tools.
> *Syntax using parallel Map/Reduce for aggregations*:
> {code}
> Properties props = new Properties();
> props.put("aggregatioMode", "map_reduce");
> props.put("numWorkers", "10");
> Class.forName("org.apache.solr.client.solrj.io.sql.DriverImpl").newInstance();
> Connection con = 
> DriverManager.getConnection("jdbc:solr:?collection=",
>  props);
> Statement stmt = con.createStatement();
> ResultSet rs = stmt.executeQuery("select ");
> while(rs.next()) {
> }
> {code} 
> *Syntax using JSON facet API for aggregations*:
> {code}
> Properties props = new Properties();
> props.put("aggregationMode", "facet");
> Class.forName("org.apache.solr.client.solrj.io.sql.DriverImpl").newInstance();
> Connection con = 
> DriverManager.getConnection("jdbc:solr:?collection=",
>  props);
> Statement stmt = con.createStatement();
> ResultSet rs = stmt.executeQuery("select ");
> while(rs.next()) {
> }
> {code}
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8029) Modernize and standardize Solr APIs

2015-09-10 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14739160#comment-14739160
 ] 

Noble Paul edited comment on SOLR-8029 at 9/10/15 5:14 PM:
---

bq. we've missed a heap of content in your proposal document. Can we break it 
out into JIRAs so we can explore each part?

I'll eventually create more sub-tasks for implementing specific things. But if 
I do it now, you would not get the complete picture that the doc provides.

bq.I'd suggest that the 'query' is unneeded.

I beg to differ. It will be rather awkward to make a request like 
{{/v2/gettingstarted?q=fieldname:val}}. The {{}} means a lot of 
things, not just the contents of the index.


was (Author: noble.paul):
bq. we've missed a heap of content in your proposal document. Can we break it 
out into JIRAs so we can explore each part?

I'll eventually create more sub-tasks for implementing specific things. But if 
I do it now, you would not get the complete picture that the doc provides.

bq.I'd suggest that the 'query' is unneeded.

I beg to differ. It will be rather awkward to make a request like 
{{/v2/gettingstarted?q=*:*}}. The {{}} means a lot of things, not 
just the contents of the index.

> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 3 types of requests in the new API 
> * {{/solr2//*}} : Operations on specific collections 
> * {{/solr2/_cluster/*}} : Cluster-wide operations which are not specific to 
> any collections. 
> * {{/solr2/_node/*}} : Operations on the node receiving the request. This is 
> the counterpart of the core admin API
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5205) SpanQueryParser with recursion, analysis and syntax very similar to classic QueryParser

2015-09-10 Thread Tim Allison (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14739167#comment-14739167
 ] 

Tim Allison commented on LUCENE-5205:
-

Given that the momentum has disappeared for this parser, should I resolve this 
as "won't fix" and leave a pointer to github or should I leave the issue open?

> SpanQueryParser with recursion, analysis and syntax very similar to classic 
> QueryParser
> ---
>
> Key: LUCENE-5205
> URL: https://issues.apache.org/jira/browse/LUCENE-5205
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/queryparser
>Reporter: Tim Allison
>  Labels: patch
> Attachments: LUCENE-5205-cleanup-tests.patch, 
> LUCENE-5205-date-pkg-prvt.patch, LUCENE-5205.patch.gz, LUCENE-5205.patch.gz, 
> LUCENE-5205_dateTestReInitPkgPrvt.patch, 
> LUCENE-5205_improve_stop_word_handling.patch, 
> LUCENE-5205_smallTestMods.patch, LUCENE_5205.patch, 
> SpanQueryParser_v1.patch.gz, patch.txt
>
>
> This parser extends QueryParserBase and includes functionality from:
> * Classic QueryParser: most of its syntax
> * SurroundQueryParser: recursive parsing for "near" and "not" clauses.
> * ComplexPhraseQueryParser: can handle "near" queries that include multiterms 
> (wildcard, fuzzy, regex, prefix),
> * AnalyzingQueryParser: has an option to analyze multiterms.
> At a high level, there's a first pass BooleanQuery/field parser and then a 
> span query parser handles all terminal nodes and phrases.
> Same as classic syntax:
> * term: test 
> * fuzzy: roam~0.8, roam~2
> * wildcard: te?t, test*, t*st
> * regex: /\[mb\]oat/
> * phrase: "jakarta apache"
> * phrase with slop: "jakarta apache"~3
> * default "or" clause: jakarta apache
> * grouping "or" clause: (jakarta apache)
> * boolean and +/-: (lucene OR apache) NOT jakarta; +lucene +apache -jakarta
> * multiple fields: title:lucene author:hatcher
>  
> Main additions in SpanQueryParser syntax vs. classic syntax:
> * Can require "in order" for phrases with slop with the \~> operator: 
> "jakarta apache"\~>3
> * Can specify "not near": "fever bieber"!\~3,10 ::
> find "fever" but not if "bieber" appears within 3 words before or 10 
> words after it.
> * Fully recursive phrasal queries with \[ and \]; as in: \[\[jakarta 
> apache\]~3 lucene\]\~>4 :: 
> find "jakarta" within 3 words of "apache", and that hit has to be within 
> four words before "lucene"
> * Can also use \[\] for single level phrasal queries instead of " as in: 
> \[jakarta apache\]
> * Can use "or grouping" clauses in phrasal queries: "apache (lucene solr)"\~3 
> :: find "apache" and then either "lucene" or "solr" within three words.
> * Can use multiterms in phrasal queries: "jakarta\~1 ap*che"\~2
> * Did I mention full recursion: \[\[jakarta\~1 ap*che\]\~2 (solr~ 
> /l\[ou\]\+\[cs\]\[en\]\+/)]\~10 :: Find something like "jakarta" within two 
> words of "ap*che" and that hit has to be within ten words of something like 
> "solr" or that "lucene" regex.
> * Can require at least x number of hits at boolean level: "apache AND (lucene 
> solr tika)~2
> * Can use negative only query: -jakarta :: Find all docs that don't contain 
> "jakarta"
> * Can use an edit distance > 2 for fuzzy query via SlowFuzzyQuery (beware of 
> potential performance issues!).
> Trivial additions:
> * Can specify prefix length in fuzzy queries: jakarta~1,2 (edit distance =1, 
> prefix =2)
> * Can specify Optimal String Alignment (OSA) vs Levenshtein for distance 
> <=2: jakarta~1 (OSA) vs jakarta~>1 (Levenshtein)
> This parser can be very useful for concordance tasks (see also LUCENE-5317 
> and LUCENE-5318) and for analytical search.  
> Until LUCENE-2878 is closed, this might have a use for fans of SpanQuery.
> Most of the documentation is in the javadoc for SpanQueryParser.
> Any and all feedback is welcome.  Thank you.
> Until this is added to the Lucene project, I've added a standalone 
> lucene-addons repo (with jars compiled for the latest stable build of Lucene) 
>  on [github|https://github.com/tballison/lucene-addons].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7986) JDBC Driver for SQL Interface

2015-09-10 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-7986:
-
Description: 
This ticket is to create a JDBC Driver (thin client) for the new SQL interface 
(SOLR-7560). As part of this ticket a driver will be added to the Solrj library 
under the package: *org.apache.solr.client.solrj.io.sql*

Initial implementation will include basic *Driver*, *Connection*, *Statement* 
and *ResultSet* implementations.

Future releases can build on this implementation to support a wide range of 
JDBC clients and tools.

*Syntax using parallel Map/Reduce for aggregations*:
{code}
Properties props = new Properties();
props.put("aggregatioMode", "map_reduce");
props.put("numWorkers", "10");
Class.forName("org.apache.solr.client.solrj.io.sql.DriverImpl").newInstance();
Connection con = 
DriverManager.getConnection("jdbc:solr:?collection=", 
props);
Statement stmt = con.createStatement();
ResultSet rs = stmt.executeQuery("select ");
while(rs.next()) {

}
{code} 

*Syntax using JSON facet API for aggregations*:

{code}
Properties props = new Properties();
props.put("aggregationMode", "facet");
Class.forName("org.apache.solr.client.solrj.io.sql.DriverImpl").newInstance();
Connection con = 
DriverManager.getConnection("jdbc:solr:?collection=", 
props);
Statement stmt = con.createStatement();
ResultSet rs = stmt.executeQuery("select ");
while(rs.next()) {

}

{code}


 

  was:
This ticket is to create a JDBC Driver (thin client) for the new SQL interface 
(SOLR-7560). As part of this ticket a driver will be added to the Solrj library 
under the package: *org.apache.solr.client.solrj.io.sql*

Initial implementation will include basic *Driver*, *Connection*, *Statement* 
and *ResultSet* implementations.

Future releases can build on this implementation to support a wide range of 
JDBC clients and tools.

*Syntax using parallel Map/Reduce for aggregations*:
{code}
Properties props = new Properties();
props.put("aggregatioMode", "map_reduce");
props.put("numWorkers", "10");
Class.forName("org.apache.solr.client.solrj.io.sql.DriverImpl").newInstance();
Connection con = 
DriverManager.getConnection("jdbc:solr:?collection=", 
props);
Statement stmt = con.createStatement();
ResultSet rs = stmt.executeQuery("select ");
while(rs.next()) {

}
{code} 

*Syntax using JSON facet API for aggregations*:

{code}
Properties props = new Properties();
props.put("aggregationMode", "facet");
Class.forName("org.apache.solr.client.solrj.io.sql.DriverImpl").newInstance();
Connection con = 
DriverManager.getConnection("jdbc:solr:?collection=", 
props);
Statement stmt = con.createStatement();
ResultSet rs = stmt.executeQuery("select ");
while(rs.next()) {

}

{code}


 


> JDBC Driver for SQL Interface
> -
>
> Key: SOLR-7986
> URL: https://issues.apache.org/jira/browse/SOLR-7986
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java
>Affects Versions: Trunk
>Reporter: Joel Bernstein
> Attachments: SOLR-7986.patch, SOLR-7986.patch, SOLR-7986.patch, 
> SOLR-7986.patch
>
>
> This ticket is to create a JDBC Driver (thin client) for the new SQL 
> interface (SOLR-7560). As part of this ticket a driver will be added to the 
> Solrj library under the package: *org.apache.solr.client.solrj.io.sql*
> Initial implementation will include basic *Driver*, *Connection*, *Statement* 
> and *ResultSet* implementations.
> Future releases can build on this implementation to support a wide range of 
> JDBC clients and tools.
> *Syntax using parallel Map/Reduce for aggregations*:
> {code}
> Properties props = new Properties();
> props.put("aggregatioMode", "map_reduce");
> props.put("numWorkers", "10");
> Class.forName("org.apache.solr.client.solrj.io.sql.DriverImpl").newInstance();
> Connection con = 
> DriverManager.getConnection("jdbc:solr:?collection=",
>  props);
> Statement stmt = con.createStatement();
> ResultSet rs = stmt.executeQuery("select ");
> while(rs.next()) {
> }
> {code} 
> *Syntax using JSON facet API for aggregations*:
> {code}
> Properties props = new Properties();
> props.put("aggregationMode", "facet");
> Class.forName("org.apache.solr.client.solrj.io.sql.DriverImpl").newInstance();
> Connection con = 
> DriverManager.getConnection("jdbc:solr:?collection=",
>  props);
> Statement stmt = con.createStatement();
> ResultSet rs = stmt.executeQuery("select ");
> while(rs.next()) {
> }
> {code}
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7986) JDBC Driver for SQL Interface

2015-09-10 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-7986:
-
Attachment: SOLR-7986.patch

First basic implementation. No tests yet, so I have no idea whether this works.

The switch between Map/Reduce and the JSON facet API for aggregation gets set 
when the Connection is created and currently can't be changed for the life of 
the connection.

This is not as flexible as I would like, but for tools like Tableau that's how 
it will have to work.

For more programmable clients we can add a switch to the ConnectionImpl to flip 
back and forth; see the sketch below.
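
A rough sketch of what that programmatic switch could look like; 
setAggregationMode and the mode values below are illustrative names only, not 
part of the attached patch (zkHost and myCollection are placeholders):

{code}
// Hypothetical usage if ConnectionImpl grew an aggregation-mode switch
ConnectionImpl con = (ConnectionImpl) DriverManager.getConnection(
    "jdbc:solr:zkHost?collection=myCollection", props);

con.setAggregationMode("map_reduce");  // heavy aggregations via parallel map/reduce workers
ResultSet heavy = con.createStatement().executeQuery("select ...");

con.setAggregationMode("facet");       // low-latency aggregations via the JSON facet API
ResultSet light = con.createStatement().executeQuery("select ...");
{code}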

> JDBC Driver for SQL Interface
> -
>
> Key: SOLR-7986
> URL: https://issues.apache.org/jira/browse/SOLR-7986
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java
>Affects Versions: Trunk
>Reporter: Joel Bernstein
> Attachments: SOLR-7986.patch, SOLR-7986.patch
>
>
> This ticket is to create a JDBC Driver (thin client) for the new SQL 
> interface (SOLR-7560). As part of this ticket a driver will be added to the 
> Solrj library under the package: *org.apache.solr.client.solrj.io.sql*
> Initial implementation will include basic *Driver*, *Connection*, *Statement* 
> and *ResultSet* implementations.
> Future releases can build on this implementation to support a wide range of 
> JDBC clients and tools.
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6777) Switch GeoPointTermsEnum range list to use a reusable BytesRef

2015-09-10 Thread Nicholas Knize (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Knize updated LUCENE-6777:
---
Attachment: LUCENE-6777.patch

Updated patch to address comments.

* Removes duplicate longToPrefixCodedBytes
* Refactors variable naming
* Uses reusable BytesRefBuilder
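
A minimal sketch of the reusable-BytesRefBuilder idea from the last bullet, 
assuming a NumericUtils-style longToPrefixCoded(long, int, BytesRefBuilder) 
encoder; the names here are illustrative, the real change is in the attached 
patch:

{code}
// One builder reused for every range instead of allocating a BytesRef per range;
// ranges are sorted on their long value and only encoded when the TermsEnum needs bytes.
private final BytesRefBuilder currentCellBRB = new BytesRefBuilder();

private BytesRef cellToBytesRef(long cellStart, int shift) {
  NumericUtils.longToPrefixCoded(cellStart, shift, currentCellBRB); // overwrites previous contents
  return currentCellBRB.get();
}
{code}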

> Switch GeoPointTermsEnum range list to use a reusable BytesRef 
> ---
>
> Key: LUCENE-6777
> URL: https://issues.apache.org/jira/browse/LUCENE-6777
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Nicholas Knize
> Attachments: LUCENE-6777.patch, LUCENE-6777.patch, LUCENE-6777.patch, 
> LUCENE-6777.patch
>
>
> GeoPointTermsEnum currently constructs a BytesRef for every computed range, 
> then sorts on this BytesRef.  This adds an unnecessary memory overhead since 
> the TermsEnum only requires BytesRef on calls to nextSeekTerm and accept and 
> the ranges only need to be sorted by their long representation. This issue 
> adds the following two improvements:
> 1. Lazily compute the BytesRef on demand only when it's needed
> 2. Add a single, transient BytesRef to GeoPointTermsEnum
> This will further cut back on heap usage when constructing ranges across 
> every segment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6796) Some terms incorrectly highlighted in complex SpanQuery

2015-09-10 Thread Tim Allison (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Allison updated LUCENE-6796:

Description: 
[~modassar] initially raised this on LUCENE-5205.  I'm opening this as a 
separate issue.

If a SpanNear is within a SpanOr, it looks like the child terms within the 
SpanNear query are getting highlighted even if there is no match on that 
SpanNear query...in some special cases.  Specifically, in the format of the 
parser in LUCENE-5205 {{"(b [c z]) d\"~2"}}, which is equivalent to: find "b" 
or the phrase "c z" within two words of "d" either direction

This affects trunk. 

  was:
[~modassar] initially raised this on LUCENE-5205.  I'm opening this as a 
separate issue.

If a SpanNear is within a SpanOr, it looks like the child terms within the 
SpanNear query are getting highlighted even if there is no match on that 
SpanNear query in some cases.


> Some terms incorrectly highlighted in complex SpanQuery
> ---
>
> Key: LUCENE-6796
> URL: https://issues.apache.org/jira/browse/LUCENE-6796
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Affects Versions: 5.3
>Reporter: Tim Allison
>Priority: Trivial
> Attachments: LUCENE-6796-testcase.patch
>
>
> [~modassar] initially raised this on LUCENE-5205.  I'm opening this as a 
> separate issue.
> If a SpanNear is within a SpanOr, it looks like the child terms within the 
> SpanNear query are getting highlighted even if there is no match on that 
> SpanNear query...in some special cases.  Specifically, in the format of the 
> parser in LUCENE-5205 {{"(b [c z]) d\"~2"}}, which is equivalent to: find "b" 
> or the phrase "c z" within two words of "d" either direction
> This affects trunk. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-7986) JDBC Driver for SQL Interface

2015-09-10 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14739189#comment-14739189
 ] 

Uwe Schindler edited comment on SOLR-7986 at 9/10/15 5:26 PM:
--

For a fully compliant JDBC driver you should also add a META-INF resource, so it 
can be loaded by DriverManager without doing the forName() on the driver class. 
That way you can use the driver without any initialization of the Solr-specific 
driver, just by providing the URL to DriverManager.


was (Author: thetaphi):
For a fully compliant JDBC driver you should also add a META-INF resource, so it 
can be loaded by DriverManager without doing the forName() on the base class. 
By that you can call the driver without any initialization on the Solr-specific 
driver.

> JDBC Driver for SQL Interface
> -
>
> Key: SOLR-7986
> URL: https://issues.apache.org/jira/browse/SOLR-7986
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java
>Affects Versions: Trunk
>Reporter: Joel Bernstein
> Attachments: SOLR-7986.patch, SOLR-7986.patch, SOLR-7986.patch, 
> SOLR-7986.patch
>
>
> This ticket is to create a JDBC Driver (thin client) for the new SQL 
> interface (SOLR-7560). As part of this ticket a driver will be added to the 
> Solrj library under the package: *org.apache.solr.client.solrj.io.sql*
> Initial implementation will include basic *Driver*, *Connection*, *Statement* 
> and *ResultSet* implementations.
> Future releases can build on this implementation to support a wide range of 
> JDBC clients and tools.
> *Syntax using parallel Map/Reduce for aggregations*:
> {code}
> Properties props = new Properties();
> props.put("aggregatioMode", "map_reduce");
> props.put("numWorkers", "10");
> Class.forName("org.apache.solr.client.solrj.io.sql.DriverImpl").newInstance();
> Connection con = 
> DriverManager.getConnection("jdbc:solr:?collection=",
>  props);
> Statement stmt = con.createStatement();
> ResultSet rs = stmt.executeQuery("select ");
> while(rs.next()) {
> }
> {code} 
> *Syntax using JSON facet API for aggregations*:
> {code}
> Properties props = new Properties();
> props.put("aggregationMode", "facet");
> Class.forName("org.apache.solr.client.solrj.io.sql.DriverImpl").newInstance();
> Connection con = 
> DriverManager.getConnection("jdbc:solr:?collection=",
>  props);
> Statement stmt = con.createStatement();
> ResultSet rs = stmt.executeQuery("select ");
> while(rs.next()) {
> }
> {code}
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8032) unhandled exceptions

2015-09-10 Thread songwanging (JIRA)
songwanging created SOLR-8032:
-

 Summary: unhandled exceptions
 Key: SOLR-8032
 URL: https://issues.apache.org/jira/browse/SOLR-8032
 Project: Solr
  Issue Type: Improvement
Affects Versions: 5.1, 5.0
Reporter: songwanging
Priority: Minor


In method close() of class RecoveryStrategy 
(solr\core\src\java\org\apache\solr\cloud\RecoveryStrategy.java):

The catch block catch (NullPointerException e) performs no action to handle 
the expected exception, which makes it useless.

To fix this, we should add code to the catch block that handles (or at least 
logs) the exception.

public void close() {
  close = true;
  try {
    prevSendPreRecoveryHttpUriRequest.abort();
  } catch (NullPointerException e) {
    // okay
  }
  ...
}

==
In method startLeaderInitiatedRecoveryOnReplicas() of class ElectionContext 
(solr\core\src\java\org\apache\solr\cloud\ElectionContext.java):

The catch block catch (NoNodeException nne) performs no action to handle the 
expected exception, which makes it useless.

To fix this, we should add code to the catch block that handles (or at least 
logs) the exception.

try {
  replicas = zkClient.getChildren(znodePath, null, false);
} catch (NoNodeException nne) {
  // this can be ignored
}
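
A minimal sketch of what "handling" could look like for the first case, 
assuming the usual SLF4J logger field that Solr classes already carry; the log 
message is illustrative only, not part of any attached patch:

{code}
public void close() {
  close = true;
  try {
    prevSendPreRecoveryHttpUriRequest.abort();
  } catch (NullPointerException e) {
    // The request may simply not have been created yet; log at debug level
    // instead of swallowing the exception silently, so the case stays traceable.
    log.debug("No prevSendPreRecoveryHttpUriRequest to abort", e);
  }
  ...
}
{code}

The NoNodeException case in ElectionContext could get the same treatment, e.g. 
a debug-level log noting that the znode has already gone away.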



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 953 - Still Failing

2015-09-10 Thread Dawid Weiss
I told Robert in a private conversation that I think the RUE rule
should be copied to Lucene and tweaked in here (where the security
manager is present for tests and where there's so much testing against
new JVMs). I'll gladly port the changes back to the RR project; it's
just for convenience that I think we should have a Lucene copy.
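
A rough sketch of the friendlier failure Uwe suggests in the quoted message 
below, assuming the copied rule wraps the reflective field walk (the names are 
illustrative only):

{code}
// Inside the copied static-fields size check
java.lang.reflect.Field[] fields;
try {
  fields = clazz.getDeclaredFields();
} catch (SecurityException e) {
  // AccessControlException extends SecurityException; under the test security manager
  // some JDK-internal classes cannot be reflected on, so name the offending class
  // instead of surfacing a bare access-denied error.
  throw new RuntimeException("Class " + clazz.getName()
      + " leaks a static instance of unknown size (reflective access denied)", e);
}
{code}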

Dawid

On Thu, Sep 10, 2015 at 3:32 PM, Uwe Schindler  wrote:
> Yes,
>
> I think because the error message is very confusing, maybe RamUsageEstimator 
> should catch this exception and then complain with "Class leaks a static 
> instance of  with unknown size."
> This would make it easier for developers to figure out what's wrong.
>
> Uwe
>
> -
> Uwe Schindler
> H.-H.-Meier-Allee 63, D-28213 Bremen
> http://www.thetaphi.de
> eMail: u...@thetaphi.de
>
>
>> -Original Message-
>> From: Dawid Weiss [mailto:dawid.we...@gmail.com]
>> Sent: Thursday, September 10, 2015 12:00 PM
>> To: dev@lucene.apache.org
>> Subject: Re: [JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 953 - Still 
>> Failing
>>
>> RamUsageEstimator tries to measure something that is doesn't have access
>> to, huh?
>>
>> java.security.AccessControlException: access denied
>> ("java.lang.RuntimePermission" "accessClassInPackage.sun.nio.ch")
>> at __randomizedtesting.SeedInfo.seed([4146977D8265D175]:0)
>> at
>> java.security.AccessControlContext.checkPermission(AccessControlContext.j
>> ava:372)
>> at
>> java.security.AccessController.checkPermission(AccessController.java:559)
>> at
>> java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
>> at
>> java.lang.SecurityManager.checkPackageAccess(SecurityManager.java:1525)
>> at java.lang.Class.checkPackageAccess(Class.java:2309)
>> at java.lang.Class.checkMemberAccess(Class.java:2289)
>> at java.lang.Class.getDeclaredFields(Class.java:1810)
>> at
>> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.createCache
>> Entry(RamUsageEstimator.java:573)
>>
>> On Thu, Sep 10, 2015 at 11:49 AM, Apache Jenkins Server
>>  wrote:
>> > Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/953/
>> >
>> > 1 tests failed.
>> > FAILED:
>> > junit.framework.TestSuite.org.apache.lucene.index.IndexSortingTest
>> >
>> > Error Message:
>> > access denied ("java.lang.RuntimePermission"
>> > "accessClassInPackage.sun.nio.ch")
>> >
>> > Stack Trace:
>> > java.security.AccessControlException: access denied
>> ("java.lang.RuntimePermission" "accessClassInPackage.sun.nio.ch")
>> > at __randomizedtesting.SeedInfo.seed([4146977D8265D175]:0)
>> > at
>> java.security.AccessControlContext.checkPermission(AccessControlContext.j
>> ava:372)
>> > at
>> java.security.AccessController.checkPermission(AccessController.java:559)
>> > at
>> java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
>> > at
>> java.lang.SecurityManager.checkPackageAccess(SecurityManager.java:1525)
>> > at java.lang.Class.checkPackageAccess(Class.java:2309)
>> > at java.lang.Class.checkMemberAccess(Class.java:2289)
>> > at java.lang.Class.getDeclaredFields(Class.java:1810)
>> > at
>> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.createCache
>> Entry(RamUsageEstimator.java:573)
>> > at
>> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.measureSize
>> Of(RamUsageEstimator.java:537)
>> > at
>> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.sizeOfAll(Ra
>> mUsageEstimator.java:385)
>> > at
>> com.carrotsearch.randomizedtesting.rules.StaticFieldsInvariantRule$1.afterA
>> lways(StaticFieldsInvariantRule.java:108)
>> > at
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(Stat
>> ementAdapter.java:43)
>> > at
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(Stat
>> ementAdapter.java:36)
>> > at
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(Stat
>> ementAdapter.java:36)
>> > at
>> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAss
>> ertionsRequired.java:54)
>> > at
>> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure
>> .java:48)
>> > at
>> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRule
>> IgnoreAfterMaxFailures.java:65)
>> > at
>> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnore
>> TestSuites.java:55)
>> > at
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(Stat
>> ementAdapter.java:36)
>> > at
>> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.
>> run(ThreadLeakControl.java:365)
>> > at java.lang.Thread.run(Thread.java:745)
>> >
>> >
>> >
>> >
>> > Build Log:
>> > [...truncated 8052 lines...]
>> >[junit4] Suite: 

[jira] [Created] (LUCENE-6797) Geo3d circle construction could benefit from its own factory

2015-09-10 Thread Karl Wright (JIRA)
Karl Wright created LUCENE-6797:
---

 Summary: Geo3d circle construction could benefit from its own 
factory
 Key: LUCENE-6797
 URL: https://issues.apache.org/jira/browse/LUCENE-6797
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/spatial
Reporter: Karl Wright


GeoCircles need special handling for whole-world situations and for single-point 
situations.  It would be better to have a factory that constructs the 
appropriate instance type based on the parameters than to try to fold everything 
into one class.
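
A rough sketch of the kind of factory meant here; the class and method names 
(GeoCircleFactory, GeoDegeneratePoint, GeoWorld, GeoStandardCircle) are 
illustrative guesses at the eventual shape, not a committed API:

{code}
// Illustrative only -- picks the appropriate GeoCircle implementation from the parameters.
public class GeoCircleFactory {
  private GeoCircleFactory() {}

  public static GeoCircle makeGeoCircle(final PlanetModel planetModel,
      final double latitude, final double longitude, final double radius) {
    if (radius < Vector.MINIMUM_RESOLUTION) {
      // A circle of (near) zero radius degenerates to a single point.
      return new GeoDegeneratePoint(planetModel, latitude, longitude);
    }
    if (radius > Math.PI - Vector.MINIMUM_RESOLUTION) {
      // A circle whose radius covers the whole sphere is just the whole world.
      return new GeoWorld(planetModel);
    }
    return new GeoStandardCircle(planetModel, latitude, longitude, radius);
  }
}
{code}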



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7986) JDBC Driver for SQL Interface

2015-09-10 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-7986:
-
Attachment: SOLR-7986.patch

New patch with cleaned up caching.

> JDBC Driver for SQL Interface
> -
>
> Key: SOLR-7986
> URL: https://issues.apache.org/jira/browse/SOLR-7986
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java
>Affects Versions: Trunk
>Reporter: Joel Bernstein
> Attachments: SOLR-7986.patch, SOLR-7986.patch, SOLR-7986.patch, 
> SOLR-7986.patch, SOLR-7986.patch
>
>
> This ticket is to create a JDBC Driver (thin client) for the new SQL 
> interface (SOLR-7560). As part of this ticket a driver will be added to the 
> Solrj library under the package: *org.apache.solr.client.solrj.io.sql*
> Initial implementation will include basic *Driver*, *Connection*, *Statement* 
> and *ResultSet* implementations.
> Future releases can build on this implementation to support a wide range of 
> JDBC clients and tools.
> *Syntax using parallel Map/Reduce for aggregations*:
> {code}
> Properties props = new Properties();
> props.put("aggregatioMode", "map_reduce");
> props.put("numWorkers", "10");
> Class.forName("org.apache.solr.client.solrj.io.sql.DriverImpl").newInstance();
> Connection con = 
> DriverManager.getConnection("jdbc:solr:?collection=",
>  props);
> Statement stmt = con.createStatement();
> ResultSet rs = stmt.executeQuery("select a, sum(b) from tablex group by a 
> having sum(b) > 100");
> while(rs.next()) {
> String a = rs.getString("a");
> double sumB = rs.getDouble("sum(b)");
> }
> {code} 
> *Syntax using JSON facet API for aggregations*:
> {code}
> Properties props = new Properties();
> props.put("aggregationMode", "facet");
> Class.forName("org.apache.solr.client.solrj.io.sql.DriverImpl").newInstance();
> Connection con = 
> DriverManager.getConnection("jdbc:solr:?collection=",
>  props);
> Statement stmt = con.createStatement();
> ResultSet rs = stmt.executeQuery("select a, sum(b) from tablex group by a 
> having sum(b) > 100");
> while(rs.next()) {
> String a = rs.getString("a");
> double sumB = rs.getDouble("sum(b)");
> }
> {code}
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6777) Switch GeoPointTermsEnum range list to use a reusable BytesRef

2015-09-10 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14739294#comment-14739294
 ] 

Michael McCandless commented on LUCENE-6777:


Thanks [~nknize], new patch looks great!  I'll run tests & commit shortly ...

> Switch GeoPointTermsEnum range list to use a reusable BytesRef 
> ---
>
> Key: LUCENE-6777
> URL: https://issues.apache.org/jira/browse/LUCENE-6777
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Nicholas Knize
> Attachments: LUCENE-6777.patch, LUCENE-6777.patch, LUCENE-6777.patch, 
> LUCENE-6777.patch
>
>
> GeoPointTermsEnum currently constructs a BytesRef for every computed range, 
> then sorts on this BytesRef.  This adds an unnecessary memory overhead since 
> the TermsEnum only requires BytesRef on calls to nextSeekTerm and accept and 
> the ranges only need to be sorted by their long representation. This issue 
> adds the following two improvements:
> 1. Lazily compute the BytesRef on demand only when it's needed
> 2. Add a single, transient BytesRef to GeoPointTermsEnum
> This will further cut back on heap usage when constructing ranges across 
> every segment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6777) Switch GeoPointTermsEnum range list to use a reusable BytesRef

2015-09-10 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-6777.

   Resolution: Fixed
Fix Version/s: 5.4
   Trunk

Thanks [~nknize]!

> Switch GeoPointTermsEnum range list to use a reusable BytesRef 
> ---
>
> Key: LUCENE-6777
> URL: https://issues.apache.org/jira/browse/LUCENE-6777
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Nicholas Knize
> Fix For: Trunk, 5.4
>
> Attachments: LUCENE-6777.patch, LUCENE-6777.patch, LUCENE-6777.patch, 
> LUCENE-6777.patch
>
>
> GeoPointTermsEnum currently constructs a BytesRef for every computed range, 
> then sorts on this BytesRef.  This adds an unnecessary memory overhead since 
> the TermsEnum only requires BytesRef on calls to nextSeekTerm and accept and 
> the ranges only need to be sorted by their long representation. This issue 
> adds the following two improvements:
> 1. Lazily compute the BytesRef on demand only when it's needed
> 2. Add a single, transient BytesRef to GeoPointTermsEnum
> This will further cut back on heap usage when constructing ranges across 
> every segment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6777) Switch GeoPointTermsEnum range list to use a reusable BytesRef

2015-09-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14739342#comment-14739342
 ] 

ASF subversion and git services commented on LUCENE-6777:
-

Commit 1702308 from [~mikemccand] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1702308 ]

LUCENE-6777: reuse BytesRef when visiting term ranges in GeoPointTermsEnum

> Switch GeoPointTermsEnum range list to use a reusable BytesRef 
> ---
>
> Key: LUCENE-6777
> URL: https://issues.apache.org/jira/browse/LUCENE-6777
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Nicholas Knize
> Fix For: Trunk, 5.4
>
> Attachments: LUCENE-6777.patch, LUCENE-6777.patch, LUCENE-6777.patch, 
> LUCENE-6777.patch
>
>
> GeoPointTermsEnum currently constructs a BytesRef for every computed range, 
> then sorts on this BytesRef.  This adds an unnecessary memory overhead since 
> the TermsEnum only requires BytesRef on calls to nextSeekTerm and accept and 
> the ranges only need to be sorted by their long representation. This issue 
> adds the following two improvements:
> 1. Lazily compute the BytesRef on demand only when it's needed
> 2. Add a single, transient BytesRef to GeoPointTermsEnum
> This will further cut back on heap usage when constructing ranges across 
> every segment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7986) JDBC Driver for SQL Interface

2015-09-10 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14739208#comment-14739208
 ] 

Joel Bernstein commented on SOLR-7986:
--

Thanks, I'll take a look!

> JDBC Driver for SQL Interface
> -
>
> Key: SOLR-7986
> URL: https://issues.apache.org/jira/browse/SOLR-7986
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java
>Affects Versions: Trunk
>Reporter: Joel Bernstein
> Attachments: SOLR-7986.patch, SOLR-7986.patch, SOLR-7986.patch, 
> SOLR-7986.patch
>
>
> This ticket is to create a JDBC Driver (thin client) for the new SQL 
> interface (SOLR-7560). As part of this ticket a driver will be added to the 
> Solrj library under the package: *org.apache.solr.client.solrj.io.sql*
> Initial implementation will include basic *Driver*, *Connection*, *Statement* 
> and *ResultSet* implementations.
> Future releases can build on this implementation to support a wide range of 
> JDBC clients and tools.
> *Syntax using parallel Map/Reduce for aggregations*:
> {code}
> Properties props = new Properties();
> props.put("aggregatioMode", "map_reduce");
> props.put("numWorkers", "10");
> Class.forName("org.apache.solr.client.solrj.io.sql.DriverImpl").newInstance();
> Connection con = 
> DriverManager.getConnection("jdbc:solr:?collection=",
>  props);
> Statement stmt = con.createStatement();
> ResultSet rs = stmt.executeQuery("select a, sum(b) from tablex group by a 
> having sum(b) > 100");
> while(rs.next()) {
> String a = rs.getString("a");
> double sumB = rs.getDouble("sum(b)");
> }
> {code} 
> *Syntax using JSON facet API for aggregations*:
> {code}
> Properties props = new Properties();
> props.put("aggregationMode", "facet");
> Class.forName("org.apache.solr.client.solrj.io.sql.DriverImpl").newInstance();
> Connection con = 
> DriverManager.getConnection("jdbc:solr:?collection=",
>  props);
> Statement stmt = con.createStatement();
> ResultSet rs = stmt.executeQuery("select a, sum(b) from tablex group by a 
> having sum(b) > 100");
> while(rs.next()) {
> String a = rs.getString("a");
> double sumB = rs.getDouble("sum(b)");
> }
> {code}
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7986) JDBC Driver for SQL Interface

2015-09-10 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-7986:
-
Description: 
This ticket is to create a JDBC Driver (thin client) for the new SQL interface 
(SOLR-7560). As part of this ticket a driver will be added to the Solrj library 
under the package: *org.apache.solr.client.solrj.io.sql*

Initial implementation will include basic *Driver*, *Connection*, *Statement* 
and *ResultSet* implementations.

Future releases can build on this implementation to support a wide range of 
JDBC clients and tools.

*Syntax using parallel Map/Reduce for aggregations*:
{code}
Properties props = new Properties();
props.put("aggregatioMode", "map_reduce");
props.put("numWorkers", "10");
Class.forName("org.apache.solr.client.solrj.io.sql.DriverImpl").newInstance();
Connection con = 
DriverManager.getConnection("jdbc:solr:?collection=", 
props);
Statement stmt = con.createStatement();
ResultSet rs = stmt.executeQuery("select a, sum(b) from tablex group by a 
having sum(b) > 100");
while(rs.next()) {
String a = rs.getString("a");
double sumB = rs.getDouble("sum(b)");
}
{code} 

*Syntax using JSON facet API for aggregations*:

{code}
Properties props = new Properties();
props.put("aggregationMode", "facet");
Class.forName("org.apache.solr.client.solrj.io.sql.DriverImpl").newInstance();
Connection con = 
DriverManager.getConnection("jdbc:solr:?collection=", 
props);
Statement stmt = con.createStatement();
ResultSet rs = stmt.executeQuery("select a, sum(b) from tablex group by a 
having sum(b) > 100");
while(rs.next()) {
String a = rs.getString("a");
double sumB = rs.getDouble("sum(b)");
}
{code}


 

  was:
This ticket is to create a JDBC Driver (thin client) for the new SQL interface 
(SOLR-7560). As part of this ticket a driver will be added to the Solrj library 
under the package: *org.apache.solr.client.solrj.io.sql*

Initial implementation will include basic *Driver*, *Connection*, *Statement* 
and *ResultSet* implementations.

Future releases can build on this implementation to support a wide range of 
JDBC clients and tools.

*Syntax using parallel Map/Reduce for aggregations*:
{code}
Properties props = new Properties();
props.put("aggregatioMode", "map_reduce");
props.put("numWorkers", "10");
Class.forName("org.apache.solr.client.solrj.io.sql.DriverImpl").newInstance();
Connection con = 
DriverManager.getConnection("jdbc:solr:?collection=", 
props);
Statement stmt = con.createStatement();
ResultSet rs = stmt.executeQuery("select a, sum(b) from tablex group by a 
having sum(b) > 100");
while(rs.next()) {
String a = rs.getString("a");
double sumB = rs.getString("sum(b)");
}
{code} 

*Syntax using JSON facet API for aggregations*:

{code}
Properties props = new Properties();
props.put("aggregationMode", "facet");
Class.forName("org.apache.solr.client.solrj.io.sql.DriverImpl").newInstance();
Connection con = 
DriverManager.getConnection("jdbc:solr:?collection=", 
props);
Statement stmt = con.createStatement();
ResultSet rs = stmt.executeQuery("select a, sum(b) from tablex group by a 
having sum(b) > 100");
while(rs.next()) {
String a = rs.getString("a");
double sumB = rs.getString("sum(b)");
}
{code}


 


> JDBC Driver for SQL Interface
> -
>
> Key: SOLR-7986
> URL: https://issues.apache.org/jira/browse/SOLR-7986
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java
>Affects Versions: Trunk
>Reporter: Joel Bernstein
> Attachments: SOLR-7986.patch, SOLR-7986.patch, SOLR-7986.patch, 
> SOLR-7986.patch
>
>
> This ticket is to create a JDBC Driver (thin client) for the new SQL 
> interface (SOLR-7560). As part of this ticket a driver will be added to the 
> Solrj library under the package: *org.apache.solr.client.solrj.io.sql*
> Initial implementation will include basic *Driver*, *Connection*, *Statement* 
> and *ResultSet* implementations.
> Future releases can build on this implementation to support a wide range of 
> JDBC clients and tools.
> *Syntax using parallel Map/Reduce for aggregations*:
> {code}
> Properties props = new Properties();
> props.put("aggregatioMode", "map_reduce");
> props.put("numWorkers", "10");
> Class.forName("org.apache.solr.client.solrj.io.sql.DriverImpl").newInstance();
> Connection con = 
> DriverManager.getConnection("jdbc:solr:?collection=",
>  props);
> Statement stmt = con.createStatement();
> ResultSet rs = stmt.executeQuery("select a, sum(b) from tablex group by a 
> having sum(b) > 100");
> while(rs.next()) {
> String a = rs.getString("a");
> double sumB = rs.getDouble("sum(b)");
> }
> {code} 
> *Syntax using JSON facet API for aggregations*:
> {code}
> Properties props = new Properties();
> props.put("aggregationMode", "facet");
> 

[JENKINS] Lucene-Solr-trunk-Solaris (64bit/jdk1.8.0) - Build # 39 - Failure!

2015-09-10 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Solaris/39/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CustomCollectionTest.test

Error Message:
Error from server at http://127.0.0.1:56198/wh/se: collection already exists: 
implicitcoll1

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:56198/wh/se: collection already exists: 
implicitcoll1
at 
__randomizedtesting.SeedInfo.seed([B89BEF63E536158A:30CFD0B94BCA7872]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:560)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:372)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:325)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1099)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1574)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1528)
at 
org.apache.solr.cloud.CustomCollectionTest.testCustomCollectionsAPI(CustomCollectionTest.java:152)
at 
org.apache.solr.cloud.CustomCollectionTest.test(CustomCollectionTest.java:95)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 

[jira] [Commented] (SOLR-8027) Reference guide instructions for converting an existing install to SolrCloud

2015-09-10 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14739234#comment-14739234
 ] 

Erick Erickson commented on SOLR-8027:
--

Hmmm.. It'd be great to have this documented!

I gave this a quick shot just to see if it'd do what I'd expect and
it's not actually that hard:

0> created the "techproducts" non-solr-cloud collection
1> shut down Solr
2> moved the entire directory "somewhere else", not in the Solr tree
to simulate, say, bringing it over from some other machine.
3> brought up ZK and pushed the configuration file up
4> started SolrCloud (nothing is in it as you'd expect)
5> created a new collection with the config from step <3> (name irrelevant)
6> shut down the cloud
7> Copied just the _contents_ of the index directory from step <0> to
the index directory created in <5>
8> restarted SolrCloud

And all was well.

I also tried just creating a new collection (1 shard) and using
MERGEINDEXES with the indexDir option which also worked. I think I
like that better, there are fewer places to mess things up,
and it doesn't require bouncing SolrCloud. The first time I tried it I
didn't manage to issue the commit, so that should be called out. Also
should call out checking that the doc count is right since if a person
gets nervous and issues the merge N times you have Nx the docs...

You'd want ADDREPLICAs once you were satisfied you'd moved the index
correctly of course. And hope that the config you pushed up was
actually OK. Perhaps something here about just moving the relevant
parts of schema.xml rather than the whole (old) config dir? Or maybe
even proofing things out on 5x first?

Of course, all this assuming you couldn't just re-index fresh ;).
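For the MERGEINDEXES route described above, a minimal SolrJ sketch follows. The host, target core name, and old index path are placeholders, and it assumes the CoreAdminRequest.mergeIndexes helper in SolrJ; the explicit commit at the end is the step that is easy to forget:

{code}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.CoreAdminRequest;

public class MergeOldIndexSketch {
  public static void main(String[] args) throws Exception {
    // Placeholders: adjust the host, the cloud core name, and the old index dir.
    String solrUrl = "http://localhost:8983/solr";
    String targetCore = "techproducts_shard1_replica1";
    String oldIndexDir = "/path/to/old/standalone/core/data/index";

    try (SolrClient client = new HttpSolrClient(solrUrl)) {
      // Merge the standalone index into the freshly created SolrCloud core...
      CoreAdminRequest.mergeIndexes(targetCore,
          new String[] { oldIndexDir }, new String[0], client);
      // ...then commit, otherwise the merged documents stay invisible.
      client.commit(targetCore);
    }
  }
}
{code}

Checking the document count on the target core afterwards guards against the "issued the merge N times" problem mentioned above.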


> Reference guide instructions for converting an existing install to SolrCloud
> 
>
> Key: SOLR-8027
> URL: https://issues.apache.org/jira/browse/SOLR-8027
> Project: Solr
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Shawn Heisey
>
> I have absolutely no idea where to begin with this, but it's a definite hole 
> in our documentation.  I'd like to have some instructions that will help 
> somebody convert a non-cloud install to SolrCloud.  Ideally they would start 
> with a typical directory structure with one or more cores and end with cores 
> named foo_shardN_replicaM.
> As far as I know, Solr doesn't actually let non-cloud cores coexist with 
> cloud cores.  I once tried to create a non-cloud core on a cloud install, and 
> couldn't do it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [jira] [Commented] (SOLR-8027) Reference guide instructions for converting an existing install to SolrCloud

2015-09-10 Thread Erick Erickson
Varun:

Thanks, somehow I thought it was just a random e-mail. Added comment to the ticket.

On Thu, Sep 10, 2015 at 12:45 AM, Varun Thacker
 wrote:
> Hi Erick,
>
> Your comment does not reflect on the Jira.
>
> I also updated the MERGEINDEXES documentation (
> https://cwiki.apache.org/confluence/display/solr/Merging+Indexes ) to
> reflect that the WAR is pre-extracted from Solr 5.3 onwards.
>
> On Thu, Sep 10, 2015 at 5:39 AM, Erick Erickson 
> wrote:
>>
>> Hmmm.. It'd be great to have this documented!
>>
>> I gave this a quick shot just to see if it'd do what I'd expect and
>> it's not actually that hard:
>>
>> 0> created the "techproducts" non-solr-cloud collection
>> 1> shut down Solr
>> 2> moved the entire directory "somewhere else", not in the Solr tree
>> to simulate, say, bringing it over from some other machine.
>> 3> brought up ZK and pushed the configuration file up
>> 4> started SolrCloud (nothing is in it as you'd expect)
>> 5> created a new collection with the config from step <3> (name
>> irrelevant)
>> 6> shut down the cloud
>> 7> Copied just the _contents_ of the index directory from step <0> to
>> the index directory created in <5>
>> 8> restarted SolrCloud
>>
>> And all was well.
>>
>> I also tried just creating a new collection (1 shard) and using
>> MERGEINDEXES with the indexDir option which also worked. I think I
>> like that a little better, there are fewer places to mess things up,
>> and it doesn't require bouncing SolrCloud. The first time I tried it I
>> didn't manage to issue the commit, so that should be called out. Also
>> should call out checking that the doc count is right since if a person
>> gets nervous and issues the merge N times you have Nx the docs...
>>
>> You'd want ADDREPLICAs once you were satisfied you'd moved the index
>> correctly of course. And hope that the config you pushed up was
>> actually OK. Perhaps something here about just moving the relevant
>> parts of schema.xml rather than the whole (old) config dir? Or maybe
>> even proofing things out on 5x first?
>>
>> Of course, all this assuming you couldn't just re-index fresh ;).
>>
>> FWIW,
>> Erick
>>
>>
>>
>> On Wed, Sep 9, 2015 at 4:31 PM, Shawn Heisey (JIRA) 
>> wrote:
>> >
>> > [
>> > https://issues.apache.org/jira/browse/SOLR-8027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14737799#comment-14737799
>> > ]
>> >
>> > Shawn Heisey commented on SOLR-8027:
>> > 
>> >
>> > I can try out some things tonight when I get home, assuming the honeydew
>> > list is not extreme.
>> >
>> >> Reference guide instructions for converting an existing install to
>> >> SolrCloud
>> >>
>> >> 
>> >>
>> >> Key: SOLR-8027
>> >> URL: https://issues.apache.org/jira/browse/SOLR-8027
>> >> Project: Solr
>> >>  Issue Type: Improvement
>> >>  Components: documentation
>> >>Reporter: Shawn Heisey
>> >>
>> >> I have absolutely no idea where to begin with this, but it's a definite
>> >> hole in our documentation.  I'd like to have some instructions that will
>> >> help somebody convert a non-cloud install to SolrCloud.  Ideally they 
>> >> would
>> >> start with a typical directory structure with one or more cores and end 
>> >> with
>> >> cores named foo_shardN_replicaM.
>> >> As far as I know, Solr doesn't actually let non-cloud cores coexist
>> >> with cloud cores.  I once tried to create a non-cloud core on a cloud
>> >> install, and couldn't do it.
>> >
>> >
>> >
>> > --
>> > This message was sent by Atlassian JIRA
>> > (v6.3.4#6332)
>> >
>> > -
>> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> > For additional commands, e-mail: dev-h...@lucene.apache.org
>> >
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>
>
>
> --
>
>
> Regards,
> Varun Thacker

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8033) useless if branch

2015-09-10 Thread songwanging (JIRA)
songwanging created SOLR-8033:
-

 Summary: useless if branch
 Key: SOLR-8033
 URL: https://issues.apache.org/jira/browse/SOLR-8033
 Project: Solr
  Issue Type: Improvement
Affects Versions: 5.1, 5.0
Reporter: songwanging
Priority: Minor


In method HdfsTransactionLog() of class HdfsTransactionLog 
(solr\core\src\java\org\apache\solr\update\HdfsTransactionLog.java)

The if branch in the following code snippet performs no action; we should either 
add code to handle this case or delete the if branch entirely.

HdfsTransactionLog(FileSystem fs, Path tlogFile, Collection<String> globalStrings,
    boolean openExisting) {
  ...
  try {
    if (debug) {
      //log.debug("New TransactionLog file=" + tlogFile + ", exists=" +
      //    tlogFile.exists() + ", size=" + tlogFile.length() + ", openExisting=" +
      //    openExisting);
    }
  ...
}
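A minimal sketch of the two obvious resolutions is below. It assumes the class keeps its usual static SLF4J "log" field, that the snippet sits inside the constructor's existing try block (so IOExceptions are already handled), and that the original line was commented out because a Hadoop Path has no exists()/length() methods; all of these are assumptions, not verified against the current source.

{code}
// Option 1: make the branch useful again by going through the FileSystem handle.
if (debug) {
  boolean exists = fs.exists(tlogFile);
  long size = exists ? fs.getFileStatus(tlogFile).getLen() : 0L;
  log.debug("New TransactionLog file={}, exists={}, size={}, openExisting={}",
      tlogFile, exists, size, openExisting);
}

// Option 2: if the logging is not wanted, delete the empty if (debug) block
// (and the commented-out line) so readers are not left guessing its purpose.
{code}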




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7986) JDBC Driver for SQL Interface

2015-09-10 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-7986:
-
Description: 
This ticket is to create a JDBC Driver (thin client) for the new SQL interface 
(SOLR-7560). As part of this ticket a driver will be added to the Solrj library 
under the package: *org.apache.solr.client.solrj.io.sql*

Initial implementation will include basic *Driver*, *Connection*, *Statement* 
and *ResultSet* implementations.

Future releases can build on this implementation to support a wide range of 
JDBC clients and tools.

*Syntax using parallel Map/Reduce for aggregations*:
{code}
Properties props = new Properties();
props.put("aggregationMode", "map_reduce");
props.put("numWorkers", "10");
Class.forName("org.apache.solr.client.solrj.io.sql.DriverImpl").newInstance();
Connection con = 
DriverManager.getConnection("jdbc:solr:?collection=", 
props);
Statement stmt = con.createStatement();
ResultSet rs = stmt.executeQuery("select a, sum(b) from tablex group by a 
having sum(b) > 100");
while(rs.next()) {
String a = rs.getString("a");
double sumB = rs.getDouble("sum(b)");
}
{code} 

*Syntax using JSON facet API for aggregations*:

{code}
Properties props = new Properties();
props.put("aggregationMode", "facet");
Class.forName("org.apache.solr.client.solrj.io.sql.DriverImpl").newInstance();
Connection con = 
DriverManager.getConnection("jdbc:solr:?collection=", 
props);
Statement stmt = con.createStatement();
ResultSet rs = stmt.executeQuery("select a, sum(b) from tablex group by a 
having sum(b) > 100");
while(rs.next()) {
String a = rs.getString("a");
double sumB = rs.getDouble("sum(b)");
}
{code}


 

  was:
This ticket is to create a JDBC Driver (thin client) for the new SQL interface 
(SOLR-7560). As part of this ticket a driver will be added to the Solrj library 
under the package: *org.apache.solr.client.solrj.io.sql*

Initial implementation will include basic *Driver*, *Connection*, *Statement* 
and *ResultSet* implementations.

Future releases can build on this implementation to support a wide range of 
JDBC clients and tools.

*Syntax using parallel Map/Reduce for aggregations*:
{code}
Properties props = new Properties();
props.put("aggregatioMode", "map_reduce");
props.put("numWorkers", "10");
Class.forName("org.apache.solr.client.solrj.io.sql.DriverImpl").newInstance();
Connection con = 
DriverManager.getConnection("jdbc:solr:?collection=", 
props);
Statement stmt = con.createStatement();
ResultSet rs = stmt.executeQuery("select a, sum(b) from tablex group by a 
having sum(b) > 100");
while(rs.next()) {
String a = rs.getString("a");
double sumB = rs.getDouble("sum(b)");
}
{code} 

*Syntax using JSON facet API for aggregations*:

{code}
Properties props = new Properties();
props.put("aggregationMode", "facet");
Class.forName("org.apache.solr.client.solrj.io.sql.DriverImpl").newInstance();
Connection con = 
DriverManager.getConnection("jdbc:solr:?collection=", 
props);
Statement stmt = con.createStatement();
ResultSet rs = stmt.executeQuery("select a, sum(b) from tablex group by a 
having sum(b) > 100");
while(rs.next()) {
String a = rs.getString("a");
double sumB = rs.getDouble("sum(b)");
}
{code}


 


> JDBC Driver for SQL Interface
> -
>
> Key: SOLR-7986
> URL: https://issues.apache.org/jira/browse/SOLR-7986
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java
>Affects Versions: Trunk
>Reporter: Joel Bernstein
> Attachments: SOLR-7986.patch, SOLR-7986.patch, SOLR-7986.patch, 
> SOLR-7986.patch, SOLR-7986.patch
>
>
> This ticket is to create a JDBC Driver (thin client) for the new SQL 
> interface (SOLR-7560). As part of this ticket a driver will be added to the 
> Solrj library under the package: *org.apache.solr.client.solrj.io.sql*
> Initial implementation will include basic *Driver*, *Connection*, *Statement* 
> and *ResultSet* implementations.
> Future releases can build on this implementation to support a wide range of 
> JDBC clients and tools.
> *Syntax using parallel Map/Reduce for aggregations*:
> {code}
> Properties props = new Properties();
> props.put("aggregationMode", "map_reduce");
> props.put("numWorkers", "10");
> Class.forName("org.apache.solr.client.solrj.io.sql.DriverImpl").newInstance();
> Connection con = 
> DriverManager.getConnection("jdbc:solr:?collection=",
>  props);
> Statement stmt = con.createStatement();
> ResultSet rs = stmt.executeQuery("select a, sum(b) from tablex group by a 
> having sum(b) > 100");
> while(rs.next()) {
> String a = rs.getString("a");
> double sumB = rs.getDouble("sum(b)");
> }
> {code} 
> *Syntax using JSON facet API for aggregations*:
> {code}
> Properties props = new Properties();
> props.put("aggregationMode", "facet");
> 

[jira] [Commented] (LUCENE-6777) Switch GeoPointTermsEnum range list to use a reusable BytesRef

2015-09-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14739340#comment-14739340
 ] 

ASF subversion and git services commented on LUCENE-6777:
-

Commit 1702307 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1702307 ]

LUCENE-6777: reuse BytesRef when visiting term ranges in GeoPointTermsEnum

> Switch GeoPointTermsEnum range list to use a reusable BytesRef 
> ---
>
> Key: LUCENE-6777
> URL: https://issues.apache.org/jira/browse/LUCENE-6777
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Nicholas Knize
> Attachments: LUCENE-6777.patch, LUCENE-6777.patch, LUCENE-6777.patch, 
> LUCENE-6777.patch
>
>
> GeoPointTermsEnum currently constructs a BytesRef for every computed range, 
> then sorts on this BytesRef.  This adds an unnecessary memory overhead since 
> the TermsEnum only requires BytesRef on calls to nextSeekTerm and accept and 
> the ranges only need to be sorted by their long representation. This issue 
> adds the following two improvements:
> 1. Lazily compute the BytesRef on demand only when it's needed
> 2. Add a single, transient BytesRef to GeoPointTermsEnum
> This will further cut back on heap usage when constructing ranges across 
> every segment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 953 - Still Failing

2015-09-10 Thread Robert Muir
I am happy to submit improvements to RR for this as well. It
doesn't matter, it's just a matter of time. I was blocked from working
on it before by the insanity known as maven
(https://github.com/randomizedtesting/randomizedtesting/issues/199)
but now I can get past the issues.
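To make the suggestion in the quoted messages below concrete, the field-walking step could trap the security exception and name the class it failed to measure. This is a stand-alone sketch, not the actual RamUsageEstimator code, and the message wording is a guess at the placeholder that the archive stripped from Uwe's mail:

{code}
import java.lang.reflect.Field;

// Sketch only: shows catching the SecurityException (AccessControlException is a
// subclass) and reporting which class could not be measured, instead of letting
// the raw access-denied error bubble up as a confusing test failure.
final class SizeOfSketch {
  /** Counts declared fields as a stand-in for the real per-field size estimate. */
  static long declaredFieldCount(Class<?> clazz) {
    try {
      Field[] fields = clazz.getDeclaredFields(); // may be vetoed by the security manager
      return fields.length;
    } catch (SecurityException e) {
      throw new RuntimeException(
          "Class leaks a static instance of " + clazz.getName() + " with unknown size.", e);
    }
  }
}
{code}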

On Thu, Sep 10, 2015 at 2:53 PM, Dawid Weiss  wrote:
> I told Robert in a private conversation that I think the RUE rule
> should be copied to Lucene and tweaked here (where the security
> manager is present for tests and where there's so much testing against
> new JVMs). I'll gladly port the changes back to the RR project; it's
> just for convenience that I think we should have a Lucene copy.
>
> Dawid
>
> On Thu, Sep 10, 2015 at 3:32 PM, Uwe Schindler  wrote:
>> Yes,
>>
>> I think because the error message is very confusing, maybe RamUsageEstimator 
>> should catch this exception and then complain with "Class leaks a static 
>> instance of  with unknown size."
>> This would make it easier for developers to figure out what's wrong.
>>
>> Uwe
>>
>> -
>> Uwe Schindler
>> H.-H.-Meier-Allee 63, D-28213 Bremen
>> http://www.thetaphi.de
>> eMail: u...@thetaphi.de
>>
>>
>>> -Original Message-
>>> From: Dawid Weiss [mailto:dawid.we...@gmail.com]
>>> Sent: Thursday, September 10, 2015 12:00 PM
>>> To: dev@lucene.apache.org
>>> Subject: Re: [JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 953 - Still 
>>> Failing
>>>
>>> RamUsageEstimator tries to measure something that it doesn't have access
>>> to, huh?
>>>
>>> java.security.AccessControlException: access denied
>>> ("java.lang.RuntimePermission" "accessClassInPackage.sun.nio.ch")
>>> at __randomizedtesting.SeedInfo.seed([4146977D8265D175]:0)
>>> at
>>> java.security.AccessControlContext.checkPermission(AccessControlContext.j
>>> ava:372)
>>> at
>>> java.security.AccessController.checkPermission(AccessController.java:559)
>>> at
>>> java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
>>> at
>>> java.lang.SecurityManager.checkPackageAccess(SecurityManager.java:1525)
>>> at java.lang.Class.checkPackageAccess(Class.java:2309)
>>> at java.lang.Class.checkMemberAccess(Class.java:2289)
>>> at java.lang.Class.getDeclaredFields(Class.java:1810)
>>> at
>>> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.createCache
>>> Entry(RamUsageEstimator.java:573)
>>>
>>> On Thu, Sep 10, 2015 at 11:49 AM, Apache Jenkins Server
>>>  wrote:
>>> > Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/953/
>>> >
>>> > 1 tests failed.
>>> > FAILED:
>>> > junit.framework.TestSuite.org.apache.lucene.index.IndexSortingTest
>>> >
>>> > Error Message:
>>> > access denied ("java.lang.RuntimePermission"
>>> > "accessClassInPackage.sun.nio.ch")
>>> >
>>> > Stack Trace:
>>> > java.security.AccessControlException: access denied
>>> ("java.lang.RuntimePermission" "accessClassInPackage.sun.nio.ch")
>>> > at __randomizedtesting.SeedInfo.seed([4146977D8265D175]:0)
>>> > at
>>> java.security.AccessControlContext.checkPermission(AccessControlContext.j
>>> ava:372)
>>> > at
>>> java.security.AccessController.checkPermission(AccessController.java:559)
>>> > at
>>> java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
>>> > at
>>> java.lang.SecurityManager.checkPackageAccess(SecurityManager.java:1525)
>>> > at java.lang.Class.checkPackageAccess(Class.java:2309)
>>> > at java.lang.Class.checkMemberAccess(Class.java:2289)
>>> > at java.lang.Class.getDeclaredFields(Class.java:1810)
>>> > at
>>> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.createCache
>>> Entry(RamUsageEstimator.java:573)
>>> > at
>>> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.measureSize
>>> Of(RamUsageEstimator.java:537)
>>> > at
>>> com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.sizeOfAll(Ra
>>> mUsageEstimator.java:385)
>>> > at
>>> com.carrotsearch.randomizedtesting.rules.StaticFieldsInvariantRule$1.afterA
>>> lways(StaticFieldsInvariantRule.java:108)
>>> > at
>>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(Stat
>>> ementAdapter.java:43)
>>> > at
>>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(Stat
>>> ementAdapter.java:36)
>>> > at
>>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(Stat
>>> ementAdapter.java:36)
>>> > at
>>> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAss
>>> ertionsRequired.java:54)
>>> > at
>>> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure
>>> .java:48)
>>> > at
>>> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRule
>>> IgnoreAfterMaxFailures.java:65)
>>> > at
>>> 

[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2015-09-10 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14739381#comment-14739381
 ] 

Upayavira commented on SOLR-8029:
-

You are suggesting we write a new RESTful API, then suggesting something that 
isn't RESTful. It doesn't make sense to me. A collection doesn't have a 
property called a 'query'. If you said /<collection>/index?q=*:*, that might 
make more sense, because we are querying a collection's index, but a query is 
more of an action than a resource.

> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of the HTTP 
> protocol. Trying to make incremental changes to modernize them is like 
> applying a band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to, and they 
> will eventually be deprecated.
> There are 3 types of requests in the new API: 
> * {{/solr2/<collection>/*}} : Operations on specific collections 
> * {{/solr2/_cluster/*}} : Cluster-wide operations which are not specific to 
> any collection. 
> * {{/solr2/_node/*}} : Operations on the node receiving the request. This is 
> the counterpart of the core admin API.
> This will be released as part of a major release. Check the link given below 
> for the full specification. Your comments are welcome.
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6797) Geo3d circle construction could benefit from its own factory

2015-09-10 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14739644#comment-14739644
 ] 

Michael McCandless commented on LUCENE-6797:


Thanks [~daddywri], I agree this is a rote refactor ... I'll commit shortly.

> Geo3d circle construction could benefit from its own factory
> 
>
> Key: LUCENE-6797
> URL: https://issues.apache.org/jira/browse/LUCENE-6797
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial
>Reporter: Karl Wright
> Attachments: LUCENE-6797.patch
>
>
> GeoCircles need special handling for whole-world situations and for single 
> point situations.  It would be better to have a factory that constructed 
> appropriate instance types based on the parameters than try to fold 
> everything into one class.
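A rough sketch of the kind of factory being proposed follows; all of the type names are placeholders for whatever geo3d classes the patch actually introduces, and the whole-world and point thresholds are illustrative only:

{code}
// Illustrative factory sketch; the shape classes are placeholders, not geo3d API.
interface CircleSketch {}

final class WholeWorldSketch implements CircleSketch {}

final class DegeneratePointSketch implements CircleSketch {
  final double lat, lon;
  DegeneratePointSketch(double lat, double lon) { this.lat = lat; this.lon = lon; }
}

final class StandardCircleSketch implements CircleSketch {
  final double lat, lon, radius;
  StandardCircleSketch(double lat, double lon, double radius) {
    this.lat = lat; this.lon = lon; this.radius = radius;
  }
}

final class CircleFactorySketch {
  /** Pick a specialized implementation instead of folding every case into one class. */
  static CircleSketch makeCircle(double lat, double lon, double radiusRadians) {
    if (radiusRadians >= Math.PI) {
      return new WholeWorldSketch();              // a radius of pi or more covers the sphere
    }
    if (radiusRadians == 0.0) {
      return new DegeneratePointSketch(lat, lon); // zero radius degenerates to a point
    }
    return new StandardCircleSketch(lat, lon, radiusRadians);
  }
}
{code}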



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8035) Move solr/webapp to solr/server/solr-webapp

2015-09-10 Thread Erik Hatcher (JIRA)
Erik Hatcher created SOLR-8035:
--

 Summary: Move solr/webapp to solr/server/solr-webapp
 Key: SOLR-8035
 URL: https://issues.apache.org/jira/browse/SOLR-8035
 Project: Solr
  Issue Type: Bug
  Components: UI
Reporter: Erik Hatcher
Assignee: Erik Hatcher
Priority: Critical
 Fix For: Trunk, 5.4


Let's move the solr/webapp *source* files to their actual distro destination. 
This facilitates folks editing the UI in real time (save a change, refresh in 
the browser) rather than having to "stop, ant server, restart" to see a change.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


