[jira] [Commented] (SOLR-5103) Plugin Improvements

2015-08-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-5103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14702902#comment-14702902
 ] 

Jan Høydahl commented on SOLR-5103:
---

The issue description is quite descriptive, isn't it? We're on-topic here as far 
as I can tell:
{quote}
I think for 5.0, we should make it easier to add plugins by defining a plugin 
package, ala a Hadoop Job jar, which is a self-contained archive of a plugin 
that can be easily installed (even from the UI!) and configured 
programmatically.
{quote}
Solr's plugins today are really just Java classes that happen to implement a 
certain interface that we have defined as a plugin. It is then up to the 
user to get hold of the plugin from somewhere, find a way to place it on the 
classpath, register the full class name in the appropriate config file (which 
varies depending on what the plugin does), restart Solr and then start using it.

This JIRA is trying to define a broader plugin concept, where the Java 
class is just one part of it, where dependency jars and configuration can be 
packaged with the plugin, and where the whole fetch-install-configure complexity 
is hidden and can be done with the click of a GUI button or by running one shell 
command.

Besides - how many Solr plugins do you know of today in the wild? How do 
you find them? Most are just patches in JIRA, others have their own installers, 
yet others include some description of how to copy things into various places, 
edit some XML files etc. The 3rd-party market for cool new features will 
probably take off once we can offer such a simplified plugin architecture. And 
we won't force anyone; you can still plug classes in the manual way if you like.

 Plugin Improvements
 ---

 Key: SOLR-5103
 URL: https://issues.apache.org/jira/browse/SOLR-5103
 Project: Solr
  Issue Type: Improvement
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
 Fix For: Trunk


 I think for 5.0, we should make it easier to add plugins by defining a plugin 
 package, ala a Hadoop Job jar, which is a self-contained archive of a plugin 
 that can be easily installed (even from the UI!) and configured 
 programmatically.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5103) Plugin Improvements

2015-08-19 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14702947#comment-14702947
 ] 

Alexandre Rafalovitch commented on SOLR-5103:
-

I think there's a little more to a plugin than upload mechanics. There is a need 
for a meta-data convention, dependency handling, versioning, etc. Supplementary 
UI.

So, sorting out the upload mechanics could be a good first step, but the bigger 
picture needs to be kept in mind as well.

BTW, the Velocity loader apparently can load pages from the classpath. So, it 
might be possible to bundle pages with the plugin somehow already, if the 
classpath is set up right.




[jira] [Commented] (SOLR-5103) Plugin Improvements

2015-08-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-5103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14702985#comment-14702985
 ] 

Jan Høydahl commented on SOLR-5103:
---

bq. I think there's a little more to a plugin than upload mechanics. There is a 
need for a meta-data convention, dependency handling, versioning, etc. 
Supplementary UI.

Totally agree. But we need to start at one end. Perhaps by defining the minimum 
zip/jar layout for what constitutes a plugin. What if we choose our own file 
extension, e.g. {{.solrplugin}}, which is really just a zip file. Then a possible 
layout could be
{noformat}
my-plugin.solrplugin
/solrplugin.properties -- defining e.g. name, version, dependencies etc
/lib/  (jars to be added to classpath)
/config/config-api-commands.json  (a series of commands to be run against the 
config API)
/config/schema-api-commands.json  (a series of commands to be run against the 
schema API)
{noformat}
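To make the proposal concrete, here is a minimal sketch of what installer-side validation of such a package could look like. This is illustrative Python, not Solr code; the layout, the {{solrplugin.properties}} keys and the function names are all hypothetical, taken from the proposal above rather than from any existing format:

```python
import io
import zipfile

REQUIRED = {"solrplugin.properties"}  # hypothetical mandatory descriptor

def validate_solrplugin(data: bytes) -> dict:
    """Check that a .solrplugin archive (really a zip) has the proposed
    layout and parse its properties file. Raises ValueError if invalid."""
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        names = set(zf.namelist())
        missing = REQUIRED - names
        if missing:
            raise ValueError("missing entries: %s" % sorted(missing))
        # Parse simple key=value lines from the descriptor.
        props = {}
        for line in zf.read("solrplugin.properties").decode("utf-8").splitlines():
            line = line.strip()
            if line and not line.startswith("#"):
                key, _, value = line.partition("=")
                props[key.strip()] = value.strip()
        for key in ("name", "version"):
            if key not in props:
                raise ValueError("solrplugin.properties lacks '%s'" % key)
        # Collect the jars that would go on the plugin classpath.
        jars = [n for n in zf.namelist()
                if n.startswith("lib/") and n.endswith(".jar")]
        return {"props": props, "jars": jars}

# Build a tiny in-memory example package and validate it.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("solrplugin.properties", "name=my-plugin\nversion=1.0\n")
    zf.writestr("lib/my-plugin.jar", b"\x50\x4b")  # placeholder bytes
    zf.writestr("config/config-api-commands.json", "[]")
info = validate_solrplugin(buf.getvalue())
print(info["props"]["name"], info["jars"])
```

A real installer would additionally verify the config/schema command files and dependency declarations before touching the running system.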

Then, over time, we can evolve the spec and add support for pluggable UI etc.

There are tons of questions to address too:
* Size limitation in blob store?
* Should Solr unpack all libs from the {{.solrplugin}} package and add them 
individually to the blob store, or write a classloader that adds everything 
directly from the zip?
* What about version clashes between dependencies? They should not jeopardize 
the rest of the system
* Should all plugins be system-level, and then require a new 
{{/collection/config install-plugin}} command to enable them for each 
collection?
* What about system-level plugins, such as Authentication and Authorization 
plugins? Should {{security.json}} be auto-updated when installing an auth 
plugin, or only if it does not exist already?
* There should be a way to install plugins without registering the component 
with the APIs, e.g. {{bin/solr installplugin solrcell -noregister}}
* Uninstalling a plugin - it should also be able to unregister things from 
config / schema, for all collections where it is enabled (scary)

The first step is to see if there is enough committer interest in going down 
this path, then split things up into many smaller tasks that can be handled 
separately.




[jira] [Comment Edited] (SOLR-5103) Plugin Improvements

2015-08-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-5103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14702985#comment-14702985
 ] 

Jan Høydahl edited comment on SOLR-5103 at 8/19/15 1:26 PM:


bq. I think there's a little more to a plugin than upload mechanics. There is a 
need for a meta-data convention, dependency handling, versioning, etc. 
Supplementary UI.

Totally agree. But we need to start at one end. Perhaps by defining the minimum 
zip/jar layout for what constitutes a plugin. What if we choose our own file 
extension, e.g. {{.solrplugin}}, which is really just a zip file. Then a possible 
layout could be
{noformat}
my-plugin.solrplugin
/solrplugin.properties -- defining e.g. name, version, dependencies etc
/lib/  (jars to be added to classpath)
/config/config-api-commands.json  (a series of commands to be run against the 
config API)
/config/schema-api-commands.json  (a series of commands to be run against the 
schema API)
{noformat}

Then, over time, we can evolve the spec and add support for pluggable UI etc.

There are tons of questions to address too:
* Size limitation in blob store?
* Should Solr unpack all libs from the {{.solrplugin}} package and add them 
individually to the blob store, or write a classloader that adds everything 
directly from the zip?
* What about version clashes between dependencies? They should not jeopardize 
the rest of the system
* Should a plugin be allowed to depend on another plugin, i.e. fail install 
unless the other plugin is installed?
* Should all plugins be system-level, and then require a new config-API 
{{/collection/config install-plugin}} command to enable them for each 
collection?
* What about system-level plugins, such as Authentication and Authorization 
plugins? Should {{security.json}} be auto-updated when installing an auth 
plugin, or only if it does not exist already?
* There should be a way to install plugins without registering the component 
with the APIs, e.g. {{bin/solr installplugin solrcell -noregister}}
* Uninstalling a plugin - it should also be able to unregister things from 
config / schema, for all collections where it is enabled (scary)

The first step is to see if there is enough committer interest in going down 
this path, then split things up into many smaller tasks that can be handled 
separately.





[jira] [Commented] (SOLR-5103) Plugin Improvements

2015-08-19 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703009#comment-14703009
 ] 

Noble Paul commented on SOLR-5103:
--

If someone wants to offer a plugin with dependency jars, please make a jar 
with all the dependencies included. Once you make that assumption, everything 
else becomes simpler.

Then it is still possible to write a script which runs as follows:

{noformat}
/bin/solr install-plugin xyz.jar -conf xyz.json -c collection1
{noformat}

or we can package the xyz.jar in such a way that the META-INF contains 
{{plugin.json}} and the command can be simplified to
{noformat}
/bin/solr install-plugin xyz.jar -c collection1
{noformat}
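The META-INF idea could be sketched like this. A jar is just a zip archive, so the installer can read an embedded descriptor instead of requiring a separate -conf file. This is an illustrative Python sketch, not Solr code; {{plugin.json}} and its fields are the hypothetical convention discussed here, not an existing artifact:

```python
import io
import json
import zipfile

def read_plugin_descriptor(jar_bytes: bytes) -> dict:
    """Return the parsed META-INF/plugin.json from a jar.

    Raises KeyError if the jar carries no descriptor, which an
    installer could treat as 'fall back to an external -conf file'."""
    with zipfile.ZipFile(io.BytesIO(jar_bytes)) as zf:
        return json.loads(zf.read("META-INF/plugin.json").decode("utf-8"))

# Build a minimal example jar in memory and read its descriptor back.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("META-INF/plugin.json",
                json.dumps({"name": "xyz", "class": "com.example.XyzComponent"}))
desc = read_plugin_descriptor(buf.getvalue())
print(desc["name"])
```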

bq. Size limitation in blob store?

As of now it is set to 5 MB by default. However, the user can increase it to 
any number with a command.





[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-19 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703077#comment-14703077
 ] 

Michael McCandless commented on LUCENE-6699:


OK, I committed a fix for testBKDRandom ... it was just a test bug: it failed to 
detect the SHAPE_INSIDE_CELL case.

Just need to understand the assert trips in testRandomXXX ...

 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion about how we
 could stuff all 3 into a single long value with acceptable precision
 loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
 to do the fast per-hit query time filtering.
 But, second: what do we index into the BKD tree?  Can we just index
 earth surface lat/lon, and then at query time is spatial3d able to
 give me an enclosing surface lat/lon bbox for a 3d shape?  Or
 ... must we index all 3 dimensions into the BKD tree (seems like this
 could be somewhat wasteful)?
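On the question of stuffing all three dimensions into a single long with acceptable precision loss: a back-of-the-envelope sketch (Python; purely illustrative, not Lucene code) using 21 bits per dimension shows roughly what that would cost. With 2^21 steps over 180 degrees of latitude, one step is about 0.000086 degrees, i.e. under ten meters at the surface. The value ranges assumed below (z on a unit sphere in [-1, 1]) are an assumption for the sketch:

```python
BITS = 21            # 3 * 21 = 63 bits, fits in one signed 64-bit long
SCALE = (1 << BITS) - 1

def quantize(value, lo, hi):
    """Map value in [lo, hi] onto a 21-bit integer (rounding)."""
    return round((value - lo) / (hi - lo) * SCALE)

def dequantize(q, lo, hi):
    """Inverse of quantize, up to half a quantization step of error."""
    return lo + q / SCALE * (hi - lo)

def pack(lat, lon, z):
    """Pack lat [-90,90], lon [-180,180], z [-1,1] into one 63-bit int."""
    return (quantize(lat, -90, 90) << (2 * BITS)) \
         | (quantize(lon, -180, 180) << BITS) \
         | quantize(z, -1.0, 1.0)

def unpack(packed):
    lat = dequantize((packed >> (2 * BITS)) & SCALE, -90, 90)
    lon = dequantize((packed >> BITS) & SCALE, -180, 180)
    z = dequantize(packed & SCALE, -1.0, 1.0)
    return lat, lon, z

# Round-trip error per dimension is bounded by half a quantization step.
lat, lon, z = unpack(pack(40.7128, -74.0060, 0.5))
print(lat, lon, z)
```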






[JENKINS-EA] Lucene-Solr-trunk-Linux (32bit/jdk1.9.0-ea-b60) - Build # 13918 - Still Failing!

2015-08-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/13918/
Java: 32bit/jdk1.9.0-ea-b60 -client -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 19832 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:775: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:130: Found 1 
violations in source files (tabs instead spaces).

Total time: 55 minutes 53 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[jira] [Commented] (SOLR-6760) New optimized DistributedQueue implementation for overseer

2015-08-19 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703053#comment-14703053
 ] 

Shalin Shekhar Mangar commented on SOLR-6760:
-

I saw test failures twice in a row, but I can't reproduce them on beasting.

Looking at the test itself, I can't figure out why it would fail like this:
{code}
java.util.NoSuchElementException
at 
__randomizedtesting.SeedInfo.seed([FD3C9C458FB6F5FD:9E2CC8B3009169AB]:0)
at 
org.apache.solr.cloud.DistributedQueue.remove(DistributedQueue.java:203)
at 
org.apache.solr.cloud.DistributedQueueTest.testDistributedQueue(DistributedQueueTest.java:62)
{code}

Perhaps this is related to the tests being executed in the wrong order? Maybe 
each test method should get its own ZK server, or at least a different znode 
path for the queue?

In case it helps, the seeds were:
{code}
ant test  -Dtestcase=DistributedQueueExtTest 
-Dtests.method=testDistributedQueue -Dtests.seed=F156B17A5CB6EA4 
-Dtests.slow=true -Dtests.locale=ca -Dtests.timezone=Europe/Vilnius 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

ant test  -Dtestcase=DistributedQueueTest -Dtests.method=testDistributedQueue 
-Dtests.seed=FD3C9C458FB6F5FD -Dtests.slow=true -Dtests.locale=en_AU 
-Dtests.timezone=Australia/Lindeman -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
{code}

 New optimized DistributedQueue implementation for overseer
 --

 Key: SOLR-6760
 URL: https://issues.apache.org/jira/browse/SOLR-6760
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Shalin Shekhar Mangar
 Attachments: SOLR-6760.patch, SOLR-6760.patch, deadlock.patch


 Currently the DQ works as follows:
 * read all items in the directory
 * sort them all
 * take the head, return it, and discard everything else
 * rinse and repeat
 This works well when we have only a handful of items in the queue. If the 
 number of items in the queue is much larger (tens of thousands), this is 
 counterproductive.
 As the overseer queue is a multiple-producer, single-consumer queue, we can 
 read them all in bulk, and before processing each item just do a 
 zk.exists(itemname); if all is well we don't need to do the fetch-all-and-sort 
 thing again.
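The proposed scheme - fetch and sort once, then cheaply re-check each cached item before consuming it - can be sketched like this. This is an illustrative Python model with a plain dict standing in for the ZooKeeper directory, not the actual Solr DistributedQueue code:

```python
class BulkQueueConsumer:
    """Single consumer over a many-producer 'directory' of items.

    Instead of re-listing and re-sorting the whole directory for every
    item, list + sort once, then before processing each cached item just
    check that it still exists; an exists() call is far cheaper than a
    full listing of tens of thousands of children."""

    def __init__(self, store):
        self.store = store      # stand-in for a ZK node: {name: payload}
        self.cache = []         # sorted item names from the last bulk read

    def next_item(self):
        if not self.cache:
            # One bulk fetch + sort, amortized over many consumes.
            self.cache = sorted(self.store)
        while self.cache:
            name = self.cache.pop(0)
            if name in self.store:          # the cheap zk.exists() check
                return name, self.store.pop(name)
            # else: item vanished since the bulk read; just skip it
        return None

store = {"qn-0000000002": "b", "qn-0000000001": "a", "qn-0000000003": "c"}
consumer = BulkQueueConsumer(store)
first = consumer.next_item()     # triggers the one bulk read + sort
del store["qn-0000000002"]       # simulate an item removed by another actor
rest = [consumer.next_item() for _ in range(2)]
print(first, rest)
```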






[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703064#comment-14703064
 ] 

ASF subversion and git services commented on LUCENE-6699:
-

Commit 1696591 from [~mikemccand] in branch 'dev/branches/lucene6699'
[ https://svn.apache.org/r1696591 ]

LUCENE-6699: also detect when SHAPE_INSIDE_CELL in test case




[jira] [Commented] (SOLR-5103) Plugin Improvements

2015-08-19 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14702871#comment-14702871
 ] 

Noble Paul commented on SOLR-5103:
--

I fail to see what you are trying to solve here. Please give the problem 
statement and state the proposed solution. I guess it is worth opening a new 
ticket.




[jira] [Commented] (SOLR-7932) Solr replication relies on timestamps to sync across machines

2015-08-19 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14702911#comment-14702911
 ] 

Mark Miller commented on SOLR-7932:
---

bq. you still have to deal with clock skew though

Why? Don't both times come from the master? 

bq. Do you agree that the timestamp check can be removed there?

I don't think it can just be removed in either case without better replacement 
logic.

 Solr replication relies on timestamps to sync across machines
 -

 Key: SOLR-7932
 URL: https://issues.apache.org/jira/browse/SOLR-7932
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Reporter: Ramkumar Aiyengar
 Attachments: SOLR-7932.patch


 Spinning off SOLR-7859; noticed there that wall time is recorded as commit data 
 on a commit and used to check if replication needs to be done. In IndexFetcher, 
 there is this code:
 {code}
   if (!forceReplication
   && IndexDeletionPolicyWrapper.getCommitTimestamp(commit) == latestVersion) {
 // master and slave are already in sync, just return
 LOG.info("Slave in sync with master.");
 successfulInstall = true;
 return true;
   }
 {code}
 It appears as if we are checking wall times across machines to check whether we 
 are in sync; this could go wrong.
 Once a decision is made to replicate, we do seem to use generations instead, 
 except that this place below checks both generations and timestamps to see if 
 a full copy is needed:
 {code}
   // if the generation of master is older than that of the slave, it
   // means they are not compatible to be copied;
   // then a new index directory is to be created and all the files need
   // to be copied
   boolean isFullCopyNeeded = IndexDeletionPolicyWrapper
   .getCommitTimestamp(commit) >= latestVersion
   || commit.getGeneration() >= latestGeneration || forceReplication;
 {code}






[jira] [Commented] (SOLR-5103) Plugin Improvements

2015-08-19 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14702953#comment-14702953
 ] 

Noble Paul commented on SOLR-5103:
--

We don't yet have a concept of a plugin offering a UI.





[jira] [Commented] (SOLR-5103) Plugin Improvements

2015-08-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-5103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703017#comment-14703017
 ] 

Jan Høydahl commented on SOLR-5103:
---

bq. If someone wants to offer a plugin with dependency jars, please make a jar 
with all the dependencies included.

Well, that is for us to dictate, isn't it? Who says the plugin format must 
conform to the JAR specs? Is it technically problematic to add all classes from 
multiple JARs within a ZIP to a classloader? If yes, the user-facing 
plugin format could still be a zip with multiple jars in a lib folder, and our 
plugin installer code could handle merging all the jars into one jar, 
{{plugin-xyz-merged.jar}}, which is the one registered with the classloader.
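Mechanically, merging the jars inside a plugin zip into one jar is straightforward, since both zips and jars are plain zip archives. A rough illustrative sketch in Python (not Solr code; a real implementation would also have to handle duplicate entries, META-INF signatures and service files, which is where the pain lives):

```python
import io
import zipfile

def merge_plugin_jars(plugin_zip: bytes) -> bytes:
    """Merge every lib/*.jar inside a plugin zip into one jar (zip) blob.
    First occurrence wins on duplicate entry names."""
    merged = io.BytesIO()
    seen = set()
    with zipfile.ZipFile(io.BytesIO(plugin_zip)) as outer, \
         zipfile.ZipFile(merged, "w") as out:
        for name in outer.namelist():
            if name.startswith("lib/") and name.endswith(".jar"):
                with zipfile.ZipFile(io.BytesIO(outer.read(name))) as jar:
                    for entry in jar.namelist():
                        if entry not in seen and not entry.endswith("/"):
                            seen.add(entry)
                            out.writestr(entry, jar.read(entry))
    return merged.getvalue()

# Example: a plugin zip carrying two jars whose contents get merged.
def _make_jar(entries):
    b = io.BytesIO()
    with zipfile.ZipFile(b, "w") as zf:
        for n, data in entries.items():
            zf.writestr(n, data)
    return b.getvalue()

outer = io.BytesIO()
with zipfile.ZipFile(outer, "w") as zf:
    zf.writestr("lib/a.jar", _make_jar({"com/example/A.class": b"a"}))
    zf.writestr("lib/b.jar", _make_jar({"com/example/B.class": b"b"}))
merged = merge_plugin_jars(outer.getvalue())
with zipfile.ZipFile(io.BytesIO(merged)) as zf:
    print(sorted(zf.namelist()))
```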




[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_51) - Build # 5172 - Failure!

2015-08-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5172/
Java: 32bit/jdk1.8.0_51 -server -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 20022 lines...]
BUILD FAILED
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:775: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:130: Found 
1 violations in source files (tabs instead spaces).

Total time: 78 minutes 36 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[JENKINS-EA] Lucene-Solr-trunk-Linux (32bit/jdk1.9.0-ea-b60) - Build # 13919 - Still Failing!

2015-08-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/13919/
Java: 32bit/jdk1.9.0-ea-b60 -client -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CdcrReplicationHandlerTest.doTest

Error Message:
There are still nodes recoverying - waited for 330 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 330 
seconds
at 
__randomizedtesting.SeedInfo.seed([273F2281C0739CA1:807B9A25ADC88F18]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:172)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:133)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:128)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.waitForRecoveriesToFinish(BaseCdcrDistributedZkTest.java:465)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.clearSourceCollection(BaseCdcrDistributedZkTest.java:319)
at 
org.apache.solr.cloud.CdcrReplicationHandlerTest.doTestPartialReplicationWithTruncatedTlog(CdcrReplicationHandlerTest.java:121)
at 
org.apache.solr.cloud.CdcrReplicationHandlerTest.doTest(CdcrReplicationHandlerTest.java:52)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:502)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Commented] (SOLR-5103) Plugin Improvements

2015-08-19 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703072#comment-14703072
 ] 

Alexandre Rafalovitch commented on SOLR-5103:
-

Only the classloader leakage hell! :-(

There is no standard mechanism for inlining the dependencies, though there are 
a couple of different approaches, all doing weird and less-than-wonderful 
things with classloader mechanisms. See, for example, 
http://maven.apache.org/plugins/maven-shade-plugin/ and 
http://docs.spring.io/spring-boot/docs/1.3.0.BUILD-SNAPSHOT/reference/htmlsingle/#executable-jar

Or, of course, OSGi: 
http://www.javaworld.com/article/2075836/description-of-osgi-layer.html

 Plugin Improvements
 ---

 Key: SOLR-5103
 URL: https://issues.apache.org/jira/browse/SOLR-5103
 Project: Solr
  Issue Type: Improvement
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
 Fix For: Trunk


 I think for 5.0, we should make it easier to add plugins by defining a plugin 
 package, ala a Hadoop Job jar, which is a self-contained archive of a plugin 
 that can be easily installed (even from the UI!) and configured 
 programmatically.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5103) Plugin Improvements

2015-08-19 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14702972#comment-14702972
 ] 

Alexandre Rafalovitch commented on SOLR-5103:
-

Actually, *wt=velocity&v.template=x* allows serving a Velocity template around 
the search results, unless broken by recent changes, etc. [~ehatcher] may be 
able to add details. 

And, if admin-extra was loaded from the classpath, there might be a way to wire 
it into the UI automatically.

It may not be pretty on the first round, but at least something like /browse 
might be more feasible than expected.

 Plugin Improvements
 ---

 Key: SOLR-5103
 URL: https://issues.apache.org/jira/browse/SOLR-5103
 Project: Solr
  Issue Type: Improvement
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
 Fix For: Trunk


 I think for 5.0, we should make it easier to add plugins by defining a plugin 
 package, ala a Hadoop Job jar, which is a self-contained archive of a plugin 
 that can be easily installed (even from the UI!) and configured 
 programmatically.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 3428 - Failure

2015-08-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/3428/

All tests passed

Build Log:
[...truncated 20350 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/build.xml:785: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/build.xml:130: 
Found 1 violations in source files (tabs instead spaces).

Total time: 97 minutes 2 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Sending artifact delta relative to Lucene-Solr-Tests-5.x-Java7 #3427
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 464 bytes
Compression is 0.0%
Took 0.43 sec
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Resolved] (SOLR-7945) Make server/build.xml ivy:retrieve tasks respect ivy.sync property.

2015-08-19 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-7945.
---
   Resolution: Fixed
Fix Version/s: 5.4
   Trunk

 Make server/build.xml ivy:retrieve tasks respect ivy.sync property.
 ---

 Key: SOLR-7945
 URL: https://issues.apache.org/jira/browse/SOLR-7945
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: Trunk, 5.4


 To run multiple ant targets in parallel we must disable ivy sync as it can 
 race with resolve and corrupt jar files in the .ivy2 cache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7932) Solr replication relies on timestamps to sync across machines

2015-08-19 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703067#comment-14703067
 ] 

Varun Thacker commented on SOLR-7932:
-

If I understand it correctly, we don't need the timestamp check in a 
master-slave setup: since the index on the slave comes from the master, both 
the timestamp and the generation will be the same, so just checking the 
generation will be enough, right?

In cloud mode, commits on different replicas happen at different times, so the 
timestamps would always differ. But this code path will only get invoked 
during a recovery, so we could remove the check for this use case as well, right? 
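The point above — that generation equality is the reliable in-sync signal while wall-clock equality is not — can be sketched with a small comparison. This is illustrative Python (Solr itself is Java), and the function and field names are hypothetical:

```python
# Illustrative sketch: why comparing index *generations* is safer than
# comparing commit wall times when deciding whether to skip replication.

def in_sync_by_timestamp(slave_ts, master_ts):
    # Fragile: wall clocks can collide after an NTP step-back.
    return slave_ts == master_ts

def in_sync_by_generation(slave_gen, master_gen):
    # Generations increase monotonically per commit on the master, and the
    # slave's index came from the master, so equality is a reliable signal.
    return slave_gen == master_gen

# Two distinct commits that ended up with the same wall time:
commit1 = {"ts": 1000, "gen": 5}
commit2 = {"ts": 1000, "gen": 6}   # same wall time, newer generation

print(in_sync_by_timestamp(commit1["ts"], commit2["ts"]))    # True -> wrongly skips replication
print(in_sync_by_generation(commit1["gen"], commit2["gen"]))  # False -> correctly replicates
```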

 Solr replication relies on timestamps to sync across machines
 -

 Key: SOLR-7932
 URL: https://issues.apache.org/jira/browse/SOLR-7932
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Reporter: Ramkumar Aiyengar
 Attachments: SOLR-7932.patch


 Spinning off SOLR-7859; noticed there that wall time is recorded as commit 
 data on a commit, to check if replication needs to be done. In IndexFetcher, 
 there is this code:
 {code}
   if (!forceReplication
       && IndexDeletionPolicyWrapper.getCommitTimestamp(commit) == latestVersion) {
     // master and slave are already in sync just return
     LOG.info("Slave in sync with master.");
     successfulInstall = true;
     return true;
   }
 {code}
 It appears as if we are checking wall times across machines to check if we 
 are in sync; this could go wrong.
 Once a decision is made to replicate, we do seem to use generations instead, 
 except that this place below checks both generations and timestamps to see if 
 a full copy is needed:
 {code}
   // if the generation of master is older than that of the slave, it
   // means they are not compatible to be copied
   // then a new index directory is to be created and all the files need to
   // be copied
   boolean isFullCopyNeeded = IndexDeletionPolicyWrapper
       .getCommitTimestamp(commit) >= latestVersion
       || commit.getGeneration() >= latestGeneration || forceReplication;
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5103) Plugin Improvements

2015-08-19 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14702990#comment-14702990
 ] 

Alexandre Rafalovitch commented on SOLR-5103:
-

My position is still that madness lies that way if we try to reinvent plugin 
management from scratch, and that a 3rd-party solution may make better sense 
for that, even if that solution is not perfect.

But, on the other hand, I don't know the internals of Solr well enough to know 
whether there is enough extensibility built in to allow that.

 Plugin Improvements
 ---

 Key: SOLR-5103
 URL: https://issues.apache.org/jira/browse/SOLR-5103
 Project: Solr
  Issue Type: Improvement
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
 Fix For: Trunk


 I think for 5.0, we should make it easier to add plugins by defining a plugin 
 package, ala a Hadoop Job jar, which is a self-contained archive of a plugin 
 that can be easily installed (even from the UI!) and configured 
 programmatically.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5103) Plugin Improvements

2015-08-19 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703047#comment-14703047
 ] 

Noble Paul commented on SOLR-5103:
--

bq. Well that is for us to dictate, isn't it? Who says the plugin format must 
conform to the JAR specs?

It's OK for the plugin format not to conform to the jar spec. The payload could 
be a zip file with multiple jars. But is it a big deal to merge multiple jars? 
It solves the problem of name collisions: if the same FQN of a class is found 
in two jars, which one do we load? 

bq. and our plugin installer code handles merging all jars together into one 
jar plugin-xyz-merged.jar 

Please don't. The plugin installer should do as little magic as possible.
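The FQN collision described above is mechanically detectable, since jar files are plain zip archives. A hedged sketch (Python purely for illustration; the jar and class names are made up):

```python
# Sketch: find .class entries that appear in more than one jar of a plugin
# package, i.e. the name collisions that make "which one do we load?" ambiguous.
import io
import zipfile
from collections import defaultdict

def make_jar(entries):
    """Build a tiny in-memory jar (zip) with the given entry names."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        for name in entries:
            zf.writestr(name, b"...")
    buf.seek(0)
    return buf

def find_collisions(jars):
    """Map each .class entry to the jars containing it; keep duplicates only."""
    seen = defaultdict(list)
    for jar_name, jar_bytes in jars.items():
        with zipfile.ZipFile(jar_bytes) as zf:
            for entry in zf.namelist():
                if entry.endswith(".class"):
                    seen[entry].append(jar_name)
    return {cls: owners for cls, owners in seen.items() if len(owners) > 1}

jars = {
    "plugin.jar": make_jar(["com/example/Foo.class"]),
    "dep.jar": make_jar(["com/example/Foo.class", "com/example/Bar.class"]),
}
print(find_collisions(jars))  # {'com/example/Foo.class': ['plugin.jar', 'dep.jar']}
```

An installer could run such a check and refuse ambiguous packages instead of silently merging.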

 Plugin Improvements
 ---

 Key: SOLR-5103
 URL: https://issues.apache.org/jira/browse/SOLR-5103
 Project: Solr
  Issue Type: Improvement
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
 Fix For: Trunk


 I think for 5.0, we should make it easier to add plugins by defining a plugin 
 package, ala a Hadoop Job jar, which is a self-contained archive of a plugin 
 that can be easily installed (even from the UI!) and configured 
 programmatically.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5103) Plugin Improvements

2015-08-19 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703050#comment-14703050
 ] 

Shawn Heisey commented on SOLR-5103:


bq. If someone wants to offer a plugin with dependency jars , please make a jar 
with all the dependencies included. Once you make that assumption everything 
else becomes simpler.

I worry about this idea a little bit.  Maybe I don't need to be worried.  What 
happens if a plugin project uses one of the same dependency jars as Solr, but 
packages a wildly different version than the one we ship?  Are there any 
situations where that might cause a larger problem than using a little more 
memory?


 Plugin Improvements
 ---

 Key: SOLR-5103
 URL: https://issues.apache.org/jira/browse/SOLR-5103
 Project: Solr
  Issue Type: Improvement
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
 Fix For: Trunk


 I think for 5.0, we should make it easier to add plugins by defining a plugin 
 package, ala a Hadoop Job jar, which is a self-contained archive of a plugin 
 that can be easily installed (even from the UI!) and configured 
 programmatically.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-5103) Plugin Improvements

2015-08-19 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14702925#comment-14702925
 ] 

Noble Paul edited comment on SOLR-5103 at 8/19/15 12:18 PM:


I guess I'm getting what you are trying to say

Now we need to follow the following steps to register a plugin from a jar 
(using SOLR-7073)

# upload the jar to {{.system}} collection
# add the jar to collection classpath using the {{add-runtime-lib}} command
# add the component using the {{add-component-name}} command

So you are trying to eliminate step #3 here. Is that right ?


was (Author: noble.paul):
I guess I'm getting what you are trying to say

Now we need to follow the following steps to register a plugin from a jar 
(using SOLR-7073)

# upload the jar to {{.system}} collection
# add the jar to collection classpath using the {{add-runtime-lib}} command
# add the component using the {{add-companent-name}} command

So you are trying to eliminate step #3 here. Is that right ?

 Plugin Improvements
 ---

 Key: SOLR-5103
 URL: https://issues.apache.org/jira/browse/SOLR-5103
 Project: Solr
  Issue Type: Improvement
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
 Fix For: Trunk


 I think for 5.0, we should make it easier to add plugins by defining a plugin 
 package, ala a Hadoop Job jar, which is a self-contained archive of a plugin 
 that can be easily installed (even from the UI!) and configured 
 programmatically.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7945) Make server/build.xml ivy:retrieve tasks respect ivy.sync property.

2015-08-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14702867#comment-14702867
 ] 

ASF subversion and git services commented on SOLR-7945:
---

Commit 1696560 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1696560 ]

SOLR-7945: Make server/build.xml ivy:retrieve tasks respect ivy.sync property.

 Make server/build.xml ivy:retrieve tasks respect ivy.sync property.
 ---

 Key: SOLR-7945
 URL: https://issues.apache.org/jira/browse/SOLR-7945
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor

 To run multiple ant targets in parallel we must disable ivy sync as it can 
 race with resolve and corrupt jar files in the .ivy2 cache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-5103) Plugin Improvements

2015-08-19 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14702925#comment-14702925
 ] 

Noble Paul edited comment on SOLR-5103 at 8/19/15 12:17 PM:


I guess I'm getting what you are trying to say

Now we need to follow the following steps to register a plugin from a jar 
(using SOLR-7073)

# upload the jar to {{.system}} collection
# add the jar to collection classpath using the {{add-runtime-lib}} command
# add the component using the {{add-companent-name}} command

So you are trying to eliminate step #3 here. Is that right ?


was (Author: noble.paul):
I guess I'm getting what you are trying to say

Now we need to follow the following steps to register a plugin from a jar 
(using SOLR-7073)

# upload the jar to {{.system}} collection
# add the jar to collection classpath using the {{add-runtime-lib}} command
# add the component using the {{add-companent-naam}} command

So you are trying to eliminate step #3 here. Is that right ?

 Plugin Improvements
 ---

 Key: SOLR-5103
 URL: https://issues.apache.org/jira/browse/SOLR-5103
 Project: Solr
  Issue Type: Improvement
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
 Fix For: Trunk


 I think for 5.0, we should make it easier to add plugins by defining a plugin 
 package, ala a Hadoop Job jar, which is a self-contained archive of a plugin 
 that can be easily installed (even from the UI!) and configured 
 programmatically.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7932) Solr replication relies on timestamps to sync across machines

2015-08-19 Thread Ramkumar Aiyengar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14702927#comment-14702927
 ] 

Ramkumar Aiyengar commented on SOLR-7932:
-

bq. Why? Don't both times come from the master?

A clock skew could cause two different commits to have the same time (commit 1 
happens at time X, NTP sets the clock back by 200ms, and 200ms later commit 2 
happens). It's not exactly what's in the title (i.e. relying on timestamps 
across machines), and you have to be a lot more unlucky, but you can't rely on 
wall time even on the same machine.

bq. I don't think it can just be removed in either case without better 
replacement logic.

How does the timestamp help currently in the first case? We are using 
generations immediately afterwards anyway, so wouldn't you be better off 
comparing generations instead to check whether replication can be skipped?
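The NTP step-back scenario described above can be simulated directly. This is an illustrative Python model, not Solr code; the clock class and timings are invented for the example:

```python
class SteppedClock:
    """Models a wall clock that NTP steps back by some amount mid-run."""
    def __init__(self):
        self.t = 0
    def advance(self, ms):
        self.t += ms
    def step_back(self, ms):
        self.t -= ms
    def now(self):
        return self.t

clock = SteppedClock()
clock.advance(1000)
ts_commit1 = clock.now()   # commit 1 happens at t = 1000
clock.step_back(200)       # NTP corrects the clock back by 200ms
clock.advance(200)         # 200ms of real time passes
ts_commit2 = clock.now()   # commit 2 happens

# Two distinct commits, identical wall-clock timestamps:
assert ts_commit1 == ts_commit2
```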


 Solr replication relies on timestamps to sync across machines
 -

 Key: SOLR-7932
 URL: https://issues.apache.org/jira/browse/SOLR-7932
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Reporter: Ramkumar Aiyengar
 Attachments: SOLR-7932.patch


 Spinning off SOLR-7859; noticed there that wall time is recorded as commit 
 data on a commit, to check if replication needs to be done. In IndexFetcher, 
 there is this code:
 {code}
   if (!forceReplication
       && IndexDeletionPolicyWrapper.getCommitTimestamp(commit) == latestVersion) {
     // master and slave are already in sync just return
     LOG.info("Slave in sync with master.");
     successfulInstall = true;
     return true;
   }
 {code}
 It appears as if we are checking wall times across machines to check if we 
 are in sync; this could go wrong.
 Once a decision is made to replicate, we do seem to use generations instead, 
 except that this place below checks both generations and timestamps to see if 
 a full copy is needed:
 {code}
   // if the generation of master is older than that of the slave, it
   // means they are not compatible to be copied
   // then a new index directory is to be created and all the files need to
   // be copied
   boolean isFullCopyNeeded = IndexDeletionPolicyWrapper
       .getCommitTimestamp(commit) >= latestVersion
       || commit.getGeneration() >= latestGeneration || forceReplication;
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5103) Plugin Improvements

2015-08-19 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14702925#comment-14702925
 ] 

Noble Paul commented on SOLR-5103:
--

I guess I'm getting what you are trying to say

Now we need to follow the following steps to register a plugin from a jar 
(using SOLR-7073)

# upload the jar to {{.system}} collection
# add the jar to collection classpath using the {{add-runtime-lib}} command
# add the component using the {{add-companent-naam}} command

So you are trying to eliminate step #3 here. Is that right ?

 Plugin Improvements
 ---

 Key: SOLR-5103
 URL: https://issues.apache.org/jira/browse/SOLR-5103
 Project: Solr
  Issue Type: Improvement
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
 Fix For: Trunk


 I think for 5.0, we should make it easier to add plugins by defining a plugin 
 package, ala a Hadoop Job jar, which is a self-contained archive of a plugin 
 that can be easily installed (even from the UI!) and configured 
 programmatically.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5103) Plugin Improvements

2015-08-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-5103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14702955#comment-14702955
 ] 

Jan Høydahl commented on SOLR-5103:
---

bq. So you are trying to eliminate step #3 here. Is that right ?
Not primarily. Your list is oversimplified. Today:
# Realize that there is a plugin for what you want to do
# Locate and download that plugin
# Read the docs and find and download any dependencies of the plugin
#* Example: dist/solr-cell-5.0.0.jar is the plugin, and you also have 34 
dependency jars from contrib/lib/
# For each jar required (35 in total):
#* {{curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
@filename.jar http://localhost:8983/solr/.system/blob/name }}
# For each collection in the system (that needs the plugin):
#* For each jar that belongs to the plugin:
#** Put each jar on the classpath for the collection
#** {code}
curl http://localhost:8983/solr/collection1/config -H 
'Content-type:application/json' -d '{
"add-runtimelib": { "name":"jarname", "version":2 },
"update-runtimelib": { "name":"jarname", "version":3 },
"delete-runtimelib": "jarname"
}' 
{code}
#** Register the plugin into config or schema or zookeeper or solr.xml 
depending on the type, e.g.
#** {code}
{"add-searchcomponent": {
   "name": "elevator",
   "class": "QueryElevationComponent",
   "queryFieldType": "string",
   "config-file": "elevate.xml"
}}
{code}

Not to mention when you want to upgrade the plugin to a newer version, or 
uninstall it...

Now compare this to a click in the Admin UI or:

{noformat}
bin/solr installplugin solrcell 5.2.1
bin/solr removeplugin solrcell
bin/solr installplugin solrcell 5.3.0
{noformat}
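The manual per-jar upload step in the list above could at least be scripted. A hypothetical sketch (Python for illustration; the URL shape follows the curl example, the jar names are made up, and this only builds the request plan without sending anything):

```python
# Dry-run sketch: one POST per jar to the .system blob store, as in the
# curl example above. Nothing is actually sent over the network.

BLOB_URL = "http://localhost:8983/solr/.system/blob/{name}"

def upload_plan(jar_names):
    """Return the (method, url) pairs a real uploader would execute."""
    return [("POST", BLOB_URL.format(name=j.rsplit(".jar", 1)[0]))
            for j in jar_names]

# Hypothetical Solr Cell example: the plugin jar plus 34 dependency jars.
jars = ["solr-cell-5.0.0.jar"] + [f"dep-{i}.jar" for i in range(34)]
plan = upload_plan(jars)
print(len(plan))  # 35 uploads, one per jar
```

Even scripted, this is 35 requests per install, which is exactly the complexity a one-command `installplugin` would hide.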

 Plugin Improvements
 ---

 Key: SOLR-5103
 URL: https://issues.apache.org/jira/browse/SOLR-5103
 Project: Solr
  Issue Type: Improvement
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
 Fix For: Trunk


 I think for 5.0, we should make it easier to add plugins by defining a plugin 
 package, ala a Hadoop Job jar, which is a self-contained archive of a plugin 
 that can be easily installed (even from the UI!) and configured 
 programmatically.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-19 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703056#comment-14703056
 ] 

Michael McCandless commented on LUCENE-6699:


I'm seeing failures in both testBKDRandom (proper BKD decomposition), from the 
new assert I added, and in testRandomMedium, where hits were expected but not 
found ... I'll dig on the first one, seems maybe easier :)

 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion about how we
 could stuff all 3 into a single long value with acceptable precision
 loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
 to do the fast per-hit query time filtering.
 But, second: what do we index into the BKD tree?  Can we just index
 earth surface lat/lon, and then at query time is spatial3d able to
 give me an enclosing surface lat/lon bbox for a 3d shape?  Or
 ... must we index all 3 dimensions into the BKD tree (seems like this
 could be somewhat wasteful)?
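The "stuff all 3 into a single long with acceptable precision loss" idea mentioned above can be sketched as follows. Illustrative Python only: the 21-bit widths and coordinate ranges are assumptions for the example, not Lucene's actual encoding:

```python
# Pack lat/lon/z into one 63-bit value: quantize each dimension to 21 bits.
BITS = 21
SCALE = (1 << BITS) - 1

def quantize(v, lo, hi):
    return round((v - lo) / (hi - lo) * SCALE)

def dequantize(q, lo, hi):
    return lo + q / SCALE * (hi - lo)

def pack(lat, lon, z):
    a = quantize(lat, -90.0, 90.0)
    b = quantize(lon, -180.0, 180.0)
    c = quantize(z, -1.0, 1.0)       # assumed unit-sphere z range
    return (a << (2 * BITS)) | (b << BITS) | c

def unpack(p):
    a = (p >> (2 * BITS)) & SCALE
    b = (p >> BITS) & SCALE
    c = p & SCALE
    return (dequantize(a, -90.0, 90.0),
            dequantize(b, -180.0, 180.0),
            dequantize(c, -1.0, 1.0))

lat, lon, z = unpack(pack(40.7128, -74.0060, 0.5))
# Precision loss is bounded by one quantization step per dimension.
assert abs(lat - 40.7128) < 180.0 / SCALE
assert abs(lon - (-74.0060)) < 360.0 / SCALE
assert abs(z - 0.5) < 2.0 / SCALE
```

With 21 bits per dimension, latitude resolution is about 180/2^21 ≈ 0.000086 degrees, which is the kind of "acceptable precision loss" trade-off the description alludes to.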



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-19 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703091#comment-14703091
 ] 

Karl Wright edited comment on LUCENE-6699 at 8/19/15 2:27 PM:
--

Ok, I'm swamped at the moment, so anything you can do to describe the sequence 
of interactions with Geo3D that demonstrate a problem or inconsistency would be 
very useful.  I will have time Thursday evening and Friday morning to look at 
those in detail I think. ;-)



was (Author: kwri...@metacarta.com):
Ok, I'm swamped at the moment, so anything you can do to describe the sequence 
of interactions with Geo3D that demonstrate a problem would be very useful.  I 
will have time Thursday evening and Friday morning to look at those in detail I 
think. ;-)


 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion about how we
 could stuff all 3 into a single long value with acceptable precision
 loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
 to do the fast per-hit query time filtering.
 But, second: what do we index into the BKD tree?  Can we just index
 earth surface lat/lon, and then at query time is spatial3d able to
 give me an enclosing surface lat/lon bbox for a 3d shape?  Or
 ... must we index all 3 dimensions into the BKD tree (seems like this
 could be somewhat wasteful)?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6760) New optimized DistributedQueue implementation for overseer

2015-08-19 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703277#comment-14703277
 ] 

Scott Blum commented on SOLR-6760:
--

Fixed.  To recap the IRC discussion, the old test code assumed that calling 
offer() would result in the offered element immediately being available from 
poll().  This is contrary to the design decision in the new DQ.

1) Fixed the test code assumptions and generally cleaned up the test code to 
be clearer.
2) Documented offer() semantics.
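The documented offer()/poll() decoupling can be modeled with a toy queue. This is an illustrative Python model only; the real DQ is Java and backed by ZooKeeper:

```python
class BatchedQueue:
    """Toy model: the consumer drains a locally cached snapshot, so an item
    offered after the snapshot was taken is not visible until the next refresh."""
    def __init__(self):
        self._backing = []   # stands in for the ZooKeeper children
        self._cache = []     # consumer-side snapshot

    def offer(self, item):
        self._backing.append(item)

    def _refresh(self):
        # Bulk-read the backing store once, then drain the snapshot.
        self._cache = list(self._backing)
        self._backing.clear()

    def poll(self):
        if not self._cache:
            self._refresh()
        return self._cache.pop(0) if self._cache else None

q = BatchedQueue()
q.offer("a")
q.offer("b")
assert q.poll() == "a"   # snapshot taken here: ["a", "b"]
q.offer("c")             # lands in the store, not the current snapshot
assert q.poll() == "b"   # still draining the old snapshot
assert q.poll() == "c"   # the next refresh finally sees "c"
```

This is why a test that offers an element and immediately expects it from poll() encodes an assumption the new design deliberately drops.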

 New optimized DistributedQueue implementation for overseer
 --

 Key: SOLR-6760
 URL: https://issues.apache.org/jira/browse/SOLR-6760
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Shalin Shekhar Mangar
 Attachments: SOLR-6760.patch, SOLR-6760.patch, SOLR-6760.patch, 
 deadlock.patch


 Currently the DQ works as follows:
 * read all items in the directory
 * sort them all 
 * take the head, return it, and discard everything else
 * rinse and repeat
 This works well when we have only a handful of items in the queue. If the 
 items in the queue number in the tens of thousands, this is 
 counterproductive.
 As the overseer queue is a multiple-producer, single-consumer queue, we can 
 read them all in bulk, and before processing each item just do a 
 zk.exists(itemname); if all is well we don't need to do the fetch-all + 
 sort thing again



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 2647 - Failure!

2015-08-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2647/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 19831 lines...]
BUILD FAILED
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/build.xml:775: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/build.xml:130: Found 1 
violations in source files (tabs instead spaces).

Total time: 75 minutes 29 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Comment Edited] (SOLR-6760) New optimized DistributedQueue implementation for overseer

2015-08-19 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703277#comment-14703277
 ] 

Scott Blum edited comment on SOLR-6760 at 8/19/15 4:25 PM:
---

Fixed.  To recap the IRC discussion, the old test code assumed that calling 
offer() would result in the offered element immediately being available from 
poll().  This is contrary to the design decision in the new DQ.

1) Fixed the test code assumptions and generally cleaned up the test code to 
be clearer.
2) Documented offer() semantics.

On the decision to decouple offer() and poll():

The rationale for the design decision is that the offering VM is not special. 
In the general case, different VMs offer while a single VM does the polling; 
the two are decoupled in reality. So there's no real need to guarantee that 
offer() followed by poll() in a single VM always returns the element 
immediately.



was (Author: dragonsinth):
Fixed.  To recap IRC discussion, the old test code assumed that calling offer() 
would result in the offered element immediately being available from poll().  
This is contrary to the design decision in the new DQ.

1) Fixed the test code assumptions and generally cleaned up the test code to be 
more clear.
2) Documented offer() semantics.

 New optimized DistributedQueue implementation for overseer
 --

 Key: SOLR-6760
 URL: https://issues.apache.org/jira/browse/SOLR-6760
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Shalin Shekhar Mangar
 Attachments: SOLR-6760.patch, SOLR-6760.patch, SOLR-6760.patch, 
 deadlock.patch


 Currently the DQ works as follows
 * read all items in the directory
 * sort them all 
 * take the head and return it and discard everything else
 * rinse and repeat
 This works well when we have only a handful of items in the queue. If the 
 number of items in the queue is much larger (in the tens of thousands), this 
 is counterproductive.
 As the overseer queue is a multiple-producer, single-consumer queue, we can 
 read the items in bulk and, before processing each item, just do a 
 zk.exists(itemname); if all is well, we don't need to do the fetch-all-and-sort 
 step again.
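The proposed optimization can be sketched as follows (invented names, and a plain Set stands in for ZooKeeper; this is not the patch itself): fetch and sort the children once, then cheaply re-check each item's existence before handling it instead of re-fetching and re-sorting the whole directory per item.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of the bulk-read idea: one getChildren-style fetch,
// one sort, then a per-item existence check (standing in for
// zk.exists(itemname)) before consuming each node.
public class BulkConsumer {
    private final Set<String> zk;  // stand-in for the ZK queue directory

    public BulkConsumer(Set<String> zk) { this.zk = zk; }

    public List<String> drain() {
        List<String> batch = new ArrayList<>(zk); // one bulk read...
        Collections.sort(batch);                  // ...and one sort
        List<String> processed = new ArrayList<>();
        for (String item : batch) {
            // The exists() check matters when another actor may have
            // removed the node since the bulk read; here it guards the
            // same race in miniature.
            if (!zk.contains(item)) continue;
            processed.add(item);
            zk.remove(item);                      // consume the node
        }
        return processed;
    }
}
```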



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6747) FingerprintFilter - a TokenFilter for clustering/linking purposes

2015-08-19 Thread Mark Harwood (JIRA)
Mark Harwood created LUCENE-6747:


 Summary: FingerprintFilter - a TokenFilter for clustering/linking 
purposes
 Key: LUCENE-6747
 URL: https://issues.apache.org/jira/browse/LUCENE-6747
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/analysis
Reporter: Mark Harwood
Priority: Minor


A TokenFilter that emits a single token which is a sorted, de-duplicated set of 
the input tokens.
This approach to normalizing text is used in tools like OpenRefine [1] and 
elsewhere [2] to help in clustering or linking texts.
The implementation proposed here has an upper limit on the size of the 
combined token which is output.

[1] https://github.com/OpenRefine/OpenRefine/wiki/Clustering-In-Depth
[2] 
https://rajmak.wordpress.com/2013/04/27/clustering-text-map-reduce-in-python/
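The fingerprinting idea itself can be sketched outside of the analysis chain (hypothetical helper, not the proposed Lucene TokenFilter): lowercase the text, split it into tokens, de-duplicate, sort, and re-join into a single fingerprint string.

```java
import java.util.TreeSet;

// Hypothetical sketch of the fingerprint normalization described above:
// a TreeSet gives de-duplication and sorted order in one step.
public class Fingerprint {
    public static String of(String text) {
        TreeSet<String> tokens = new TreeSet<>();
        for (String t : text.toLowerCase().split("\\s+")) {
            if (!t.isEmpty()) tokens.add(t);
        }
        return String.join(" ", tokens);
    }
}
```

Two texts that differ only in word order, case, or repetition then map to the same fingerprint, which is what makes this useful for clustering and record linkage.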



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5103) Plugin Improvements

2015-08-19 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703087#comment-14703087
 ] 

Erik Hatcher commented on SOLR-5103:


bq. Actually, wt=velocity with v.template=x allows serving a Velocity template 
around the search results. 

:)

bq. And, if admin-extra was loaded from classpath, there might be a way to wire 
it into the UI automatically. It may not be pretty on the first round, but at 
least something like /browse might be more possible than expected.

VrW has a VelocitySolrResourceLoader which pulls templates (anything textual 
would work), but it is constrained to only look under a `velocity/` sub-tree 
from the resource loader root. 

I'm not quite sure what you're getting at here, but one can #parse 
another Velocity template or #include anything textual with no parsing, and 
easily wrap a response with a layout to get headers/footers, etc.

 Plugin Improvements
 ---

 Key: SOLR-5103
 URL: https://issues.apache.org/jira/browse/SOLR-5103
 Project: Solr
  Issue Type: Improvement
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
 Fix For: Trunk


 I think for 5.0, we should make it easier to add plugins by defining a plugin 
 package, a la a Hadoop Job jar, which is a self-contained archive of a plugin 
 that can be easily installed (even from the UI!) and configured 
 programmatically.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 769 - Still Failing

2015-08-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/769/

1 tests failed.
REGRESSION:  
org.apache.solr.handler.TestReplicationHandler.doTestReplicateAfterCoreReload

Error Message:
expected:[{indexVersion=1439994299729,generation=2,filelist=[_0.fdt, _0.fdx, 
_0.fnm, _0.nvd, _0.nvm, _0.si, _0_LuceneVarGapFixedInterval_0.doc, 
_0_LuceneVarGapFixedInterval_0.tib, _0_LuceneVarGapFixedInterval_0.tiv, _1.fdt, 
_1.fdx, _1.fnm, _1.nvd, _1.nvm, _1.si, _1_LuceneVarGapFixedInterval_0.doc, 
_1_LuceneVarGapFixedInterval_0.tib, _1_LuceneVarGapFixedInterval_0.tiv, _2.fdt, 
_2.fdx, _2.fnm, _2.nvd, _2.nvm, _2.si, _2_LuceneVarGapFixedInterval_0.doc, 
_2_LuceneVarGapFixedInterval_0.tib, _2_LuceneVarGapFixedInterval_0.tiv, _3.fdt, 
_3.fdx, _3.fnm, _3.nvd, _3.nvm, _3.si, _3_LuceneVarGapFixedInterval_0.doc, 
_3_LuceneVarGapFixedInterval_0.tib, _3_LuceneVarGapFixedInterval_0.tiv, _4.fdt, 
_4.fdx, _4.fnm, _4.nvd, _4.nvm, _4.si, _4_LuceneVarGapFixedInterval_0.doc, 
_4_LuceneVarGapFixedInterval_0.tib, _4_LuceneVarGapFixedInterval_0.tiv, _5.fdt, 
_5.fdx, _5.fnm, _5.nvd, _5.nvm, _5.si, _5_LuceneVarGapFixedInterval_0.doc, 
_5_LuceneVarGapFixedInterval_0.tib, _5_LuceneVarGapFixedInterval_0.tiv, 
segments_2]}] but 
was:[{indexVersion=1439994299729,generation=2,filelist=[_0.fdt, _0.fdx, 
_0.fnm, _0.nvd, _0.nvm, _0.si, _0_LuceneVarGapFixedInterval_0.doc, 
_0_LuceneVarGapFixedInterval_0.tib, _0_LuceneVarGapFixedInterval_0.tiv, _1.fdt, 
_1.fdx, _1.fnm, _1.nvd, _1.nvm, _1.si, _1_LuceneVarGapFixedInterval_0.doc, 
_1_LuceneVarGapFixedInterval_0.tib, _1_LuceneVarGapFixedInterval_0.tiv, _2.fdt, 
_2.fdx, _2.fnm, _2.nvd, _2.nvm, _2.si, _2_LuceneVarGapFixedInterval_0.doc, 
_2_LuceneVarGapFixedInterval_0.tib, _2_LuceneVarGapFixedInterval_0.tiv, _3.fdt, 
_3.fdx, _3.fnm, _3.nvd, _3.nvm, _3.si, _3_LuceneVarGapFixedInterval_0.doc, 
_3_LuceneVarGapFixedInterval_0.tib, _3_LuceneVarGapFixedInterval_0.tiv, _4.fdt, 
_4.fdx, _4.fnm, _4.nvd, _4.nvm, _4.si, _4_LuceneVarGapFixedInterval_0.doc, 
_4_LuceneVarGapFixedInterval_0.tib, _4_LuceneVarGapFixedInterval_0.tiv, _5.fdt, 
_5.fdx, _5.fnm, _5.nvd, _5.nvm, _5.si, _5_LuceneVarGapFixedInterval_0.doc, 
_5_LuceneVarGapFixedInterval_0.tib, _5_LuceneVarGapFixedInterval_0.tiv, 
segments_2]}, {indexVersion=1439994299729,generation=3,filelist=[_6.fdt, 
_6.fdx, _6.fnm, _6.nvd, _6.nvm, _6.si, _6_LuceneVarGapFixedInterval_0.doc, 
_6_LuceneVarGapFixedInterval_0.tib, _6_LuceneVarGapFixedInterval_0.tiv, 
segments_3]}]

Stack Trace:
java.lang.AssertionError: 
expected:[{indexVersion=1439994299729,generation=2,filelist=[_0.fdt, _0.fdx, 
_0.fnm, _0.nvd, _0.nvm, _0.si, _0_LuceneVarGapFixedInterval_0.doc, 
_0_LuceneVarGapFixedInterval_0.tib, _0_LuceneVarGapFixedInterval_0.tiv, _1.fdt, 
_1.fdx, _1.fnm, _1.nvd, _1.nvm, _1.si, _1_LuceneVarGapFixedInterval_0.doc, 
_1_LuceneVarGapFixedInterval_0.tib, _1_LuceneVarGapFixedInterval_0.tiv, _2.fdt, 
_2.fdx, _2.fnm, _2.nvd, _2.nvm, _2.si, _2_LuceneVarGapFixedInterval_0.doc, 
_2_LuceneVarGapFixedInterval_0.tib, _2_LuceneVarGapFixedInterval_0.tiv, _3.fdt, 
_3.fdx, _3.fnm, _3.nvd, _3.nvm, _3.si, _3_LuceneVarGapFixedInterval_0.doc, 
_3_LuceneVarGapFixedInterval_0.tib, _3_LuceneVarGapFixedInterval_0.tiv, _4.fdt, 
_4.fdx, _4.fnm, _4.nvd, _4.nvm, _4.si, _4_LuceneVarGapFixedInterval_0.doc, 
_4_LuceneVarGapFixedInterval_0.tib, _4_LuceneVarGapFixedInterval_0.tiv, _5.fdt, 
_5.fdx, _5.fnm, _5.nvd, _5.nvm, _5.si, _5_LuceneVarGapFixedInterval_0.doc, 
_5_LuceneVarGapFixedInterval_0.tib, _5_LuceneVarGapFixedInterval_0.tiv, 
segments_2]}] but 
was:[{indexVersion=1439994299729,generation=2,filelist=[_0.fdt, _0.fdx, 
_0.fnm, _0.nvd, _0.nvm, _0.si, _0_LuceneVarGapFixedInterval_0.doc, 
_0_LuceneVarGapFixedInterval_0.tib, _0_LuceneVarGapFixedInterval_0.tiv, _1.fdt, 
_1.fdx, _1.fnm, _1.nvd, _1.nvm, _1.si, _1_LuceneVarGapFixedInterval_0.doc, 
_1_LuceneVarGapFixedInterval_0.tib, _1_LuceneVarGapFixedInterval_0.tiv, _2.fdt, 
_2.fdx, _2.fnm, _2.nvd, _2.nvm, _2.si, _2_LuceneVarGapFixedInterval_0.doc, 
_2_LuceneVarGapFixedInterval_0.tib, _2_LuceneVarGapFixedInterval_0.tiv, _3.fdt, 
_3.fdx, _3.fnm, _3.nvd, _3.nvm, _3.si, _3_LuceneVarGapFixedInterval_0.doc, 
_3_LuceneVarGapFixedInterval_0.tib, _3_LuceneVarGapFixedInterval_0.tiv, _4.fdt, 
_4.fdx, _4.fnm, _4.nvd, _4.nvm, _4.si, _4_LuceneVarGapFixedInterval_0.doc, 
_4_LuceneVarGapFixedInterval_0.tib, _4_LuceneVarGapFixedInterval_0.tiv, _5.fdt, 
_5.fdx, _5.fnm, _5.nvd, _5.nvm, _5.si, _5_LuceneVarGapFixedInterval_0.doc, 
_5_LuceneVarGapFixedInterval_0.tib, _5_LuceneVarGapFixedInterval_0.tiv, 
segments_2]}, {indexVersion=1439994299729,generation=3,filelist=[_6.fdt, 
_6.fdx, _6.fnm, _6.nvd, _6.nvm, _6.si, _6_LuceneVarGapFixedInterval_0.doc, 
_6_LuceneVarGapFixedInterval_0.tib, _6_LuceneVarGapFixedInterval_0.tiv, 
segments_3]}]
at 
__randomizedtesting.SeedInfo.seed([8946ACC34EE1012F:AC91B7F33EA90F2C]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at 

[jira] [Updated] (LUCENE-6747) FingerprintFilter - a TokenFilter for clustering/linking purposes

2015-08-19 Thread Mark Harwood (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Harwood updated LUCENE-6747:
-
Attachment: fingerprintv1.patch

Proposed implementation and test

 FingerprintFilter - a TokenFilter for clustering/linking purposes
 -

 Key: LUCENE-6747
 URL: https://issues.apache.org/jira/browse/LUCENE-6747
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/analysis
Reporter: Mark Harwood
Priority: Minor
 Attachments: fingerprintv1.patch


 A TokenFilter that emits a single token which is a sorted, de-duplicated set 
 of the input tokens.
 This approach to normalizing text is used in tools like OpenRefine [1] and 
 elsewhere [2] to help in clustering or linking texts.
 The implementation proposed here has an upper limit on the size of the 
 combined token which is output.
 [1] https://github.com/OpenRefine/OpenRefine/wiki/Clustering-In-Depth
 [2] 
 https://rajmak.wordpress.com/2013/04/27/clustering-text-map-reduce-in-python/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5103) Plugin Improvements

2015-08-19 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703121#comment-14703121
 ] 

Upayavira commented on SOLR-5103:
-

All the above seems to be describing OSGi, for which we have the Apache Felix 
project here at Apache. If we are considering plugins that can hold 
conflicting dependencies, that does seem the obvious way to go.

 Plugin Improvements
 ---

 Key: SOLR-5103
 URL: https://issues.apache.org/jira/browse/SOLR-5103
 Project: Solr
  Issue Type: Improvement
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
 Fix For: Trunk


 I think for 5.0, we should make it easier to add plugins by defining a plugin 
 package, a la a Hadoop Job jar, which is a self-contained archive of a plugin 
 that can be easily installed (even from the UI!) and configured 
 programmatically.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Windows (64bit/jdk1.7.0_80) - Build # 5042 - Failure!

2015-08-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/5042/
Java: 64bit/jdk1.7.0_80 -XX:+UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 20312 lines...]
BUILD FAILED
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\build.xml:785: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\build.xml:130: Found 1 
violations in source files (tabs instead spaces).

Total time: 86 minutes 3 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Updated] (SOLR-6760) New optimized DistributedQueue implementation for overseer

2015-08-19 Thread Scott Blum (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Blum updated SOLR-6760:
-
Attachment: SOLR-6760.patch

Rewrite DQ test code to fix race, document offer() semantics.

 New optimized DistributedQueue implementation for overseer
 --

 Key: SOLR-6760
 URL: https://issues.apache.org/jira/browse/SOLR-6760
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Shalin Shekhar Mangar
 Attachments: SOLR-6760.patch, SOLR-6760.patch, SOLR-6760.patch, 
 deadlock.patch


 Currently the DQ works as follows
 * read all items in the directory
 * sort them all 
 * take the head and return it and discard everything else
 * rinse and repeat
 This works well when we have only a handful of items in the queue. If the 
 number of items in the queue is much larger (in the tens of thousands), this 
 is counterproductive.
 As the overseer queue is a multiple-producer, single-consumer queue, we can 
 read the items in bulk and, before processing each item, just do a 
 zk.exists(itemname); if all is well, we don't need to do the fetch-all-and-sort 
 step again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-19 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703096#comment-14703096
 ] 

Michael McCandless commented on LUCENE-6699:


OK I'll see if I can contain the failure somehow ...

Or: we could maybe disable this buggy opto for now, and wrap up / land this 
branch?  I.e. open a follow-on issue for the opto.

The tests seem to pass w/o the opto.

The opto should be a sizable win for smallish query shapes, because it means 
BKD tree can recurse with very fast bbox overlap checking (a few if statements, 
no new objects) instead of building a GeoArea for each BKD cell (3d rect) and 
then relating that to the shape, as it recurses.

 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion about how we
 could stuff all 3 into a single long value with acceptable precision
 loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
 to do the fast per-hit query time filtering.
 But, second: what do we index into the BKD tree?  Can we just index
 earth surface lat/lon, and then at query time is spatial3d able to
 give me an enclosing surface lat/lon bbox for a 3d shape?  Or
 ... must we index all 3 dimensions into the BKD tree (seems like this
 could be somewhat wasteful)?
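The "stuff all 3 into a single long" idea mentioned above can be sketched as follows (a hypothetical encoding, not anything in the patch): quantize each coordinate to 21 bits, so three coordinates fit in the 63 usable bits of a long, at the cost of the precision loss the comment anticipates.

```java
// Hypothetical sketch: pack three coordinates (each assumed normalized
// into [0, 1)) into one long using 21 bits per dimension.
public class PackedPoint {
    private static final int BITS = 21;
    private static final long MASK = (1L << BITS) - 1;

    static long quantize(double v) {
        return (long) (v * (1L << BITS)) & MASK;   // lossy: ~2^-21 resolution
    }

    static double dequantize(long q) {
        return (double) q / (1L << BITS);
    }

    public static long pack(double x, double y, double z) {
        return (quantize(x) << (2 * BITS)) | (quantize(y) << BITS) | quantize(z);
    }

    public static double unpackX(long p) { return dequantize((p >>> (2 * BITS)) & MASK); }
    public static double unpackY(long p) { return dequantize((p >>> BITS) & MASK); }
    public static double unpackZ(long p) { return dequantize(p & MASK); }
}
```

Whether ~2^-21 of the normalized range is "acceptable precision loss" for per-hit filtering is exactly the open question in the discussion; BinaryDocValues would avoid the loss at the cost of more bytes per document.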



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 295 - Failure

2015-08-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/295/

2 tests failed.
REGRESSION:  org.apache.solr.cloud.BasicDistributedZkTest.test

Error Message:
commitWithin did not work on node: http://127.0.0.1:54057/collection1 
expected:<68> but was:<67>

Stack Trace:
java.lang.AssertionError: commitWithin did not work on node: 
http://127.0.0.1:54057/collection1 expected:<68> but was:<67>
at 
__randomizedtesting.SeedInfo.seed([C42F9429DE77AD4D:4C7BABF3708BC0B5]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.BasicDistributedZkTest.test(BasicDistributedZkTest.java:333)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 

[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.8.0_51) - Build # 13655 - Failure!

2015-08-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/13655/
Java: 32bit/jdk1.8.0_51 -client -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 20312 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:785: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:130: Found 1 violations 
in source files (tabs instead spaces).

Total time: 59 minutes 29 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

VOTE: RC0 Release of apache-solr-ref-guide-5.3.pdf

2015-08-19 Thread Cassandra Targett
Please VOTE to release the following as apache-solr-ref-guide-5.3.pdf.

https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-5.3-RC0/

$ cat apache-solr-ref-guide-5.3-RC0/apache-solr-ref-guide-5.3.pdf.sha1

076fa1cb986a8bc8ac873e65e6ef77a841336221  apache-solr-ref-guide-5.3.pdf


Thanks,

Cassandra


[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-19 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703091#comment-14703091
 ] 

Karl Wright commented on LUCENE-6699:
-

Ok, I'm swamped at the moment, so anything you can do to describe the sequence 
of interactions with Geo3D that demonstrate a problem would be very useful.  I 
will have time Thursday evening and Friday morning to look at those in detail I 
think. ;-)


 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion about how we
 could stuff all 3 into a single long value with acceptable precision
 loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
 to do the fast per-hit query time filtering.
 But, second: what do we index into the BKD tree?  Can we just index
 earth surface lat/lon, and then at query time is spatial3d able to
 give me an enclosing surface lat/lon bbox for a 3d shape?  Or
 ... must we index all 3 dimensions into the BKD tree (seems like this
 could be somewhat wasteful)?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-19 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703130#comment-14703130
 ] 

Karl Wright commented on LUCENE-6699:
-

Hi [~mikemccand],

I'm not comfortable with landing the branch until we at least understand the 
problem.  If the tests always pass without the optimization, then the problem 
must be that the bounds are simply incorrect for some particular shape.  It 
should be trivial to determine which shape leads to a bad bounds, and I can 
chase it from there.  Can we confirm that picture?

 

 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion about how we
 could stuff all 3 into a single long value with acceptable precision
 loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
 to do the fast per-hit query time filtering.
 But, second: what do we index into the BKD tree?  Can we just index
 earth surface lat/lon, and then at query time is spatial3d able to
 give me an enclosing surface lat/lon bbox for a 3d shape?  Or
 ... must we index all 3 dimensions into the BKD tree (seems like this
 could be somewhat wasteful)?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_51) - Build # 5172 - Failure!

2015-08-19 Thread Adrien Grand
I removed the tab from common-build.xml.

On Wed, Aug 19, 2015 at 3:40 PM, Policeman Jenkins Server
jenk...@thetaphi.de wrote:
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5172/
 Java: 32bit/jdk1.8.0_51 -server -XX:+UseSerialGC

 All tests passed

 Build Log:
 [...truncated 20022 lines...]
 BUILD FAILED
 C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:775: The 
 following error occurred while executing this line:
 C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:130: 
 Found 1 violations in source files (tabs instead spaces).

 Total time: 78 minutes 36 seconds
 Build step 'Invoke Ant' marked build as failure
 Archiving artifacts
 [WARNINGS] Skipping publisher since build result is FAILURE
 Recording test results
 Email was triggered for: Failure - Any
 Sending email for trigger: Failure - Any




 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org



-- 
Adrien

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6746) Compound queries should create sub weights through IndexSearcher

2015-08-19 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-6746:
-
Attachment: LUCENE-6746.patch

Here is a patch.

 Compound queries should create sub weights through IndexSearcher
 

 Key: LUCENE-6746
 URL: https://issues.apache.org/jira/browse/LUCENE-6746
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-6746.patch


 By creating sub weights through IndexSearcher, we give IndexSearcher a chance 
 to add a caching wrapper. We were already doing it for BooleanQuery and 
 ConstantScoreQuery but forgot to also modify DisjunctionMaxQuery, 
 BoostingQuery and BoostedQuery.
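The pattern being described can be sketched generically (invented names, not the Lucene API): a compound asks a searcher-like factory for each sub weight instead of constructing it directly, which gives the factory one central place to wrap every child, e.g. with a caching decorator.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Hypothetical sketch of "create sub weights through the searcher":
// routing child construction through one factory lets that factory
// decorate every child uniformly.
interface Weight { String describe(); }

class Factory {
    private final Function<String, Weight> wrapper;
    Factory(Function<String, Weight> wrapper) { this.wrapper = wrapper; }
    Weight createWeight(String query) { return wrapper.apply(query); }
}

class Compound {
    static List<Weight> subWeights(Factory f, List<String> subQueries) {
        List<Weight> out = new ArrayList<>();
        for (String q : subQueries) {
            out.add(f.createWeight(q)); // via the factory, never `new` directly
        }
        return out;
    }
}
```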



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6746) Compound queries should create sub weights through IndexSearcher

2015-08-19 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-6746:


 Summary: Compound queries should create sub weights through 
IndexSearcher
 Key: LUCENE-6746
 URL: https://issues.apache.org/jira/browse/LUCENE-6746
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-6746.patch

By creating sub weights through IndexSearcher, we give IndexSearcher a chance 
to add a caching wrapper. We were already doing it for BooleanQuery and 
ConstantScoreQuery but forgot to also modify DisjunctionMaxQuery, BoostingQuery 
and BoostedQuery.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_60-ea-b25) - Build # 13921 - Failure!

2015-08-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/13921/
Java: 32bit/jdk1.8.0_60-ea-b25 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.client.solrj.impl.CloudSolrClientTest.test

Error Message:
Error from server at http://127.0.0.1:46227/gx/checkStateVerCol: no servers 
hosting shard: shard2

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:46227/gx/checkStateVerCol: no servers hosting 
shard: shard2
at 
__randomizedtesting.SeedInfo.seed([D1A4036987FD3D7A:59F03CB329015082]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:560)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.stateVersionParamTest(CloudSolrClientTest.java:542)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.test(CloudSolrClientTest.java:115)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)


[jira] [Updated] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-19 Thread Karl Wright (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Wright updated LUCENE-6699:

Attachment: LUCENE-6699.patch

[~mikemccand]: Found it.  Patch attached.

 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion about how we
 could stuff all 3 into a single long value with acceptable precision
 loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
 to do the fast per-hit query time filtering.
 But, second: what do we index into the BKD tree?  Can we just index
 earth surface lat/lon, and then at query time is spatial3d able to
 give me an enclosing surface lat/lon bbox for a 3d shape?  Or
 ... must we index all 3 dimensions into the BKD tree (seems like this
 could be somewhat wasteful)?
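One concrete reading of the "stuff all 3 into a single long" idea above, sketched with 21 bits per dimension (63 bits total) over the spatial3d planet range [-1.002, 1.002]. The bit budget and constants are illustrative, not what any patch on this issue actually uses:

```java
// Hypothetical packing of x/y/z into one long: 21 bits per dimension.
class XYZPacker {
    static final double MIN = -1.002, MAX = 1.002;
    static final int BITS = 21;
    static final long MASK = (1L << BITS) - 1;

    static long encodeDim(double v) {
        // scale [MIN, MAX] onto [0, 2^21 - 1] and round
        return Math.round((v - MIN) / (MAX - MIN) * MASK);
    }
    static double decodeDim(long bits) {
        return MIN + (bits / (double) MASK) * (MAX - MIN);
    }
    static long pack(double x, double y, double z) {
        return (encodeDim(x) << (2 * BITS)) | (encodeDim(y) << BITS) | encodeDim(z);
    }
    static double[] unpack(long packed) {
        return new double[] {
            decodeDim((packed >>> (2 * BITS)) & MASK),
            decodeDim((packed >>> BITS) & MASK),
            decodeDim(packed & MASK)
        };
    }
}
```

With 21 bits the quantum is about 1e-6 per dimension, which on an Earth-radius sphere is on the order of metres of precision loss per coordinate.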






[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-19 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703503#comment-14703503
 ] 

Karl Wright commented on LUCENE-6699:
-

Investigating another failure:

{code}
   [junit4] Suite: org.apache.lucene.bkdtree3d.TestGeo3DPointField
   [junit4]   2> VIII 19, 2015 11:13:35 PM com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler uncaughtException
   [junit4]   2> WARNING: Uncaught exception in thread: Thread[T3,5,TGRP-TestGeo3DPointField]
   [junit4]   2> java.lang.AssertionError
   [junit4]   2>    at __randomizedtesting.SeedInfo.seed([D03EF31A709F9117]:0)
   [junit4]   2>    at org.apache.lucene.bkdtree3d.PointInGeo3DShapeQuery$1.scorer(PointInGeo3DShapeQuery.java:105)
   [junit4]   2>    at org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.scorer(LRUQueryCache.java:581)
   [junit4]   2>    at org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
   [junit4]   2>    at org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:69)
   [junit4]   2>    at org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:69)
   [junit4]   2>    at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:618)
   [junit4]   2>    at org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:92)
   [junit4]   2>    at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:425)
   [junit4]   2>    at org.apache.lucene.bkdtree3d.TestGeo3DPointField$4._run(TestGeo3DPointField.java:586)
   [junit4]   2>    at org.apache.lucene.bkdtree3d.TestGeo3DPointField$4.run(TestGeo3DPointField.java:520)
   [junit4]   2>
   [junit4]   2> NOTE: reproduce with: ant test -Dtestcase=TestGeo3DPointField -Dtests.method=testRandomTiny -Dtests.seed=D03EF31A709F9117 -Dtests.slow=true -Dtests.locale=bg -Dtests.timezone=Indian/Kerguelen -Dtests.asserts=true -Dtests.file.encoding=Cp1252
   [junit4] ERROR   0.62s J0 | TestGeo3DPointField.testRandomTiny
   [junit4]    > Throwable #1: com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=17, name=T3, state=RUNNABLE, group=TGRP-TestGeo3DPointField]
   [junit4]    >    at __randomizedtesting.SeedInfo.seed([D03EF31A709F9117:99792D5C2EBEA9BB]:0)
   [junit4]    > Caused by: java.lang.AssertionError
   [junit4]    >    at __randomizedtesting.SeedInfo.seed([D03EF31A709F9117]:0)
   [junit4]    >    at org.apache.lucene.bkdtree3d.PointInGeo3DShapeQuery$1.scorer(PointInGeo3DShapeQuery.java:105)
   [junit4]    >    at org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.scorer(LRUQueryCache.java:581)
   [junit4]    >    at org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
   [junit4]    >    at org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:69)
   [junit4]    >    at org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:69)
   [junit4]    >    at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:618)
   [junit4]    >    at org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:92)
   [junit4]    >    at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:425)
   [junit4]    >    at org.apache.lucene.bkdtree3d.TestGeo3DPointField$4._run(TestGeo3DPointField.java:586)
   [junit4]    >    at org.apache.lucene.bkdtree3d.TestGeo3DPointField$4.run(TestGeo3DPointField.java:520)
   [junit4] IGNOR/A 0.02s J0 | TestGeo3DPointField.testRandomBig
   [junit4]    > Assumption #1: 'nightly' test group is disabled (@Nightly())
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene53): {}, docValues:{}, sim=DefaultSimilarity, locale=bg, timezone=Indian/Kerguelen
   [junit4]   2> NOTE: Windows 7 6.1 amd64/Oracle Corporation 1.8.0_05 (64-bit)/cpus=4,threads=1,free=171382464,total=245366784
   [junit4]   2> NOTE: All tests run in this JVM: [TestGeo3DPointField]
{code}

Stay tuned...

 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch



[jira] [Updated] (SOLR-6760) New optimized DistributedQueue implementation for overseer

2015-08-19 Thread Scott Blum (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Blum updated SOLR-6760:
-
Attachment: SOLR-6760.patch

Final patch? DistributedQueueExt -> OverseerCollectionQueue

 New optimized DistributedQueue implementation for overseer
 --

 Key: SOLR-6760
 URL: https://issues.apache.org/jira/browse/SOLR-6760
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Shalin Shekhar Mangar
 Attachments: SOLR-6760.patch, SOLR-6760.patch, SOLR-6760.patch, 
 SOLR-6760.patch, deadlock.patch


 Currently the DQ works as follows
 * read all items in the directory
 * sort them all 
 * take the head and return it and discard everything else
 * rinse and repeat
 This works well when we have only a handful of items in the queue. If the 
 number of items in the queue is much larger (in the tens of thousands), this 
 is counterproductive.
 As the overseer queue is a multiple-producer, single-consumer queue, we can 
 read the items in bulk, and before processing each item just do a 
 zk.exists(itemname); if all is well, we don't need to do the fetch-all + 
 sort thing again
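The proposal above can be sketched as follows. ZkStore here is a hypothetical stand-in for the ZooKeeper client, not Solr's actual API: the queue children are listed and sorted once per refill, and each cached head is then validated with a cheap exists() call instead of re-listing and re-sorting the whole node.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Deque;
import java.util.List;

// Hypothetical minimal view of the ZooKeeper operations involved.
interface ZkStore {
    List<String> listChildren();   // expensive: reads every queue item
    boolean exists(String item);   // cheap: one round-trip per item
}

class BulkQueue {
    private final ZkStore zk;
    private final Deque<String> cached = new ArrayDeque<>();
    BulkQueue(ZkStore zk) { this.zk = zk; }

    /** Returns the current head item, or null if the queue looks empty. */
    String takeHead() {
        if (cached.isEmpty()) {                  // refill in bulk, at most once per call
            List<String> items = new ArrayList<>(zk.listChildren());
            Collections.sort(items);             // sort once per refill, not per take
            cached.addAll(items);
        }
        while (true) {
            String head = cached.pollFirst();
            if (head == null) return null;       // drained; caller retries later
            if (zk.exists(head)) return head;    // still present: safe to process
            // otherwise the item vanished since the bulk read; skip it
        }
    }
}
```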






[jira] [Updated] (SOLR-7836) Possible deadlock when closing refcounted index writers.

2015-08-19 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-7836:
-
Attachment: SOLR-7836-reorg.patch

This patch should be applied after SOLR-7836.patch if anyone is back-porting

Here's a new patch for comment that 
- puts the ulog writes back inside the IW blocks
- pulls out the problematic open searcher in ulog.add to a separate method.
- calls the extracted method from the two places that could call 
UpdateLog.add(cmd, true), which was the condition for opening a new searcher. 
The calls to the new method must be outside the IW block.
- removes the extra synchronized blocks on solrCoreState.getUpdatelock()
- changes the test to hit this condition harder as per Yonik.

It's possible that the CDCR code calls ulog.add with clearCaches==true, in 
which case the extracted method in ulog is called. Frankly I doubt that's 
necessary, but it seems harmless.

I don't particularly like decoupling the open searcher from the updatelog.add, 
but I like lockups even less. Not to mention possible tlog craziness. So I'll 
live with my dislike.

I think this addresses concerns about the tlog synchronization.

I ran this last night for 360 iterations, then made some trivial changes (yeah, 
right). I'll try some beasting on this today plus StressTestReorder, then do 
the usual precommit and full test. Assuming all that goes well I'll probably 
check this in tomorrow and call this done unless there are objections.

This, coupled with Yonik's changes for the NPE, should put this to bed.

[~markrmil...@gmail.com] [~ysee...@gmail.com] all comments welcome of course.

 Possible deadlock when closing refcounted index writers.
 

 Key: SOLR-7836
 URL: https://issues.apache.org/jira/browse/SOLR-7836
 Project: Solr
  Issue Type: Bug
Reporter: Erick Erickson
Assignee: Erick Erickson
 Fix For: Trunk, 5.4

 Attachments: SOLR-7836-reorg.patch, SOLR-7836-synch.patch, 
 SOLR-7836.patch, SOLR-7836.patch, SOLR-7836.patch, deadlock_3.res.zip, 
 deadlock_5_pass_iw.res.zip, deadlock_test


 Preliminary patch for what looks like a possible race condition between 
 writerFree and pauseWriter in DefaultSolrCoreState.
 Looking for comments and/or why I'm completely missing the boat.






[jira] [Commented] (LUCENE-6747) FingerprintFilter - a TokenFilter for clustering/linking purposes

2015-08-19 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703316#comment-14703316
 ] 

Adrien Grand commented on LUCENE-6747:
--

If you could tolerate that these fingerprints are not reliable identifiers of 
your input, I wonder whether we could make this more efficient by just using a 
hash function that doesn't depend on the order of its inputs?

Otherwise this looks rather good to me. Instead of taking the min offset and 
the max offset as offsets for the final token, I'm wondering whether it might 
make more sense to use 0 and the final offset (the one returned after end() 
has been called) instead, so that we don't treat token chars differently 
depending on whether they appear before/after the tokens or in the middle. By 
the way, even with the current approach, we don't need to call Math.min/max: 
as tokens are supposed to be emitted in order, the start offset would be the 
start offset of the first token and the end offset would be the end offset of 
the last token.
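The order-independent alternative can be sketched like this (illustrative code, not part of the attached patch): de-duplicate, then combine per-token hashes with addition, which is commutative, so no sorting pass is needed. The trade-off is that the fingerprint becomes an opaque 64-bit value rather than a readable token.

```java
import java.util.HashSet;
import java.util.Set;

class OrderFreeFingerprint {
    static long fingerprint(String... tokens) {
        Set<String> seen = new HashSet<>();
        long h = 0;
        for (String t : tokens) {
            if (!seen.add(t)) continue;                   // de-duplicate, as the filter does
            long x = t.hashCode() * 0x9E3779B97F4A7C15L;  // mix each hash to spread bits
            x ^= x >>> 32;
            h += x;                                       // '+' is order-independent
        }
        return h;
    }
}
```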

 FingerprintFilter - a TokenFilter for clustering/linking purposes
 -

 Key: LUCENE-6747
 URL: https://issues.apache.org/jira/browse/LUCENE-6747
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/analysis
Reporter: Mark Harwood
Priority: Minor
 Attachments: fingerprintv1.patch


 A TokenFilter that emits a single token which is a sorted, de-duplicated set 
 of the input tokens.
 This approach to normalizing text is used in tools like OpenRefine[1] and 
 elsewhere [2] to help in clustering or linking texts.
 The implementation proposed here has an upper limit on the size of the 
 combined token which is output.
 [1] https://github.com/OpenRefine/OpenRefine/wiki/Clustering-In-Depth
 [2] 
 https://rajmak.wordpress.com/2013/04/27/clustering-text-map-reduce-in-python/
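The normalization described above amounts to roughly this (names are illustrative, not the attached patch's API): sort, de-duplicate, join with a separator, and emit nothing if the combined token would exceed the size cap.

```java
import java.util.List;
import java.util.SortedSet;
import java.util.TreeSet;

class Fingerprint {
    /** Returns the fingerprint token, or null if it would exceed maxLen. */
    static String of(List<String> tokens, char sep, int maxLen) {
        SortedSet<String> unique = new TreeSet<>(tokens);  // sort + de-duplicate
        StringBuilder sb = new StringBuilder();
        for (String t : unique) {
            if (sb.length() > 0) sb.append(sep);
            sb.append(t);
            if (sb.length() > maxLen) return null;         // over the cap: give up
        }
        return sb.toString();
    }
}
```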






[JENKINS] Lucene-Solr-Tests-5.3-Java7 - Build # 22 - Failure

2015-08-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.3-Java7/22/

1 tests failed.
REGRESSION:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=2938, 
name=SocketProxy-Response-58174:45951, state=RUNNABLE, 
group=TGRP-HttpPartitionTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=2938, name=SocketProxy-Response-58174:45951, 
state=RUNNABLE, group=TGRP-HttpPartitionTest]
at 
__randomizedtesting.SeedInfo.seed([5DB8AC889CBFD532:D5EC93523243B8CA]:0)
Caused by: java.lang.RuntimeException: java.net.SocketException: Socket is 
closed
at __randomizedtesting.SeedInfo.seed([5DB8AC889CBFD532]:0)
at 
org.apache.solr.cloud.SocketProxy$Bridge$Pump.run(SocketProxy.java:347)
Caused by: java.net.SocketException: Socket is closed
at java.net.Socket.setSoTimeout(Socket.java:1101)
at 
org.apache.solr.cloud.SocketProxy$Bridge$Pump.run(SocketProxy.java:344)




Build Log:
[...truncated 10217 lines...]
   [junit4] Suite: org.apache.solr.cloud.HttpPartitionTest
   [junit4]   2 Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.3-Java7/solr/build/solr-core/test/J2/temp/solr.cloud.HttpPartitionTest_5DB8AC889CBFD532-001/init-core-data-001
   [junit4]   2 382700 INFO  
(SUITE-HttpPartitionTest-seed#[5DB8AC889CBFD532]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /o/bm
   [junit4]   2 382711 INFO  
(TEST-HttpPartitionTest.test-seed#[5DB8AC889CBFD532]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2 382711 INFO  (Thread-909) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2 382711 INFO  (Thread-909) [] o.a.s.c.ZkTestServer Starting 
server
   [junit4]   2 382811 INFO  
(TEST-HttpPartitionTest.test-seed#[5DB8AC889CBFD532]) [] 
o.a.s.c.ZkTestServer start zk server on port:54797
   [junit4]   2 382812 INFO  
(TEST-HttpPartitionTest.test-seed#[5DB8AC889CBFD532]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2 382830 INFO  
(TEST-HttpPartitionTest.test-seed#[5DB8AC889CBFD532]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2 382846 INFO  (zkCallback-422-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@1d827353 
name:ZooKeeperConnection Watcher:127.0.0.1:54797 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2 382850 INFO  
(TEST-HttpPartitionTest.test-seed#[5DB8AC889CBFD532]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2 382850 INFO  
(TEST-HttpPartitionTest.test-seed#[5DB8AC889CBFD532]) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2 382850 INFO  
(TEST-HttpPartitionTest.test-seed#[5DB8AC889CBFD532]) [] 
o.a.s.c.c.SolrZkClient makePath: /solr
   [junit4]   2 382853 INFO  
(TEST-HttpPartitionTest.test-seed#[5DB8AC889CBFD532]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2 382864 INFO  
(TEST-HttpPartitionTest.test-seed#[5DB8AC889CBFD532]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2 382867 INFO  (zkCallback-423-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@1adfe356 
name:ZooKeeperConnection Watcher:127.0.0.1:54797/solr got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2 382867 INFO  
(TEST-HttpPartitionTest.test-seed#[5DB8AC889CBFD532]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2 382867 INFO  
(TEST-HttpPartitionTest.test-seed#[5DB8AC889CBFD532]) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2 382867 INFO  
(TEST-HttpPartitionTest.test-seed#[5DB8AC889CBFD532]) [] 
o.a.s.c.c.SolrZkClient makePath: /collections/collection1
   [junit4]   2 382875 INFO  
(TEST-HttpPartitionTest.test-seed#[5DB8AC889CBFD532]) [] 
o.a.s.c.c.SolrZkClient makePath: /collections/collection1/shards
   [junit4]   2 382877 INFO  
(TEST-HttpPartitionTest.test-seed#[5DB8AC889CBFD532]) [] 
o.a.s.c.c.SolrZkClient makePath: /collections/control_collection
   [junit4]   2 382878 INFO  
(TEST-HttpPartitionTest.test-seed#[5DB8AC889CBFD532]) [] 
o.a.s.c.c.SolrZkClient makePath: /collections/control_collection/shards
   [junit4]   2 382879 INFO  
(TEST-HttpPartitionTest.test-seed#[5DB8AC889CBFD532]) [] 
o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.3-Java7/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml
 to /configs/conf1/solrconfig.xml
   [junit4]   2 382880 INFO  
(TEST-HttpPartitionTest.test-seed#[5DB8AC889CBFD532]) [] 
o.a.s.c.c.SolrZkClient makePath: /configs/conf1/solrconfig.xml
   

[jira] [Reopened] (SOLR-7836) Possible deadlock when closing refcounted index writers.

2015-08-19 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson reopened SOLR-7836:
--

Improved patch for comment

 Possible deadlock when closing refcounted index writers.
 

 Key: SOLR-7836
 URL: https://issues.apache.org/jira/browse/SOLR-7836
 Project: Solr
  Issue Type: Bug
Reporter: Erick Erickson
Assignee: Erick Erickson
 Fix For: Trunk, 5.4

 Attachments: SOLR-7836-synch.patch, SOLR-7836.patch, SOLR-7836.patch, 
 SOLR-7836.patch, deadlock_3.res.zip, deadlock_5_pass_iw.res.zip, deadlock_test








[jira] [Issue Comment Deleted] (SOLR-7836) Possible deadlock when closing refcounted index writers.

2015-08-19 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-7836:
-
Comment: was deleted

(was: Improved patch for comment)

 Possible deadlock when closing refcounted index writers.
 

 Key: SOLR-7836
 URL: https://issues.apache.org/jira/browse/SOLR-7836
 Project: Solr
  Issue Type: Bug
Reporter: Erick Erickson
Assignee: Erick Erickson
 Fix For: Trunk, 5.4

 Attachments: SOLR-7836-reorg.patch, SOLR-7836-synch.patch, 
 SOLR-7836.patch, SOLR-7836.patch, SOLR-7836.patch, deadlock_3.res.zip, 
 deadlock_5_pass_iw.res.zip, deadlock_test








[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-19 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703427#comment-14703427
 ] 

Karl Wright commented on LUCENE-6699:
-

Ok, I was able to isolate one case of failure and incorporate it into a simple 
test:

{code}
// Test case from BKD
c = new GeoCircle(PlanetModel.SPHERE, -0.765816119338, 0.991848766844, 
0.8153163226330487);
GeoPoint p1 = new GeoPoint(0.7692262265236023, -0.055089298115534646, 
-0.6365973465711254);
assertTrue(c.isWithin(p1));
xyzb = new XYZBounds();
c.getBounds(xyzb);
assertTrue(p1.x >= xyzb.getMinimumX() && p1.x <= xyzb.getMaximumX());
assertTrue(p1.y >= xyzb.getMinimumY() && p1.y <= xyzb.getMaximumY());
assertTrue(p1.z >= xyzb.getMinimumZ() && p1.z <= xyzb.getMaximumZ());

{code}

Now I can look at it. ;-)

 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch








[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-19 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703646#comment-14703646
 ] 

Michael McCandless commented on LUCENE-6699:


bq. what's the minimum resolution that BKD expects to descend to?

BKD itself has no resolution limits: it descends to a region that has < N 
points, at which point it does a linear scan of those points checking if they 
match the shape.

But the encoding we use has limited precision, using 32 bits for each of x, y, 
z (96 bits total), with range -1.002 to 1.002.  This means a point that goes 
in, using 3 doubles, will be quantized (pixelated).  However, the test takes 
this into account: when it's computing the expected value, it does the same 
pixelation that the doc values encoding did.

So, even minuscule circles should still work correctly?
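The encoding described above can be sketched like this (a simplified model; the actual Geo3D doc-values encoder may differ in detail): each dimension is mapped onto a 32-bit integer over [-1.002, 1.002]. A test comparing expected points against decoded index values has to push its expected values through the same quantization, otherwise tiny circles fail on rounding alone.

```java
class DimEncoder {
    static final double MIN = -1.002, MAX = 1.002;
    // 32 bits per dimension over [-1.002, 1.002]: one step is ~4.7e-10
    static final double SCALE = (MAX - MIN) / (1L << 32);

    static int encode(double v) {
        long ord = (long) Math.floor((v - MIN) / SCALE);   // 0 .. 2^32 - 1
        return (int) (ord + Integer.MIN_VALUE);            // store as a signed int
    }
    static double decode(int bits) {
        long ord = (long) bits - Integer.MIN_VALUE;
        return MIN + ord * SCALE;                          // lower edge of the cell
    }
}
```

Quantizing (encode then decode) moves a point by less than one step, which is far below the fudge factors a geo test would typically tolerate.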

 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch








[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703648#comment-14703648
 ] 

ASF subversion and git services commented on LUCENE-6699:
-

Commit 1696657 from [~mikemccand] in branch 'dev/branches/lucene6699'
[ https://svn.apache.org/r1696657 ]

LUCENE-6699: fix one bug, add fudge factors, add nocommits

 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch








[jira] [Updated] (LUCENE-6743) Allow Ivy lockStrategy to be overridden by system property.

2015-08-19 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated LUCENE-6743:

Attachment: LUCENE-6743.patch

 Allow Ivy lockStrategy to be overridden by system property.
 ---

 Key: LUCENE-6743
 URL: https://issues.apache.org/jira/browse/LUCENE-6743
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: Trunk, 5.4

 Attachments: LUCENE-6743.patch


 The current hard-coded lock strategy is imperfect and can fail under parallel 
 load. With Ivy 2.4 there is a better option in artifact-lock-nio. We should 
 allow the lock strategy to be overridden, like the resolutionCacheDir.
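A sketch of what the override might look like in ivysettings.xml; the system property name is an assumption for illustration (the actual property the patch introduces may differ):

```xml
<!-- hypothetical fragment: the property name ivy.lock-strategy is assumed -->
<ivysettings>
  <caches lockStrategy="${ivy.lock-strategy}" />
</ivysettings>
```

With something like this in place, the build could run as `ant resolve -Divy.lock-strategy=artifact-lock-nio`, with the property defaulting to the current strategy when unset.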






[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703670#comment-14703670
 ] 

ASF subversion and git services commented on LUCENE-6699:
-

Commit 1696661 from [~mikemccand] in branch 'dev/branches/lucene6699'
[ https://svn.apache.org/r1696661 ]

LUCENE-6699: another nocommit, show details on assert trip

 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch








[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-19 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703672#comment-14703672
 ] 

Michael McCandless commented on LUCENE-6699:


I'm not a fan of the +/- 2.0 fudge factors :)

Because: without the optimization, were the shape relation checks of smaller and smaller XYZ 
solids, as the BKD tree recurses, working correctly?







[jira] [Resolved] (LUCENE-6738) remove IndexWriterConfig.[gs]etIndexingChain

2015-08-19 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke resolved LUCENE-6738.
-
Resolution: Fixed

 remove IndexWriterConfig.[gs]etIndexingChain
 

 Key: LUCENE-6738
 URL: https://issues.apache.org/jira/browse/LUCENE-6738
 Project: Lucene - Core
  Issue Type: Task
Reporter: Christine Poerschke
Assignee: Christine Poerschke
Priority: Minor
 Fix For: Trunk, 5.4


 github pull request with proposed code change to follow. see also LUCENE-6571 
 (which concerns Javadoc errors/warnings) for context.






[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-19 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703695#comment-14703695
 ] 

Michael McCandless commented on LUCENE-6699:


OK I commented out the problematic assert and put a nocommit.

I also tested w/o the fudge factors and the test seems to be happy...







[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703694#comment-14703694
 ] 

ASF subversion and git services commented on LUCENE-6699:
-

Commit 1696665 from [~mikemccand] in branch 'dev/branches/lucene6699'
[ https://svn.apache.org/r1696665 ]

LUCENE-6699: comment out assert (nocommit) remove fudge factors







[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-19 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703521#comment-14703521
 ] 

Karl Wright commented on LUCENE-6699:
-

[~mikemccand]: This second failure looks like it may be due to rounding error.  
The shape is a really tiny geocircle (radius 1.6e-5 radians).  I'll confirm 
this picture, but I have to ask: what's the minimum resolution that BKD expects 
to descend to?  because this is pretty small...









[jira] [Commented] (SOLR-6760) New optimized DistributedQueue implementation for overseer

2015-08-19 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703549#comment-14703549
 ] 

Shalin Shekhar Mangar commented on SOLR-6760:
-

Thanks! Looks good to me. I'll commit this tomorrow morning my time.

 New optimized DistributedQueue implementation for overseer
 --

 Key: SOLR-6760
 URL: https://issues.apache.org/jira/browse/SOLR-6760
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Shalin Shekhar Mangar
 Attachments: SOLR-6760.patch, SOLR-6760.patch, SOLR-6760.patch, 
 SOLR-6760.patch, deadlock.patch


 Currently the DQ works as follows
 * read all items in the directory
 * sort them all 
 * take the head and return it and discard everything else
 * rinse and repeat
 This works well when we have only a handful of items in the queue. If the 
 number of items in the queue is much larger (tens of thousands), this is 
 counterproductive.
 As the overseer queue is a multiple-producer, single-consumer queue, we can 
 read them all in bulk, and before processing each item just do a 
 zk.exists(itemname); if all is well, we don't need to do the fetch-all + 
 sort thing again.
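The bulk-fetch idea above can be sketched as follows. All names here are assumptions for illustration (this is not Solr's actual DistributedQueue code), and ZooKeeper is abstracted behind just the two calls the description mentions: listing children and an exists() check.

```python
class BulkPeekQueue:
    """Sketch: cache one sorted bulk read and re-validate each head
    with a cheap exists() check instead of re-listing per item."""

    def __init__(self, zk, dir_path):
        self.zk = zk
        self.dir = dir_path
        self.batch = []  # cached sorted item names from the last bulk read

    def peek(self):
        if not self.batch:
            # one bulk fetch + sort, instead of one per returned item
            self.batch = sorted(self.zk.get_children(self.dir))
        while self.batch:
            head = self.batch[0]
            # cheap re-validation: the node may be gone since the bulk read
            if self.zk.exists(self.dir + "/" + head):
                return head
            self.batch.pop(0)  # stale cached entry; skip without re-listing
        # batch exhausted; the next peek() call will re-list the directory
        return None


class FakeZK:
    """Minimal in-memory stand-in for the two ZooKeeper calls used above."""

    def __init__(self, paths):
        self.nodes = set(paths)

    def get_children(self, path):
        prefix = path + "/"
        return [p[len(prefix):] for p in self.nodes if p.startswith(prefix)]

    def exists(self, path):
        return path in self.nodes


q = BulkPeekQueue(
    FakeZK({"/overseer/queue/qn-0000000002",
            "/overseer/queue/qn-0000000001",
            "/overseer/queue/qn-0000000003"}),
    "/overseer/queue")
```

The single-consumer assumption is what makes the exists() check sufficient: only this consumer removes items, so a cached name can go stale but the relative order of the survivors cannot change.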






[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-19 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703652#comment-14703652
 ] 

Michael McCandless commented on LUCENE-6699:


[~daddywri], woops, I committed the patch before your most recent one...







[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-19 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703675#comment-14703675
 ] 

Michael McCandless commented on LUCENE-6699:


Hmm, I'm seeing this test failure:

{noformat}
[junit4:pickseed] Seed property 'tests.seed' already defined: 27BE02D86F469AAF
   [junit4] JUnit4 says שלום! Master seed: 27BE02D86F469AAF
   [junit4] Executing 1 suite with 1 JVM.
   [junit4] 
   [junit4] Started J0 PID(14081@localhost).
   [junit4] Suite: org.apache.lucene.bkdtree3d.TestGeo3DPointField
   [junit4]   2> ?? 19, 2015 5:13:35 ?? com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler uncaughtException
   [junit4]   2> WARNING: Uncaught exception in thread: Thread[T1,5,TGRP-TestGeo3DPointField]
   [junit4]   2> java.lang.AssertionError: got 0
   [junit4]   2>    at __randomizedtesting.SeedInfo.seed([27BE02D86F469AAF]:0)
   [junit4]   2>    at org.apache.lucene.bkdtree3d.PointInGeo3DShapeQuery$1.scorer(PointInGeo3DShapeQuery.java:105)
   [junit4]   2>    at org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.scorer(LRUQueryCache.java:589)
   [junit4]   2>    at org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
   [junit4]   2>    at org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:69)
   [junit4]   2>    at org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:69)
   [junit4]   2>    at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:618)
   [junit4]   2>    at org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:92)
   [junit4]   2>    at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:425)
   [junit4]   2>    at org.apache.lucene.bkdtree3d.TestGeo3DPointField$4._run(TestGeo3DPointField.java:586)
   [junit4]   2>    at org.apache.lucene.bkdtree3d.TestGeo3DPointField$4.run(TestGeo3DPointField.java:520)
   [junit4]   2> 
   [junit4]   2> NOTE: reproduce with: ant test -Dtestcase=TestGeo3DPointField -Dtests.method=testRandomMedium -Dtests.seed=27BE02D86F469AAF -Dtests.locale=zh -Dtests.timezone=Atlantic/Stanley -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
{noformat}

It's the assert that the XYZBounds relationship to the shape is either WITHIN 
or OVERLAPS, but in this case it's returning CONTAINS (the shape CONTAINS the bbox).

Maybe the assert is too demanding?







[jira] [Commented] (LUCENE-6738) remove IndexWriterConfig.[gs]etIndexingChain

2015-08-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703589#comment-14703589
 ] 

ASF subversion and git services commented on LUCENE-6738:
-

Commit 1696649 from [~cpoerschke] in branch 'dev/trunk'
[ https://svn.apache.org/r1696649 ]

LUCENE-6738: remove IndexWriterConfig.[gs]etIndexingChain







[jira] [Commented] (LUCENE-6738) remove IndexWriterConfig.[gs]etIndexingChain

2015-08-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703656#comment-14703656
 ] 

ASF GitHub Bot commented on LUCENE-6738:


Github user cpoerschke closed the pull request at:

https://github.com/apache/lucene-solr/pull/199








[jira] [Commented] (LUCENE-6738) remove IndexWriterConfig.[gs]etIndexingChain

2015-08-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703655#comment-14703655
 ] 

ASF GitHub Bot commented on LUCENE-6738:


Github user cpoerschke commented on the pull request:

https://github.com/apache/lucene-solr/pull/199#issuecomment-132760976
  
committed to trunk and merged to branch_5x








[GitHub] lucene-solr pull request: LUCENE-6738: remove IndexWriterConfig.[g...

2015-08-19 Thread cpoerschke
Github user cpoerschke commented on the pull request:

https://github.com/apache/lucene-solr/pull/199#issuecomment-132760976
  
committed to trunk and merged to branch_5x


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request: LUCENE-6738: remove IndexWriterConfig.[g...

2015-08-19 Thread cpoerschke
Github user cpoerschke closed the pull request at:

https://github.com/apache/lucene-solr/pull/199





[jira] [Commented] (LUCENE-6743) Allow Ivy lockStrategy to be overridden by system property.

2015-08-19 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703665#comment-14703665
 ] 

David Smiley commented on LUCENE-6743:
--

Shouldn't we simply upgrade to Ivy 2.4.0 and enable this by default?







[jira] [Comment Edited] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-19 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703543#comment-14703543
 ] 

Karl Wright edited comment on LUCENE-6699 at 8/19/15 6:44 PM:
--

Hmm, I couldn't reproduce this with a simple test.
Here's the failure detail:
{code}
   [junit4]   2 java.lang.AssertionError:
   Solid=XYZSolid: {planetmodel=PlanetModel.SPHERE, isWholeWorld=false, 
minXplane=[A=1.0, B=0.0, C=0.0, D=-0.778774751769, side=1.0], 
maxXplane=[A=1.0, B=0.0, C=0.0, D=-0.780900134368, side=-1.0], 
minYplane=[A=0.0, B=1.0, C=0.0, D=0.002943435994670142, side=1.0], 
maxYplane=[A=0.0, B=1.0, C=0.0, D=0.0029114063562165494, side=-1.0], 
minZplane=[A=0.0, B=0.0, C=1.0, D=0.005971010432932473, side=1.0], 
maxZplane=[A=0.0, B=0.0, C=1.0, D=0.005938981247250581, side=-1.0]};
   Shape=GeoCircle: {planetmodel=PlanetModel.SPHERE, 
center=[X=0.779838725235, Y=-0.0029274211758186968, 
Z=-0.0059549958440800015], radius=1.601488279374338E-5(9.175851934781766E-4)}
{code}

Here's the test code I created that passes:

{code}
c = new GeoCircle(PlanetModel.SPHERE, -0.00595503104063, -0.00292747726474, 
1.601488279374338E-5);
xyzb = new XYZBounds();
c.getBounds(xyzb);
GeoArea area = GeoAreaFactory.makeGeoArea(PlanetModel.SPHERE,
  xyzb.getMinimumX(), xyzb.getMaximumX(), xyzb.getMinimumY(), 
xyzb.getMaximumY(), xyzb.getMinimumZ(), xyzb.getMaximumZ());

int relationship = area.getRelationship(c);
assertTrue(relationship == GeoArea.WITHIN || relationship == 
GeoArea.OVERLAPS);
{code}

Here's the math I did to get there:

{code}
 Z=-0.0059549958440800015
 Y=-0.0029274211758186968
 X=0.779838725235
 print math.asin(Z)
-0.00595503104063
 print math.atan2(Y,X)
-0.00292747726474

{code}
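The asin/atan2 recovery above can be sanity-checked with a unit-sphere round trip in plain Python, independent of geo3d (function names here are illustrative, not geo3d API):

```python
import math

def to_xyz(lat, lon):
    # unit-sphere Cartesian coordinates from latitude/longitude in radians
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

def to_lat_lon(x, y, z):
    # inverse of to_xyz for points on the unit sphere:
    # latitude from z, longitude from the x/y projection
    return math.asin(z), math.atan2(y, x)

# the lat/lon recovered in the comment above
lat, lon = -0.00595503104063, -0.00292747726474
x, y, z = to_xyz(lat, lon)
rlat, rlon = to_lat_lon(x, y, z)
```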







[jira] [Commented] (LUCENE-6743) Allow Ivy lockStrategy to be overridden by system property.

2015-08-19 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703569#comment-14703569
 ] 

Mark Miller commented on LUCENE-6743:
-

This can also improve general ivy issues - sometimes I have to clear 
~/.ivy2 because ant targets hang on resolve due to artifact locks that are 
not cleaned up. Using artifact-lock-nio should remove this issue.

 Allow Ivy lockStrategy to be overridden by system property.
 ---

 Key: LUCENE-6743
 URL: https://issues.apache.org/jira/browse/LUCENE-6743
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: Trunk, 5.4


 The current hard-coded lock strategy is imperfect and can fail under parallel 
 load. With Ivy 2.4 there is a better option in artifact-lock-nio. We should 
 allow the lock strategy to be overridden like the resolutionCacheDir.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-19 Thread Karl Wright (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Wright updated LUCENE-6699:

Attachment: LUCENE-6699.patch

[~mikemccand] After some debugging, I increased the value of MINIMUM_RESOLUTION 
to 5e-12.  This made all tests pass.

It appears that BKD really drills into potential precision issues in geo3d.
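The role a constant like MINIMUM_RESOLUTION plays is the usual epsilon-comparison one: double-arithmetic plane evaluations almost never come out exactly 0.0, so "on the plane" has to mean "within epsilon". A minimal sketch, with method names that are illustrative rather than geo3d's API:

```java
// Why a MINIMUM_RESOLUTION constant matters: geometric predicates computed in
// double arithmetic leave tiny residues, so exact comparison against zero
// fails. Names here are illustrative, not geo3d's actual API.
public class Epsilon {
    static final double MINIMUM_RESOLUTION = 5e-12; // the value chosen in this patch

    static boolean evaluatesToZero(double planeEval) {
        return Math.abs(planeEval) < MINIMUM_RESOLUTION;
    }

    public static void main(String[] args) {
        double residue = 0.1 + 0.2 - 0.3;       // not exactly zero in doubles
        System.out.println(residue == 0.0);      // exact comparison fails
        System.out.println(evaluatesToZero(residue)); // tolerant comparison succeeds
    }
}
```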

 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion about how we
 could stuff all 3 into a single long value with acceptable precision
 loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
 to do the fast per-hit query time filtering.
 But, second: what do we index into the BKD tree?  Can we just index
 earth surface lat/lon, and then at query time is spatial3d able to
 give me an enclosing surface lat/lon bbox for a 3d shape?  Or
 ... must we index all 3 dimensions into the BKD tree (seems like this
 could be somewhat wasteful)?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6743) Allow Ivy lockStrategy to be overridden by system property.

2015-08-19 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703650#comment-14703650
 ] 

Mark Miller commented on LUCENE-6743:
-

artifact-lock-nio is only in Ivy 2.4.0, so this leaves the default. You can use 
nio with build.properties though:

ivy.bootstrap.version=2.4.0
ivy_checksum_sha1=5abe4c24bbe992a9ac07ca563d5bd3e8d569e9ed
ivy.lock-strategy=artifact-lock-nio

 Allow Ivy lockStrategy to be overridden by system property.
 ---

 Key: LUCENE-6743
 URL: https://issues.apache.org/jira/browse/LUCENE-6743
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: Trunk, 5.4

 Attachments: LUCENE-6743.patch


 The current hard-coded lock strategy is imperfect and can fail under parallel 
 load. With Ivy 2.4 there is a better option in artifact-lock-nio. We should 
 allow the lock strategy to be overridden like the resolutionCacheDir.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6738) remove IndexWriterConfig.[gs]etIndexingChain

2015-08-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703651#comment-14703651
 ] 

ASF subversion and git services commented on LUCENE-6738:
-

Commit 1696658 from [~cpoerschke] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1696658 ]

LUCENE-6738: remove IndexWriterConfig.[gs]etIndexingChain (revision 1696649 
from trunk)

 remove IndexWriterConfig.[gs]etIndexingChain
 

 Key: LUCENE-6738
 URL: https://issues.apache.org/jira/browse/LUCENE-6738
 Project: Lucene - Core
  Issue Type: Task
Reporter: Christine Poerschke
Assignee: Christine Poerschke
Priority: Minor

 github pull request with proposed code change to follow. see also LUCENE-6571 
 (which concerns Javadoc errors/warnings) for context.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-19 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703659#comment-14703659
 ] 

Michael McCandless commented on LUCENE-6699:


[~daddywri] OK I tried to apply your last patch, skipping dup parts from the 
previous already committed patch, and committed it.  Can you svn up and make 
sure it's right?  Thanks.

 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion about how we
 could stuff all 3 into a single long value with acceptable precision
 loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
 to do the fast per-hit query time filtering.
 But, second: what do we index into the BKD tree?  Can we just index
 earth surface lat/lon, and then at query time is spatial3d able to
 give me an enclosing surface lat/lon bbox for a 3d shape?  Or
 ... must we index all 3 dimensions into the BKD tree (seems like this
 could be somewhat wasteful)?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703658#comment-14703658
 ] 

ASF subversion and git services commented on LUCENE-6699:
-

Commit 1696659 from [~mikemccand] in branch 'dev/branches/lucene6699'
[ https://svn.apache.org/r1696659 ]

LUCENE-6699: increase MINIMUM_RESOLUTION

 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion about how we
 could stuff all 3 into a single long value with acceptable precision
 loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
 to do the fast per-hit query time filtering.
 But, second: what do we index into the BKD tree?  Can we just index
 earth surface lat/lon, and then at query time is spatial3d able to
 give me an enclosing surface lat/lon bbox for a 3d shape?  Or
 ... must we index all 3 dimensions into the BKD tree (seems like this
 could be somewhat wasteful)?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6738) remove IndexWriterConfig.[gs]etIndexingChain

2015-08-19 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated LUCENE-6738:

Fix Version/s: 5.4
   Trunk

 remove IndexWriterConfig.[gs]etIndexingChain
 

 Key: LUCENE-6738
 URL: https://issues.apache.org/jira/browse/LUCENE-6738
 Project: Lucene - Core
  Issue Type: Task
Reporter: Christine Poerschke
Assignee: Christine Poerschke
Priority: Minor
 Fix For: Trunk, 5.4


 github pull request with proposed code change to follow. see also LUCENE-6571 
 (which concerns Javadoc errors/warnings) for context.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-19 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703543#comment-14703543
 ] 

Karl Wright commented on LUCENE-6699:
-

Hmm, I couldn't reproduce this with a simple test.
Here's the failure detail:
{code}
   [junit4]   2> java.lang.AssertionError:
   Solid=XYZSolid: {planetmodel=PlanetModel.SPHERE, isWholeWorld=false, 
minXplane=[A=1.0, B=0.0, C=0.0, D=-0.778774751769, side=1.0], 
maxXplane=[A=1.0, B=0.0, C=0.0, D=-0.780900134368, side=-1.0], 
minYplane=[A=0.0, B=1.0, C=0.0, D=0.002943435994670142, side=1.0], 
maxYplane=[A=0.0, B=1.0, C=0.0, D=0.0029114063562165494, side=-1.0], 
minZplane=[A=0.0, B=0.0, C=1.0, D=0.005971010432932473, side=1.0], 
maxZplane=[A=0.0, B=0.0, C=1.0, D=0.005938981247250581, side=-1.0]};
   Shape=GeoCircle: {planetmodel=PlanetModel.SPHERE, 
center=[X=0.779838725235, Y=-0.0029274211758186968, 
Z=-0.0059549958440800015], radius=1.601488279374338E-5(9.175851934781766E-4)}
{code}

Here's the test code I created that passes:

{code}
c = new GeoCircle(PlanetModel.SPHERE, -0.00595503104063, -0.00292747726474, 
1.601488279374338E-5);
xyzb = new XYZBounds();
c.getBounds(xyzb);
GeoArea area = GeoAreaFactory.makeGeoArea(PlanetModel.SPHERE,
  xyzb.getMinimumX(), xyzb.getMaximumX(), xyzb.getMinimumY(), 
xyzb.getMaximumY(), xyzb.getMinimumZ(), xyzb.getMaximumZ());

int relationship = area.getRelationship(c);
assertTrue(relationship == GeoArea.WITHIN || relationship == 
GeoArea.OVERLAPS);
{code}


 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion about how we
 could stuff all 3 into a single long value with acceptable precision
 loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
 to do the fast per-hit query time filtering.
 But, second: what do we index into the BKD tree?  Can we just index
 earth surface lat/lon, and then at query time is spatial3d able to
 give me an enclosing surface lat/lon bbox for a 3d shape?  Or
 ... must we index all 3 dimensions into the BKD tree (seems like this
 could be somewhat wasteful)?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-7007) DistributedUpdateProcessor logs replay status as int instead of boolean

2015-08-19 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke reassigned SOLR-7007:
-

Assignee: Christine Poerschke

 DistributedUpdateProcessor logs replay status as int instead of boolean
 ---

 Key: SOLR-7007
 URL: https://issues.apache.org/jira/browse/SOLR-7007
 Project: Solr
  Issue Type: Improvement
  Components: replication (java)
Affects Versions: 4.10.3
Reporter: Mike Drob
Assignee: Christine Poerschke
Priority: Trivial
  Labels: logging
 Attachments: SOLR-7007.patch


 When logging the following line:
 {code:title=DistributedUpdateProcessor.java}
 log.info("Ignoring commit while not ACTIVE - state: " + 
 ulog.getState() + " replay:" + (cmd.getFlags() & UpdateCommand.REPLAY));
 {code}
 We display the value of the replay flag as an int instead of a boolean. This 
 can erroneously lead operators to believe that it is a counter instead of a 
 flag when all they see is the log.
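The fix amounts to masking and comparing rather than printing the raw masked int; a sketch of both forms, with the flag value assumed rather than Solr's actual UpdateCommand.REPLAY constant:

```java
// Sketch of the int-vs-boolean logging pitfall. REPLAY is an assumed bit
// flag standing in for UpdateCommand.REPLAY.
public class ReplayFlagDemo {
    static final int REPLAY = 0x2;

    static String confusing(int flags) {             // what the current code logs
        return "replay:" + (flags & REPLAY);          // renders the masked int
    }
    static String clear(int flags) {                  // proposed: log a boolean
        return "replay:" + ((flags & REPLAY) != 0);   // renders true/false
    }

    public static void main(String[] args) {
        System.out.println(confusing(REPLAY)); // replay:2
        System.out.println(clear(REPLAY));     // replay:true
    }
}
```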



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7560) Parallel SQL Support

2015-08-19 Thread Susheel Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703667#comment-14703667
 ] 

Susheel Kumar commented on SOLR-7560:
-

Hi,

I can help to test the Parallel SQL Support feature, which is very useful for 
analytical purposes. Could I get some info on setting up the SQLHandler and 
some instructions to get started?

Thanks,
Susheel

 Parallel SQL Support
 

 Key: SOLR-7560
 URL: https://issues.apache.org/jira/browse/SOLR-7560
 Project: Solr
  Issue Type: New Feature
  Components: clients - java, search
Reporter: Joel Bernstein
 Fix For: Trunk

 Attachments: SOLR-7560.calcite.patch, SOLR-7560.patch, 
 SOLR-7560.patch, SOLR-7560.patch, SOLR-7560.patch


 This ticket provides support for executing *Parallel SQL* queries across 
 SolrCloud collections. The SQL engine will be built on top of the Streaming 
 API (SOLR-7082), which provides support for *parallel relational algebra* and 
 *real-time map-reduce*.
 Basic design:
 1) A new SQLHandler will be added to process SQL requests. The SQL statements 
 will be compiled to live Streaming API objects for parallel execution across 
 SolrCloud worker nodes.
 2) SolrCloud collections will be abstracted as *Relational Tables*. 
 3) The Presto SQL parser will be used to parse the SQL statements.
 4) A JDBC thin client will be added as a Solrj client.
 This ticket will focus on putting the framework in place and providing basic 
 SELECT support and GROUP BY aggregate support.
 Future releases will build on this framework to provide additional SQL 
 features.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6571) Javadoc error when run in private access level

2015-08-19 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703726#comment-14703726
 ] 

Christine Poerschke commented on LUCENE-6571:
-

Removing setIndexingChain via LUCENE-6738 removed the 
LiveIndexWriterConfig.java:386 warning.

The FST.java warnings remain
{code}
FST.java:119: warning - Tag @see: can't find shouldExpand(Builder, 
UnCompiledNode) in org.apache.lucene.util.fst.FST
FST.java:114: warning - Tag @see: can't find shouldExpand(Builder, 
UnCompiledNode) in org.apache.lucene.util.fst.FST
FST.java:109: warning - Tag @see: can't find shouldExpand(Builder, 
UnCompiledNode) in org.apache.lucene.util.fst.FST
{code}
but
{code}
package org.apache.lucene.util.fst;
...
-import org.apache.lucene.util.fst.Builder.UnCompiledNode;
...
-   * @see #shouldExpand(Builder, UnCompiledNode)
+   * @see #shouldExpand(Builder, Builder.UnCompiledNode)
...
* @see Builder.UnCompiledNode#depth
*/
-  private boolean shouldExpand(Builder<T> builder, UnCompiledNode<T> node) {
+  private boolean shouldExpand(Builder<T> builder, Builder.UnCompiledNode<T> 
node) {
...
{code}
changes would make the warning go away.

 Javadoc error when run in private access level
 --

 Key: LUCENE-6571
 URL: https://issues.apache.org/jira/browse/LUCENE-6571
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 5.2
Reporter: Cao Manh Dat
Assignee: Christine Poerschke
Priority: Trivial
 Attachments: LUCENE-6571.patch


 Javadoc error when run in private access level.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7560) Parallel SQL Support

2015-08-19 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703784#comment-14703784
 ] 

Joel Bernstein commented on SOLR-7560:
--

I will spend some time working on the documentation. So far the docs mostly 
cover how to form queries.

The link to docs is here: 
https://cwiki.apache.org/confluence/display/solr/Parallel+SQL+Interface

I'll post back to this ticket when I've added the sections on sending the 
query, configuration and the architecture.

If you want to get a jump on things the sample configs in trunk have the 
request handlers already setup. You can take a look at the test cases to see 
how to send a SQL query:

https://svn.apache.org/repos/asf/lucene/dev/trunk/solr/core/src/test/org/apache/solr/handler/TestSQLHandler.java
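Until those sections land, the shape of a request can be sketched as below; the handler path and the "stmt" parameter name are assumptions here, and TestSQLHandler is the authoritative reference for the actual contract:

```java
// Hypothetical sketch of forming a Parallel SQL request URL. The "/sql"
// handler path and "stmt" parameter are assumed; verify against
// TestSQLHandler and the sample configs in trunk before relying on them.
import java.net.URLEncoder;

public class SqlRequestSketch {
    static String buildUrl(String baseUrl, String collection, String sql)
            throws Exception {
        // URL-encode the statement so spaces and operators survive transport.
        return baseUrl + "/" + collection + "/sql?stmt="
                + URLEncoder.encode(sql, "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        String url = buildUrl("http://localhost:8983/solr", "collection1",
                "select id, field_i from collection1 where field_i > 10");
        System.out.println(url);
    }
}
```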






 Parallel SQL Support
 

 Key: SOLR-7560
 URL: https://issues.apache.org/jira/browse/SOLR-7560
 Project: Solr
  Issue Type: New Feature
  Components: clients - java, search
Reporter: Joel Bernstein
 Fix For: Trunk

 Attachments: SOLR-7560.calcite.patch, SOLR-7560.patch, 
 SOLR-7560.patch, SOLR-7560.patch, SOLR-7560.patch


 This ticket provides support for executing *Parallel SQL* queries across 
 SolrCloud collections. The SQL engine will be built on top of the Streaming 
 API (SOLR-7082), which provides support for *parallel relational algebra* and 
 *real-time map-reduce*.
 Basic design:
 1) A new SQLHandler will be added to process SQL requests. The SQL statements 
 will be compiled to live Streaming API objects for parallel execution across 
 SolrCloud worker nodes.
 2) SolrCloud collections will be abstracted as *Relational Tables*. 
 3) The Presto SQL parser will be used to parse the SQL statements.
 4) A JDBC thin client will be added as a Solrj client.
 This ticket will focus on putting the framework in place and providing basic 
 SELECT support and GROUP BY aggregate support.
 Future releases will build on this framework to provide additional SQL 
 features.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-1975) SolrJ API does not include streaming uploaded data from Java Writer or OutputStream

2015-08-19 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-1975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl closed SOLR-1975.
-
Resolution: Not A Problem

Closing after offline discussion with Lance. Please reopen if anyone really 
needs this :)

 SolrJ API does not include streaming uploaded data from Java Writer or 
 OutputStream
 ---

 Key: SOLR-1975
 URL: https://issues.apache.org/jira/browse/SOLR-1975
 Project: Solr
  Issue Type: Improvement
  Components: clients - java
Reporter: Lance Norskog
Priority: Minor

 The SolrJ API does not include the ability to upload data from a 
 java.io.Writer or java.io.OutputStream. To do this requires implementing the 
 ContentStream interface. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7746) Ping requests stopped working with distrib=true in Solr 5.2.1

2015-08-19 Thread Michael Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703885#comment-14703885
 ] 

Michael Sun commented on SOLR-7746:
---

Yeah, that makes sense. Thanks [~gchanan] for suggestion.


 Ping requests stopped working with distrib=true in Solr 5.2.1
 -

 Key: SOLR-7746
 URL: https://issues.apache.org/jira/browse/SOLR-7746
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 5.2.1
Reporter: Alexey Serba
 Attachments: SOLR-7746.patch


 {noformat:title=steps to reproduce}
 # start 1 node SolrCloud cluster
 sh ./bin/solr -c -p 
 # create a test collection (we won't use it, but I just want it to load 
 solr configs to Zk)
 ./bin/solr create_collection -c test -d sample_techproducts_configs -p 
 # create another test collection with 2 shards
 curl 
 'http://localhost:/solr/admin/collections?action=CREATE&name=test2&numShards=2&replicationFactor=1&maxShardsPerNode=2&collection.configName=test'
 # try distrib ping request
 curl 
 'http://localhost:/solr/test2/admin/ping?wt=json&distrib=true&indent=true'
 ...
   "error":{
 "msg":"Ping query caused exception: Error from server at 
 http://192.168.59.3:/solr/test2_shard2_replica1: Cannot execute the 
 PingRequestHandler recursively"
 ...
 {noformat}
 {noformat:title=Exception}
 2116962 [qtp599601600-13] ERROR org.apache.solr.core.SolrCore  [test2 shard2 
 core_node1 test2_shard2_replica1] – org.apache.solr.common.SolrException: 
 Cannot execute the PingRequestHandler recursively
   at 
 org.apache.solr.handler.PingRequestHandler.handlePing(PingRequestHandler.java:246)
   at 
 org.apache.solr.handler.PingRequestHandler.handleRequestBody(PingRequestHandler.java:211)
   at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064)
   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-19 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703903#comment-14703903
 ] 

Karl Wright commented on LUCENE-6699:
-

Ok, I know what is going on, and it is indeed related to the WGS84 model.  But 
I have to think this through carefully.  The strategy used to compute all three 
bounds in XYZBounds, and the latitude bound in LatLonBounds, is subtly flawed, 
I think.  Working on this now.

 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion about how we
 could stuff all 3 into a single long value with acceptable precision
 loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
 to do the fast per-hit query time filtering.
 But, second: what do we index into the BKD tree?  Can we just index
 earth surface lat/lon, and then at query time is spatial3d able to
 give me an enclosing surface lat/lon bbox for a 3d shape?  Or
 ... must we index all 3 dimensions into the BKD tree (seems like this
 could be somewhat wasteful)?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-19 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703980#comment-14703980
 ] 

Karl Wright commented on LUCENE-6699:
-

In hopes of clarifying my own thinking, I'm going to lay down what the problem 
is.

For the Z bound, pretend that you are looking from the north (or south) pole 
onto the earth.  There's a plane whose intersection with the earth you are 
trying to compute the Z bounds for.  From the pole, the earth, no matter how 
oblate, is a circle in cross-section.  The plane intersects part of that 
circle.  And here's the important point: if you construct another plane that is 
perpendicular to the original plane, which also includes the Z axis, that plane 
must pass directly through the points that have the greatest Z extent for the 
intersection of the original plane and the earth.  Got that? ;-)

Anyway, for the X and Y bounds, I basically just copied that code and instead 
pretended I was looking up the X axis and up the Y axis, instead of the Z axis. 
 But if you have an oblate earth, then the cross section from either the X or 
the Y axis is not a circle, but rather an ellipse.  So a plane that is 
perpendicular to the original plane passing through (say) the X axis will NOT 
go through the points that have the greatest X extent for the intersection of 
the original plane and the earth.  The plane we want to use instead is parallel 
to that one, but offset by some amount, which I don't yet know how to compute.  
And that, in a nutshell, is the problem.
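For the spherical case the comment says works, the claim can be checked numerically: the intersection of a plane with unit normal (A,B,C) at offset D with the unit sphere is a circle of center -D·(A,B,C) and radius sqrt(1-D²), and its maximum Z is -D·C + sqrt(1-D²)·sqrt(1-C²). A sketch under those assumptions (names illustrative, not geo3d code; the oblate case described above has no such closed form here):

```java
// Numerically verify the sphere-case Z bound: sample the intersection circle
// of plane (A,B,C)·p = -D with the unit sphere and compare the largest
// sampled Z against the closed form -D*C + sqrt(1-D^2)*sqrt(1-C^2).
// Requires |(A,B,C)| = 1, |D| < 1, and the normal not parallel to the Z axis.
public class ZBoundSphere {
    static double maxZ(double C, double D) {
        return -D * C + Math.sqrt(1 - D * D) * Math.sqrt(1 - C * C);
    }

    static double sampledMaxZ(double A, double B, double C, double D, int n) {
        double r = Math.sqrt(1 - D * D);              // circle radius
        double un = Math.hypot(A, B);
        double ux = -B / un, uy = A / un;             // u = (-B,A,0) normalized, u_z = 0
        double vz = A * uy - B * ux;                  // z-component of v = n x u
        double best = -2;
        for (int i = 0; i < n; i++) {
            double t = 2 * Math.PI * i / n;
            // point = center + r*(cos t * u + sin t * v); only v contributes to Z
            best = Math.max(best, -D * C + r * Math.sin(t) * vz);
        }
        return best;
    }

    public static void main(String[] args) {
        double A = 0.6, B = 0.0, C = 0.8, D = 0.3;    // unit normal (0.6, 0, 0.8)
        System.out.println(maxZ(C, D) + " vs " + sampledMaxZ(A, B, C, D, 100000));
    }
}
```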

 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion about how we
 could stuff all 3 into a single long value with acceptable precision
 loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
 to do the fast per-hit query time filtering.
 But, second: what do we index into the BKD tree?  Can we just index
 earth surface lat/lon, and then at query time is spatial3d able to
 give me an enclosing surface lat/lon bbox for a 3d shape?  Or
 ... must we index all 3 dimensions into the BKD tree (seems like this
 could be somewhat wasteful)?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-19 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703979#comment-14703979
 ] 

Karl Wright commented on LUCENE-6699:
-

In hopes of clarifying my own thinking, I'm going to lay down what the problem 
is.

For the Z bound, pretend that you are looking from the north (or south) pole 
onto the earth.  There's a plane whose intersection with the earth you are 
trying to compute the Z bounds for.  From the pole, the earth, no matter how 
oblate, is a circle in cross-section.  The plane intersects part of that 
circle.  And here's the important point: if you construct another plane that is 
perpendicular to the original plane, which also includes the Z axis, that plane 
must pass directly through the points that have the greatest Z extent for the 
intersection of the original plane and the earth.  Got that? ;-)

Anyway, for the X and Y bounds, I basically just copied that code and 
pretended I was looking down the X axis and the Y axis instead of the Z axis. 
 But if you have an oblate earth, then the cross section from either the X or 
the Y axis is not a circle, but rather an ellipse.  So a plane that is 
perpendicular to the original plane passing through (say) the X axis will NOT 
go through the points that have the greatest X extent for the intersection of 
the original plane and the earth.  The plane we want to use instead is parallel 
to that one, but offset by some amount, which I don't yet know how to compute.  
And that, in a nutshell, is the problem.
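Karl's argument can be checked numerically. The sketch below is standalone Java, not Lucene code; the plane normal (1,1,1)/sqrt(3), the offset d = 0.3, and the polar ratio c are made-up illustration values. It samples the intersection curve of a plane with the ellipsoid x^2 + y^2 + z^2/c^2 = 1, finds the sampled max-X point, and measures how far that point sits from the plane through the X axis perpendicular to the cutting plane. For a sphere (c = 1) the residual is only sampling noise; for an oblate model it is clearly nonzero, which is exactly the flaw described above.

```java
// Standalone numerical check (plain Java, no Lucene dependencies).
// Plane normal and offset are arbitrary illustration values; c is the
// polar scale of the ellipsoid x^2 + y^2 + z^2/c^2 = 1.
public class ObliqueBoundsCheck {

  static final double NX = 1 / Math.sqrt(3), NY = NX, NZ = NX; // cutting plane normal
  static final double D = 0.3;                                 // plane: n . p = D

  /**
   * Samples the plane/ellipsoid intersection curve, finds the sampled
   * max-X point, and returns |-NZ*y + NY*z| there: how far that point is
   * from the plane through the X axis perpendicular to the cutting plane.
   */
  static double maxXResidual(double c) {
    double[] u = {1 / Math.sqrt(2), -1 / Math.sqrt(2), 0};                // in-plane basis
    double[] v = {1 / Math.sqrt(6), 1 / Math.sqrt(6), -2 / Math.sqrt(6)}; // n x u
    double[] p0 = {D * NX, D * NY, D * NZ};  // point of the plane closest to origin
    double bestX = Double.NEGATIVE_INFINITY, bestY = 0, bestZ = 0;
    int samples = 100_000;
    for (int i = 0; i < samples; i++) {
      double phi = 2 * Math.PI * i / samples;
      double[] w = new double[3];
      for (int k = 0; k < 3; k++) {
        w[k] = Math.cos(phi) * u[k] + Math.sin(phi) * v[k];
      }
      // Points p0 + r*w lie on the ellipsoid when a*r^2 + b*r + e = 0.
      double a = w[0] * w[0] + w[1] * w[1] + w[2] * w[2] / (c * c);
      double b = 2 * (p0[0] * w[0] + p0[1] * w[1] + p0[2] * w[2] / (c * c));
      double e = p0[0] * p0[0] + p0[1] * p0[1] + p0[2] * p0[2] / (c * c) - 1;
      double disc = b * b - 4 * a * e;
      if (disc < 0) continue;
      for (int s = -1; s <= 1; s += 2) {
        double r = (-b + s * Math.sqrt(disc)) / (2 * a);
        double x = p0[0] + r * w[0];
        if (x > bestX) {
          bestX = x;
          bestY = p0[1] + r * w[1];
          bestZ = p0[2] + r * w[2];
        }
      }
    }
    return Math.abs(-NZ * bestY + NY * bestZ);
  }

  public static void main(String[] args) {
    System.out.println("sphere (c=1)      residual: " + maxXResidual(1.0));    // sampling noise only
    System.out.println("oblate (c=0.9966) residual: " + maxXResidual(0.9966)); // clearly nonzero
  }
}
```

A Lagrange-multiplier check confirms it: on a sphere the max-X point satisfies -nz*y + ny*z = 0 exactly, while on the ellipsoid the residual is proportional to ny*nz*(1 - c^2), i.e. the perpendicular plane is off by a parallel offset that vanishes only when c = 1.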

 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion about how we
 could stuff all 3 into a single long value with acceptable precision
 loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
 to do the fast per-hit query time filtering.
 But, second: what do we index into the BKD tree?  Can we just index
 earth surface lat/lon, and then at query time is spatial3d able to
 give me an enclosing surface lat/lon bbox for a 3d shape?  Or
 ... must we index all 3 dimensions into the BKD tree (seems like this
 could be somewhat wasteful)?






[jira] [Comment Edited] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-19 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703903#comment-14703903
 ] 

Karl Wright edited comment on LUCENE-6699 at 8/19/15 11:34 PM:
---

Ok, I know what is going on, and it is indeed related to the WGS84 model.  But 
I have to think this through carefully.  The strategy used to compute the X and 
Y bounds in XYZBound is subtly flawed.  Working on this now.


was (Author: kwri...@metacarta.com):
Ok, I know what is going on, and it is indeed related to the WGS84 model.  But 
I have to think this through carefully.  The strategy used to compute all three 
bounds in XYZBound, and the latitude bound in LatLonBounds, is subtly flawed, 
I think.  Working on this now.







[JENKINS] Lucene-Solr-NightlyTests-5.3 - Build # 10 - Still Failing

2015-08-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.3/10/

4 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CollectionsAPIDistributedZkTest

Error Message:
Java heap space

Stack Trace:
java.lang.OutOfMemoryError: Java heap space


REGRESSION:  org.apache.solr.cloud.RollingRestartTest.test

Error Message:
Unable to restart (#0): CloudJettyRunner 
[url=http://127.0.0.1:41463/collection1]

Stack Trace:
java.lang.AssertionError: Unable to restart (#0): CloudJettyRunner 
[url=http://127.0.0.1:41463/collection1]
at 
__randomizedtesting.SeedInfo.seed([19C2D7FD50A4B9B2:9196E827FE58D44A]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.RollingRestartTest.restartWithRolesTest(RollingRestartTest.java:104)
at 
org.apache.solr.cloud.RollingRestartTest.test(RollingRestartTest.java:53)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 

[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-19 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703736#comment-14703736
 ] 

Michael McCandless commented on LUCENE-6699:


OK I will put the fudge factor back and put that explanation on top!
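For reference, a minimal sketch of what such a fudge factor amounts to (hypothetical class and field names; the real bounds/solid classes live in spatial3d and are not reproduced here): widen each computed bound by a constant safely above the worst-case packing error discussed in this thread before testing containment.

```java
// Hedged sketch, not the actual Lucene API: an axis-aligned bounds box
// widened by a fudge factor so that points displaced by quantization
// (packing/unpacking) still test as contained.
public class FudgedBounds {
  // Must comfortably exceed the worst-case packing error (1e-7 scale
  // per this thread); the exact value is an assumption here.
  static final double FUDGE = 1e-6;

  final double minX, maxX, minY, maxY, minZ, maxZ;

  FudgedBounds(double minX, double maxX,
               double minY, double maxY,
               double minZ, double maxZ) {
    this.minX = minX - FUDGE; this.maxX = maxX + FUDGE;
    this.minY = minY - FUDGE; this.maxY = maxY + FUDGE;
    this.minZ = minZ - FUDGE; this.maxZ = maxZ + FUDGE;
  }

  boolean contains(double x, double y, double z) {
    return x >= minX && x <= maxX
        && y >= minY && y <= maxY
        && z >= minZ && z <= maxZ;
  }
}
```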







[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-19 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14703847#comment-14703847
 ] 

Karl Wright commented on LUCENE-6699:
-

So, there are two problems.  One I just fixed, which was that pointOnSurface() 
was being too strict about accuracy.  But:

{code}
// p1 built from (lat, lon) on the WGS84 planet model
p1 = new GeoPoint(PlanetModel.WGS84, 0.006224927111830945, 0.005597367237251763);
// p2 built from the corresponding unpacked (x, y, z) coordinates
p2 = new GeoPoint(1.0010836083810235, 0.005603490759433942, 0.006231850560862502);
assertTrue(PlanetModel.WGS84.pointOnSurface(p1));
assertTrue(PlanetModel.WGS84.pointOnSurface(p2)); // this fails
{code}

... so the packing accuracy is not high enough to guarantee that an XYZSolid 
constructed from the bounds of your shape will contain the unpacked version of 
a point the shape contains.  That's problem 1, and it argues for adding some 
fudge factor to the bounds to account for the lack of full accuracy in the 
points.  More about that later.
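To get a feel for the size of that error, here is a hedged sketch, not the actual Geo3DPacking code: the 21-bits-per-dimension layout and the MAX coordinate bound are assumptions. It packs three coordinates into one long and measures the round-trip error on the x value from the failing test above.

```java
// Illustrative only: quantize x, y, z to 21 bits each (3 * 21 = 63 bits)
// and pack them into a single long.  The layout and MAX bound are
// assumptions, not the real Geo3DPacking scheme.
public class PackingSketch {
  static final int BITS = 21;
  static final long MASK = (1L << BITS) - 1;
  // Assumed bound on |x|, |y|, |z| in units of mean earth radius; must
  // cover the WGS84 ellipsoid (equatorial x, y slightly exceed 1.0).
  static final double MAX = 1.0017;

  static long encode(double v) {
    return Math.round((v + MAX) / (2 * MAX) * MASK);
  }

  static double decode(long bits) {
    return (bits & MASK) * (2 * MAX) / MASK - MAX;
  }

  static long pack(double x, double y, double z) {
    return encode(x) << (2 * BITS) | encode(y) << BITS | encode(z);
  }

  public static void main(String[] args) {
    // Coordinates of p2 from the failing test above
    double x = 1.0010836083810235, y = 0.005603490759433942, z = 0.006231850560862502;
    long packed = pack(x, y, z);
    // Worst case per axis is half a quantization step:
    // (2 * MAX / 2^21) / 2, roughly 4.8e-7 -- far coarser than
    // pointOnSurface()'s tolerance.
    System.out.println("x round-trip error = " + Math.abs(decode(packed >>> (2 * BITS)) - x));
  }
}
```

With 21 bits per dimension the quantization step is about 9.6e-7, so any unpacked coordinate can be off by up to roughly 4.8e-7, which matches the 1e-7-scale discrepancies seen in this thread.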

Problem 2 is that even p1 above is not within the XYZSolid object computed from 
the bounds; in fact it's pretty far out (1e-7 or more).  I have to drill 
further into why that is; it's likely related to the WGS84 model, but I don't 
know how yet.  Once again, stay tuned.







