Exact search on Field Query

2016-03-22 Thread SHANKAR REDDY
Team,
I have the below requirement using Solr search with fq.

We have table A with column COL1, which has the below values.

Table A
Col1

SANKARA REDDY TELUKUTLA
SANKARA TELUKUTLA
SANKARA

The requirement is that when we search on COL1 using fq for SANKARA, I would
like to get only the exact match SANKARA, not the other records.

At this point, I see that fq=COL1='SANKARA' returns all the records whose
COL1 starts with SANKARA, i.e. all of the records above. But I want to get
only SANKARA (an exact match).

Please let me know how I can achieve this.
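
For reference, a minimal sketch of the usual fix, assuming COL1 is currently a
tokenized text field (which is why every record containing the token SANKARA
matches) and that reindexing is possible; the COL1_str field name is
illustrative:

    <!-- schema.xml: an untokenized copy of the field -->
    <field name="COL1_str" type="string" indexed="true" stored="true"/>
    <copyField source="COL1" dest="COL1_str"/>

Then filter on the string field:

    fq=COL1_str:"SANKARA"

A string field indexes the whole value as a single term, so this matches only
the record whose entire value is SANKARA.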


Kind Regards,
Sankara Telukutla
MDM / Big Data Solution Architect


Re: Solr cloud replication factor

2016-03-22 Thread Shawn Heisey
On 3/22/2016 9:28 PM, Raveendra Yerraguntla wrote:
>> I am using Solr 5.4 in SolrCloud mode in an 8-node cluster. I used a

I didn't notice when I replied before that the message was on the dev
list.  This mailing list is for discussions about development of Lucene
and Solr.

For user questions about Solr, please post on the solr-user mailing list.

http://lucene.apache.org/solr/resources.html#mailing-lists

Thanks,
Shawn





Re: Solr cloud replication factor

2016-03-22 Thread Shawn Heisey
On 3/22/2016 9:28 PM, Raveendra Yerraguntla wrote:
> I am using Solr 5.4 in SolrCloud mode in an 8-node cluster. I used a
> replication factor of 1 when creating the index, then switched to a
> replication factor > 1 for redundancy, and tried to do incremental
> indexing. When the incremental indexing happens, I get a stack trace
> with the root cause pointing to write.lock not being available. Further
> analysis found that there is only one write.lock across all shards
> (leader and replicas). 

Unless you use the HDFS Directory implementation in Solr, the *only*
time replicationFactor has *any* effect is when you first create your
collection.  After that, it has *zero* effect -- unless you are using
HDFS and have configured it in a particular way.

To achieve redundancy with the normal Directory implementation (usually
NRTCachingDirectoryFactory), you will need to either create the
collection with a replicationFactor higher than 1, or you will need to
use the ADDREPLICA action on the Collections API to create more replicas
of your shards.
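
For example, a sketch of an ADDREPLICA call (the collection, shard, and node
values here are placeholders):

    http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=mycollection&shard=shard1&node=192.168.1.5:8983_solr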

> But with a replication factor of 1, I could see write.lock across all
> nodes.
>
> Is this the expected behavior (one write.lock) in SolrCloud with
> replication factor > 1? If so, how can the indexing be done (even
> though it is slow) with distributed and redundant shards?

There are three major causes of write.lock problems:
1) Solr is crashing and leaving the write.lock file behind.
2) You are trying to share an index directory between more than one core
or Solr instance.
3) You are trying to run with your index data on a network filesystem
like NFS.
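
For causes 2 and 3, it can help to check which lock type is in effect.  A
sketch of the relevant solrconfig.xml setting, assuming a stock config
("native" is the usual default):

    <indexConfig>
      <lockType>${solr.lock.type:native}</lockType>
    </indexConfig>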

Thanks,
Shawn





[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_72) - Build # 16301 - Still Failing!

2016-03-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16301/
Java: 64bit/jdk1.8.0_72 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestCloudPivotFacet.test

Error Message:
Captured an uncaught exception in thread: Thread[id=2934, name=SessionTracker, 
state=RUNNABLE, group=TGRP-TestCloudPivotFacet]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=2934, name=SessionTracker, state=RUNNABLE, 
group=TGRP-TestCloudPivotFacet]
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded




Build Log:
[...truncated 11373 lines...]
   [junit4] Suite: org.apache.solr.cloud.TestCloudPivotFacet
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.TestCloudPivotFacet_39D34BE0B4D5-001/init-core-data-001
   [junit4]   2> 521843 INFO  
(SUITE-TestCloudPivotFacet-seed#[39D34BE0B4D5]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 521844 INFO  
(SUITE-TestCloudPivotFacet-seed#[39D34BE0B4D5]-worker) [] 
o.a.s.c.TestCloudPivotFacet init'ing useFieldRandomizedFactor = 10
   [junit4]   2> 521846 INFO  
(TEST-TestCloudPivotFacet.test-seed#[39D34BE0B4D5]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 521846 INFO  (Thread-1062) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 521847 INFO  (Thread-1062) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 521946 INFO  
(TEST-TestCloudPivotFacet.test-seed#[39D34BE0B4D5]) [] 
o.a.s.c.ZkTestServer start zk server on port:33288
   [junit4]   2> 521946 INFO  
(TEST-TestCloudPivotFacet.test-seed#[39D34BE0B4D5]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 521947 INFO  
(TEST-TestCloudPivotFacet.test-seed#[39D34BE0B4D5]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 521949 INFO  (zkCallback-426-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@b5cdf26 name:ZooKeeperConnection 
Watcher:127.0.0.1:33288 got event WatchedEvent state:SyncConnected type:None 
path:null path:null type:None
   [junit4]   2> 521949 INFO  
(TEST-TestCloudPivotFacet.test-seed#[39D34BE0B4D5]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 521949 INFO  
(TEST-TestCloudPivotFacet.test-seed#[39D34BE0B4D5]) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2> 521949 INFO  
(TEST-TestCloudPivotFacet.test-seed#[39D34BE0B4D5]) [] 
o.a.s.c.c.SolrZkClient makePath: /solr
   [junit4]   2> 521950 INFO  
(TEST-TestCloudPivotFacet.test-seed#[39D34BE0B4D5]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 521950 INFO  
(TEST-TestCloudPivotFacet.test-seed#[39D34BE0B4D5]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 521951 INFO  (zkCallback-427-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@3674acae 
name:ZooKeeperConnection Watcher:127.0.0.1:33288/solr got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 521951 INFO  
(TEST-TestCloudPivotFacet.test-seed#[39D34BE0B4D5]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 521951 INFO  
(TEST-TestCloudPivotFacet.test-seed#[39D34BE0B4D5]) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2> 521951 INFO  
(TEST-TestCloudPivotFacet.test-seed#[39D34BE0B4D5]) [] 
o.a.s.c.c.SolrZkClient makePath: /collections/collection1
   [junit4]   2> 521952 INFO  
(TEST-TestCloudPivotFacet.test-seed#[39D34BE0B4D5]) [] 
o.a.s.c.c.SolrZkClient makePath: /collections/collection1/shards
   [junit4]   2> 521952 INFO  
(TEST-TestCloudPivotFacet.test-seed#[39D34BE0B4D5]) [] 
o.a.s.c.c.SolrZkClient makePath: /collections/control_collection
   [junit4]   2> 521953 INFO  
(TEST-TestCloudPivotFacet.test-seed#[39D34BE0B4D5]) [] 
o.a.s.c.c.SolrZkClient makePath: /collections/control_collection/shards
   [junit4]   2> 521953 INFO  
(TEST-TestCloudPivotFacet.test-seed#[39D34BE0B4D5]) [] 
o.a.s.c.AbstractZkTestCase put 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml
 to /configs/conf1/solrconfig.xml
   [junit4]   2> 521953 INFO  
(TEST-TestCloudPivotFacet.test-seed#[39D34BE0B4D5]) [] 
o.a.s.c.c.SolrZkClient makePath: /configs/conf1/solrconfig.xml
   [junit4]   2> 521954 INFO  
(TEST-TestCloudPivotFacet.test-seed#[39D34BE0B4D5]) [] 
o.a.s.c.AbstractZkTestCase put 

[jira] [Commented] (SOLR-8856) Do not cache merge or 'read once' contexts in the hdfs block cache.

2016-03-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207828#comment-15207828
 ] 

ASF subversion and git services commented on SOLR-8856:
---

Commit 574da7667f571e0c9e0527b14e9dec14415200f6 in lucene-solr's branch 
refs/heads/master from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=574da76 ]

SOLR-8856: Do not cache merge or 'read once' contexts in the hdfs block cache.


> Do not cache merge or 'read once' contexts in the hdfs block cache.
> ---
>
> Key: SOLR-8856
> URL: https://issues.apache.org/jira/browse/SOLR-8856
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8856.patch, SOLR-8856.patch, SOLR-8856.patch
>
>
> Generally the block cache will not be large enough to contain the whole index 
> and merges can thrash the cache for queries. Even if we still look in the 
> cache, we should not populate it.
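
A minimal sketch of the read-without-populate idea (illustrative Java only, not
the actual BlockDirectory code; the class and method names are made up):

{code}
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Illustrative only -- not the actual Solr HDFS block cache. */
class ReadOnceAwareBlockCache {
  private final Map<Long, byte[]> cache = new ConcurrentHashMap<>();

  byte[] readBlock(long blockId, boolean mergeOrReadOnce) throws IOException {
    byte[] hit = cache.get(blockId);   // still look in the cache: hits are free
    if (hit != null) {
      return hit;
    }
    byte[] block = readFromStorage(blockId);
    if (!mergeOrReadOnce) {
      cache.put(blockId, block);       // only populate for reusable reads
    }
    return block;
  }

  private byte[] readFromStorage(long blockId) throws IOException {
    return new byte[8192];             // stand-in for the real HDFS read
  }
}
{code}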






Solr cloud replication factor

2016-03-22 Thread Raveendra Yerraguntla
All,

I am using Solr 5.4 in SolrCloud mode in an 8-node cluster. I used a
replication factor of 1 when creating the index, then switched to a
replication factor > 1 for redundancy, and tried to do incremental
indexing. When the incremental indexing happens, I get a stack trace with
the root cause pointing to write.lock not being available. Further
analysis found that there is only one write.lock across all shards
(leader and replicas).

But with a replication factor of 1, I could see write.lock across all nodes.

Is this the expected behavior (one write.lock) in SolrCloud with
replication factor > 1? If so, how can the indexing be done (even though
it is slow) with distributed and redundant shards?

OR

Is there any config param that is missing to create write.lock across all
shards with replication factor > 1?

Appreciate your insights.


Thanks
Ravi


[jira] [Created] (SOLR-8886) TrieField.toObject(IndexableField) can't handle multiValued docValues

2016-03-22 Thread Yonik Seeley (JIRA)
Yonik Seeley created SOLR-8886:
--

 Summary: TrieField.toObject(IndexableField) can't handle 
multiValued docValues
 Key: SOLR-8886
 URL: https://issues.apache.org/jira/browse/SOLR-8886
 Project: Solr
  Issue Type: Bug
Reporter: Yonik Seeley


multiValued docValues numeric fields currently use SortedSet for some reason, 
but toObject throws an exception in that case.






[jira] [Comment Edited] (SOLR-8865) real-time get does not retrieve values from docValues

2016-03-22 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207737#comment-15207737
 ] 

Yonik Seeley edited comment on SOLR-8865 at 3/23/16 2:11 AM:
-

Checkpoint patch... I updated schema_latest to make a bunch of the docValues 
fields unstored and used that.

Progress being made... currently an interesting fail in testRealTimeGet():
{code}
 expected ==={'doc':{'id':'1', a_f:-1.5, a_fd:-1.5, a_fdS:-1.5,  
a_fs:[1.0,2.5],a_fds:[1.0,2.5]   }}
 response = {
  "doc":
  {
"id":"1",
"a_f":-1.5,
"a_fd":-1.5,
"a_fdS":-1.5,
"a_fs":[1.0,
  2.5],
"a_fds":[1.0,
  "ERROR:SCHEMA-INDEX-MISMATCH,stringValue=null",
  2.5,
  "ERROR:SCHEMA-INDEX-MISMATCH,stringValue=null"]}}
{code}

This looks like a bug in TrieField.toObject() when using docValues


was (Author: ysee...@gmail.com):
Checkpoint patch... I updated schema_latest to make a bunch of the docValues 
fields unstored and used that.

Progress being made... currently an interesting fail in testRealTimeGet():
{code}
 expected ==={'doc':{'id':'1', a_f:-1.5, a_fd:-1.5, a_fdS:-1.5,  
a_fs:[1.0,2.5],a_fds:[1.0,2.5]   }}
 response = {
  "doc":
  {
"id":"1",
"a_f":-1.5,
"a_fd":-1.5,
"a_fdS":-1.5,
"a_fs":[1.0,
  2.5],
"a_fds":[1.0,
  "ERROR:SCHEMA-INDEX-MISMATCH,stringValue=null",
  2.5,
  "ERROR:SCHEMA-INDEX-MISMATCH,stringValue=null"]}}
{code}

> real-time get does not retrieve values from docValues
> -
>
> Key: SOLR-8865
> URL: https://issues.apache.org/jira/browse/SOLR-8865
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
> Fix For: 6.0
>
> Attachments: SOLR-8865.patch, SOLR-8865.patch, SOLR-8865.patch, 
> SOLR-8865.patch, SOLR-8865.patch
>
>
> Uncovered during ad-hoc testing... the _version_ field, which has 
> stored=false docValues=true, is not retrieved with real-time get






[jira] [Updated] (SOLR-8865) real-time get does not retrieve values from docValues

2016-03-22 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-8865:
---
Attachment: SOLR-8865.patch

Checkpoint patch... I updated schema_latest to make a bunch of the docValues 
fields unstored and used that.

Progress being made... currently an interesting fail in testRealTimeGet():
{code}
 expected ==={'doc':{'id':'1', a_f:-1.5, a_fd:-1.5, a_fdS:-1.5,  
a_fs:[1.0,2.5],a_fds:[1.0,2.5]   }}
 response = {
  "doc":
  {
"id":"1",
"a_f":-1.5,
"a_fd":-1.5,
"a_fdS":-1.5,
"a_fs":[1.0,
  2.5],
"a_fds":[1.0,
  "ERROR:SCHEMA-INDEX-MISMATCH,stringValue=null",
  2.5,
  "ERROR:SCHEMA-INDEX-MISMATCH,stringValue=null"]}}
{code}

> real-time get does not retrieve values from docValues
> -
>
> Key: SOLR-8865
> URL: https://issues.apache.org/jira/browse/SOLR-8865
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
> Fix For: 6.0
>
> Attachments: SOLR-8865.patch, SOLR-8865.patch, SOLR-8865.patch, 
> SOLR-8865.patch, SOLR-8865.patch
>
>
> Uncovered during ad-hoc testing... the _version_ field, which has 
> stored=false docValues=true, is not retrieved with real-time get






[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_72) - Build # 16300 - Failure!

2016-03-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16300/
Java: 32bit/jdk1.8.0_72 -server -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.cloud.DeleteInactiveReplicaTest.deleteInactiveReplicaTest

Error Message:
Server refused connection at: https://127.0.0.1:35996/xwo

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Server refused connection at: 
https://127.0.0.1:35996/xwo
at 
__randomizedtesting.SeedInfo.seed([98317E775762D76:C4BD8C143AD75B94]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:584)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.DeleteInactiveReplicaTest.deleteInactiveReplicaTest(DeleteInactiveReplicaTest.java:132)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:996)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:971)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 471 - Still Failing!

2016-03-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/471/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

No tests ran.

Build Log:
[...truncated 11509 lines...]
FATAL: channel is already closed
hudson.remoting.ChannelClosedException: channel is already closed
at hudson.remoting.Channel.send(Channel.java:578)
at hudson.remoting.Request.call(Request.java:130)
at hudson.remoting.Channel.call(Channel.java:780)
at hudson.Launcher$RemoteLauncher.kill(Launcher.java:953)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:540)
at hudson.model.Run.execute(Run.java:1738)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:410)
Caused by: java.io.IOException: Unexpected termination of the channel
at 
hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:50)
Caused by: java.io.EOFException
at 
java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2325)
at 
java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:2794)
at 
java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:801)
at java.io.ObjectInputStream.<init>(ObjectInputStream.java:299)
at 
hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:48)
at 
hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:34)
at 
hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:48)
ERROR: Step ‘Archive the artifacts’ failed: no workspace for 
Lucene-Solr-master-Solaris #471
ERROR: Step ‘Scan for compiler warnings’ failed: no workspace for 
Lucene-Solr-master-Solaris #471
ERROR: Step ‘Publish JUnit test result report’ failed: no workspace for 
Lucene-Solr-master-Solaris #471
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[jira] [Comment Edited] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-03-22 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207685#comment-15207685
 ] 

Joel Bernstein edited comment on SOLR-8593 at 3/23/16 1:29 AM:
---

Currently quoted identifiers do refer to columns. This was originally done 
because Presto didn't support mixed-case columns unless they were quoted, but 
Presto has since fixed that problem, so the quoted identifiers as they stand 
don't really serve a purpose. I do believe, though, that both Presto and 
Calcite allow quoted identifiers on columns to support otherwise 
non-parseable identifiers.
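
For example (illustrative SQL, not from the patch): an unquoted identifier is
parsed as a plain, case-insensitive name, while quoting preserves case and
admits characters the parser would otherwise reject:

{code}
SELECT fieldA FROM collection1

SELECT "Field-A" FROM collection1 WHERE "Field-A" = 'foo'
{code}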


was (Author: joel.bernstein):
Currently quoted identifiers do refer to columns. This was originally done 
because Presto didn't support mixed-case columns unless they were quoted, but 
Presto has since fixed that problem, so the quoted identifiers as they stand 
don't really serve a purpose. I do believe, though, that both Presto and 
Calcite allow quoted identifiers on columns to support otherwise 
non-parseable identifiers.

> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
> Fix For: master
>
>
> The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work, though, will require an optimizer. This is where 
> Apache Calcite comes into play. It has a battle-tested cost-based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.






[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-03-22 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207685#comment-15207685
 ] 

Joel Bernstein commented on SOLR-8593:
--

Currently quoted identifiers do refer to columns. This was originally done 
because Presto didn't support mixed-case columns unless they were quoted, but 
Presto has since fixed that problem, so the quoted identifiers as they stand 
don't really serve a purpose. I do believe, though, that both Presto and 
Calcite allow quoted identifiers on columns to support otherwise 
non-parseable identifiers.

> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
> Fix For: master
>
>
> The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work, though, will require an optimizer. This is where 
> Apache Calcite comes into play. It has a battle-tested cost-based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.






[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 26 - Failure!

2016-03-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/26/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.handler.TestReqParamsAPI.test

Error Message:
Could not get expected value  'CY val modified' for path 'response/params/y/c' 
full output: {   "responseHeader":{ "status":0, "QTime":0},   
"response":{ "znodeVersion":0, "params":{"x":{ "a":"A val", 
"b":"B val", "":{"v":0}

Stack Trace:
java.lang.AssertionError: Could not get expected value  'CY val modified' for 
path 'response/params/y/c' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "response":{
"znodeVersion":0,
"params":{"x":{
"a":"A val",
"b":"B val",
"":{"v":0}
at 
__randomizedtesting.SeedInfo.seed([7030CED3E3FBE84C:F864F1094D0785B4]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:458)
at 
org.apache.solr.handler.TestReqParamsAPI.testReqParams(TestReqParamsAPI.java:200)
at 
org.apache.solr.handler.TestReqParamsAPI.test(TestReqParamsAPI.java:67)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:996)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:971)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-Tests-6.x - Build # 99 - Failure

2016-03-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/99/

2 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDynamicLoading

Error Message:
Could not get expected value  'X val changed' for path 'x' full output: {   
"responseHeader":{ "status":0, "QTime":0},   "params":{"wt":"json"},   
"context":{ "webapp":"", "path":"/test1", "httpMethod":"GET"},   
"class":"org.apache.solr.core.BlobStoreTestRequestHandler",   "x":"X val"}

Stack Trace:
java.lang.AssertionError: Could not get expected value  'X val changed' for 
path 'x' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "params":{"wt":"json"},
  "context":{
"webapp":"",
"path":"/test1",
"httpMethod":"GET"},
  "class":"org.apache.solr.core.BlobStoreTestRequestHandler",
  "x":"X val"}
at 
__randomizedtesting.SeedInfo.seed([9A24EA37840F7141:4269C76073D2D4E1]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:458)
at 
org.apache.solr.core.TestDynamicLoading.testDynamicLoading(TestDynamicLoading.java:255)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:996)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:971)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-8865) real-time get does not retrieve values from docValues

2016-03-22 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207606#comment-15207606
 ] 

Yonik Seeley commented on SOLR-8865:


OK, I have some time to take a shot at this now...

> real-time get does not retrieve values from docValues
> -
>
> Key: SOLR-8865
> URL: https://issues.apache.org/jira/browse/SOLR-8865
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
> Fix For: 6.0
>
> Attachments: SOLR-8865.patch, SOLR-8865.patch, SOLR-8865.patch, 
> SOLR-8865.patch
>
>
> Uncovered during ad-hoc testing... the _version_ field, which has 
> stored=false docValues=true, is not retrieved with real-time get
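
For context, a minimal way to reproduce the report against the real-time get
handler (the collection name and id are placeholders):

{code}
curl 'http://localhost:8983/solr/collection1/get?id=1&fl=id,_version_'
{code}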






[jira] [Comment Edited] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-03-22 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207593#comment-15207593
 ] 

Joel Bernstein edited comment on SOLR-8593 at 3/23/16 12:25 AM:


I looked through the code and I'm seeing how CloudSolrStream is being used. But 
it's not clear to me we'll be able to implement the full range of capabilities 
through this approach.

For example:

1) Can we choose between the FacetStream and a parallel RollupStream based on 
the costs of the different approaches?

2) Can we do parallel joins using Solr's shuffling capabilities and Solr 
workers?



was (Author: joel.bernstein):
I looked through the code and I'm seeing how CloudSolrStream is being used. But 
it's not clear to me we'll be able to implement that full range of capabilities 
through this approach.

For example:

1) Can we choose between the FacetStream and a parallel RollupStream based on 
the costs of the different approaches?

2) Can we do parallel joins using Solr's shuffling capabilities and Solr 
workers?


> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
> Fix For: master
>
>
> The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work, though, will require an optimizer. This is where 
> Apache Calcite comes into play. It has a battle-tested cost-based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.






[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-03-22 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207593#comment-15207593
 ] 

Joel Bernstein commented on SOLR-8593:
--

I looked through the code and I'm seeing how CloudSolrStream is being used. But 
it's not clear to me we'll be able to implement that full range of capabilities 
through this approach.

For example:

1) Can we choose between the FacetStream and a parallel RollupStream based on 
the costs of the different approaches?

2) Can we do parallel joins using Solr's shuffling capabilities and Solr 
workers?


> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
> Fix For: master
>
>
> The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work, though, will require an optimizer. This is where 
> Apache Calcite comes into play. It has a battle-tested cost-based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.






[jira] [Commented] (SOLR-7374) Backup/Restore should provide a param for specifying the directory implementation it should use

2016-03-22 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207510#comment-15207510
 ] 

Hrishikesh Gadre commented on SOLR-7374:


[~varunthacker] are you still working on this? If not, I would like to take a 
crack at it...

> Backup/Restore should provide a param for specifying the directory 
> implementation it should use
> ---
>
> Key: SOLR-7374
> URL: https://issues.apache.org/jira/browse/SOLR-7374
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Assignee: Varun Thacker
> Fix For: 5.2, master
>
>
> Currently when we create a backup we use SimpleFSDirectory to write the 
> backup indexes. Similarly, during a restore we open the index using 
> FSDirectory.open.
> We should provide a param called {{directoryImpl}} or {{type}} which will be 
> used to specify the Directory implementation for backing up the index. 
> Likewise, during a restore you would need to specify the directory impl which 
> was used during backup so that the index can be opened correctly.
> This param addresses the problem that currently, if a user is running Solr 
> on HDFS, there is no way to use the backup/restore functionality, as the 
> directory is hardcoded.
> With this, one could be running Solr on a local FS but back up the index on 
> HDFS, etc.
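
A sketch of how the proposed parameter might look on a backup request (the
{{directoryImpl}} param is exactly what this issue proposes, i.e. it does not
exist yet; the host and location are placeholders):

{code}
http://localhost:8983/solr/core1/replication?command=backup&location=hdfs://namenode:8020/backups&directoryImpl=HdfsDirectoryFactory
{code}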






[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3159 - Failure!

2016-03-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3159/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.test

Error Message:
There are still nodes recoverying - waited for 45 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 45 
seconds
at 
__randomizedtesting.SeedInfo.seed([B4D56F5E1363318F:3C815084BD9F5C77]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:173)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:856)
at 
org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.test(DistribDocExpirationUpdateProcessorTest.java:73)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:996)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:971)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Commented] (LUCENE-7128) Fix spatial and sandbox geo APIs to consistently take lat before lon

2016-03-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207469#comment-15207469
 ] 

ASF subversion and git services commented on LUCENE-7128:
-

Commit 137dd158fa5d4a1b1d6ad7cb369c69738f02401d in lucene-solr's branch 
refs/heads/branch_6_0 from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=137dd15 ]

LUCENE-7128: fix a few more lon/lat places; remove more dead code


> Fix spatial and sandbox geo APIs to consistently take lat before lon
> 
>
> Key: LUCENE-7128
> URL: https://issues.apache.org/jira/browse/LUCENE-7128
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master, 6.0
>
> Attachments: LUCENE-7128.patch
>
>
> Right now it's sometimes lat, lon and other times lon, lat, which
> is just asking for horrors of biblical proportions.
> I went through and carefully fixed them to take lat, lon in all places
> I could find, and renamed y -> lat and x -> lon.  I also removed
> unused code, or code only called from tests: I think Lucene shouldn't
> just export spatial APIs unless we ourselves also need them for
> indexing and searching.  Finally, I tried to shrink-wrap the APIs,
> making previously public APIs private if nobody external invoked them.
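
For example, with the 6.0-era sandbox {{LatLonPoint}} class a distance query
now reads lat-first (a usage sketch; the radius is in meters):

{code}
// org.apache.lucene.document.LatLonPoint, org.apache.lucene.search.Query
Query q = LatLonPoint.newDistanceQuery("location", 40.7128, -74.0060, 10_000.0);
{code}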






[jira] [Updated] (SOLR-8208) DocTransformer executes sub-queries

2016-03-22 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-8208:
---
Attachment: SOLR-8208.patch

Attaching a patch which passes the existing tests. It now avoids changes in 
DocStreamer and SolrQueryRequest. As a detail, subquery results are 
represented with a result element nested in each returned doc:
{code}
<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">650</int>
  </lst>
  <result name="response">
    <doc>
      <int name="id">1</int>
      <str name="name">john</str>
      <str name="title">Director</str>
      <result name="depts">
        <doc>
          <str name="dept_name">Engineering</str>
          <str name="text">These guys develop stuff</str>
        </doc>
      </result>
    </doc>
  </result>
</response>
{code}

How do you feel about it? 

> DocTransformer executes sub-queries
> ---
>
> Key: SOLR-8208
> URL: https://issues.apache.org/jira/browse/SOLR-8208
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>  Labels: features, newbie
> Attachments: SOLR-8208.diff, SOLR-8208.patch, SOLR-8208.patch, 
> SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch
>
>
> The initial idea was to return the "from" side of a query-time join via a 
> doctransformer. I suppose it isn't query-time-join specific, so let's allow 
> specifying any query and parameters for it; call it a sub-query. But it 
> might be problematic to escape subquery parameters, including local ones, 
> e.g. what if the subquery needs to specify its own doctransformer in fl=\[..\]?
> I suppose we can allow specifying a subquery parameter prefix:
> {code}
> ..&fl=id,[subquery paramPrefix=subq1. 
> fromIndex=othercore],score,..&subq1.q={!term f=child_id 
> v=$subq1.row.id}&subq1.rows=3&subq1.sort=price&..
> {code}   
> * {{paramPrefix=subq1.}} shifts parameters for the subquery: {{subq1.q}} turns into 
> {{q}} for the subquery, {{subq1.rows}} into {{rows}}
> * {{fromIndex=othercore}} is an optional param that allows running the subquery on 
> another core, like it works in query-time join
> * the itchiest one is referencing a document field from the subquery 
> parameters; here I propose to use the local param {{v}} and param dereferencing 
> {{v=$param}}, so every document field implicitly introduces a parameter for the 
> subquery, $\{paramPrefix\}row.$\{fieldName\}; thus the subquery above is 
> effectively q=child_id:… Presumably we can drop "row." in the middle 
> (reducing to v=$subq1.id), until someone deals with {{rows}}, {{sort}} fields. 
> * \[subquery\], or \[query\], or ? 
> Caveat: it will be rather slow; it handles only the search result page, not 
> the entire result set. 






[jira] [Commented] (LUCENE-7122) BytesRefArray can be more efficient for fixed width values

2016-03-22 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207432#comment-15207432
 ] 

Michael McCandless commented on LUCENE-7122:


bq. Of course I wouldn't veto it.

OK thanks.  Of the two patches here, the first one at least seems the least 
controversial, even if it's not perfect ... progress not perfection.

bq. This is what happened to fst builder, for example 

Sigh :(

> BytesRefArray can be more efficient for fixed width values
> --
>
> Key: LUCENE-7122
> URL: https://issues.apache.org/jira/browse/LUCENE-7122
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master, 6.1
>
> Attachments: LUCENE-7122.patch, LUCENE-7122.patch
>
>
> Today {{BytesRefArray}} uses one int ({{int[]}}, overallocated) per
> value to hold the length, but for dimensional points the values are
> always the same length, so the lengths need not be stored per value.
> Dropping them can save another 4 bytes of heap per indexed dimensional
> point, which is a big improvement (more points can fit in heap at once)
> for 1D and 2D lat/lon points.
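
A minimal sketch of the fixed-width idea (illustrative, not the actual
{{BytesRefArray}} internals): when every value has the same width, the offset
of value i is simply i * width, and the per-value lengths array disappears:

{code}
import org.apache.lucene.util.BytesRef;

/** Illustrative fixed-width append-only byte array; not the real patch. */
final class FixedWidthBytesArray {
  private final int width;              // every value is exactly this long
  private byte[] bytes = new byte[0];
  private int count;

  FixedWidthBytesArray(int width) {
    this.width = width;
  }

  int append(BytesRef value) {
    assert value.length == width;
    if ((count + 1) * width > bytes.length) {
      byte[] next = new byte[Math.max(width, bytes.length * 2)];
      System.arraycopy(bytes, 0, next, 0, count * width);
      bytes = next;
    }
    System.arraycopy(value.bytes, value.offset, bytes, count * width, width);
    return count++;
  }

  BytesRef get(BytesRef spare, int index) {
    spare.bytes = bytes;
    spare.offset = index * width;       // no lengths array needed
    spare.length = width;
    return spare;
  }
}
{code}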






[jira] [Commented] (LUCENE-7128) Fix spatial and sandbox geo APIs to consistently take lat before lon

2016-03-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207424#comment-15207424
 ] 

ASF subversion and git services commented on LUCENE-7128:
-

Commit f2234dccb35ac11e1028890b20b61cdd2c9b9bf7 in lucene-solr's branch 
refs/heads/branch_6x from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f2234dc ]

LUCENE-7128: fix a few more lon/lat places; remove more dead code


> Fix spatial and sandbox geo APIs to consistently take lat before lon
> 
>
> Key: LUCENE-7128
> URL: https://issues.apache.org/jira/browse/LUCENE-7128
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master, 6.0
>
> Attachments: LUCENE-7128.patch
>
>
> Right now it's sometimes lat, lon and other times lon, lat, which
> is just asking for horrors of biblical proportions.
> I went through and carefully fixed them to take lat, lon in all places
> I could find, and renamed y -> lat and x -> lon.  I also removed
> unused code, or code only called from tests: I think Lucene shouldn't
> just export spatial APIs unless we ourselves also need them for
> indexing and searching.  Finally, I tried to shrink-wrap the APIs,
> making previously public APIs private if nobody external invoked them.






[jira] [Commented] (LUCENE-7128) Fix spatial and sandbox geo APIs to consistently take lat before lon

2016-03-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207422#comment-15207422
 ] 

ASF subversion and git services commented on LUCENE-7128:
-

Commit 99c3bb23710b22bdfb6908ea587b24308bf50ba9 in lucene-solr's branch 
refs/heads/master from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=99c3bb2 ]

LUCENE-7128: fix a few more lon/lat places; remove more dead code


> Fix spatial and sandbox geo APIs to consistently take lat before lon
> 
>
> Key: LUCENE-7128
> URL: https://issues.apache.org/jira/browse/LUCENE-7128
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master, 6.0
>
> Attachments: LUCENE-7128.patch
>
>
> Right now it's sometimes lat, lon and other times lon, lat, which
> is just asking for horrors of biblical proportions.
> I went through and carefully fixed them to take lat, lon in all places
> I could find, and renamed y -> lat and x -> lon.  I also removed
> unused code, or code only called from tests: I think Lucene shouldn't
> just export spatial APIs unless we ourselves also need them for
> indexing and searching.  Finally, I tried to shrink-wrap the APIs,
> making previously public APIs private if nobody external invoked them.






[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-jigsaw-ea+110) - Build # 210 - Failure!

2016-03-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/210/
Java: 32bit/jdk-9-jigsaw-ea+110 -server -XX:+UseSerialGC -XX:-CompactStrings

1 tests failed.
FAILED:  org.apache.solr.handler.TestReqParamsAPI.test

Error Message:
Could not get expected value  'first' for path 
'response/params/x/_appends_/add' full output: {   "responseHeader":{ 
"status":0, "QTime":0},   "response":{ "znodeVersion":3, "params":{ 
  "x":{ "a":"A val", "b":"B val", "":{"v":0}},  
 "y":{ "p":"P val", "q":"Q val", "":{"v":2}

Stack Trace:
java.lang.AssertionError: Could not get expected value  'first' for path 
'response/params/x/_appends_/add' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "response":{
"znodeVersion":3,
"params":{
  "x":{
"a":"A val",
"b":"B val",
"":{"v":0}},
  "y":{
"p":"P val",
"q":"Q val",
"":{"v":2}
at 
__randomizedtesting.SeedInfo.seed([CA4B3230B62A4612:421F0DEA18D62BEA]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:458)
at 
org.apache.solr.handler.TestReqParamsAPI.testReqParams(TestReqParamsAPI.java:236)
at 
org.apache.solr.handler.TestReqParamsAPI.test(TestReqParamsAPI.java:67)
at sun.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:531)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:996)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:971)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-03-22 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207416#comment-15207416
 ] 

Dennis Gove commented on SOLR-8593:
---

[~risdenk], I think I must be missing something in the diff link but I don't 
see any use of the JDBCStream. I'm not sure I understand how the use of the 
JDBCStream will help with this.

> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
> Fix For: master
>
>
> The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle tested cost based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.






[jira] [Resolved] (LUCENE-7130) fold optimizations from LatLonPoint to GeoPointField

2016-03-22 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-7130.
-
   Resolution: Fixed
Fix Version/s: 6.1
   master

> fold optimizations from LatLonPoint to GeoPointField
> 
>
> Key: LUCENE-7130
> URL: https://issues.apache.org/jira/browse/LUCENE-7130
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: master, 6.1
>
> Attachments: LUCENE-7130.patch
>
>
> Followup from LUCENE-7127:
> I had to remove some distance query optimizations for correctness. According 
> to benchmarks it hurt performance. We can win back half of it by just syncing 
> up with LatLonPoint's distance optimizations.
> Then separately from this, we can investigate trying to safely implement some 
> of what it was trying to do before (and add to both impls).






[jira] [Commented] (LUCENE-7130) fold optimizations from LatLonPoint to GeoPointField

2016-03-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207392#comment-15207392
 ] 

ASF subversion and git services commented on LUCENE-7130:
-

Commit 7c4a40d50ad0436da9f97dbbb6ed5af0bf7971ab in lucene-solr's branch 
refs/heads/branch_6x from [~rcmuir]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7c4a40d ]

LUCENE-7130: fold optimizations from LatLonPoint to GeoPointField


> fold optimizations from LatLonPoint to GeoPointField
> 
>
> Key: LUCENE-7130
> URL: https://issues.apache.org/jira/browse/LUCENE-7130
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: master, 6.1
>
> Attachments: LUCENE-7130.patch
>
>
> Followup from LUCENE-7127:
> I had to remove some distance query optimizations for correctness. According 
> to benchmarks it hurt performance. We can win back half of it by just syncing 
> up with LatLonPoint's distance optimizations.
> Then separately from this, we can investigate trying to safely implement some 
> of what it was trying to do before (and add to both impls).






[jira] [Commented] (LUCENE-7130) fold optimizations from LatLonPoint to GeoPointField

2016-03-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207378#comment-15207378
 ] 

ASF subversion and git services commented on LUCENE-7130:
-

Commit 5385c8d92ffe69ab5cf76f4fd412e9880c6bfec1 in lucene-solr's branch 
refs/heads/master from [~rcmuir]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5385c8d ]

LUCENE-7130: fold optimizations from LatLonPoint to GeoPointField


> fold optimizations from LatLonPoint to GeoPointField
> 
>
> Key: LUCENE-7130
> URL: https://issues.apache.org/jira/browse/LUCENE-7130
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Attachments: LUCENE-7130.patch
>
>
> Followup from LUCENE-7127:
> I had to remove some distance query optimizations for correctness. According 
> to benchmarks it hurt performance. We can win back half of it by just syncing 
> up with LatLonPoint's distance optimizations.
> Then separately from this, we can investigate trying to safely implement some 
> of what it was trying to do before (and add to both impls).






[jira] [Commented] (SOLR-8097) Implement a builder pattern for constructing a Solrj client

2016-03-22 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207366#comment-15207366
 ] 

Anshum Gupta commented on SOLR-8097:


We should not be using deprecated code for anything other than back-compat 
testing :)

> Implement a builder pattern for constructing a Solrj client
> ---
>
> Key: SOLR-8097
> URL: https://issues.apache.org/jira/browse/SOLR-8097
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: master
>Reporter: Hrishikesh Gadre
>Assignee: Anshum Gupta
>Priority: Minor
> Attachments: SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch
>
>
> Currently Solrj clients (e.g. CloudSolrClient) support multiple constructors 
> as follows:
> public CloudSolrClient(String zkHost) 
> public CloudSolrClient(String zkHost, HttpClient httpClient) 
> public CloudSolrClient(Collection<String> zkHosts, String chroot)
> public CloudSolrClient(Collection<String> zkHosts, String chroot, HttpClient 
> httpClient)
> public CloudSolrClient(String zkHost, boolean updatesToLeaders)
> public CloudSolrClient(String zkHost, boolean updatesToLeaders, HttpClient 
> httpClient)
> It is kind of problematic to introduce an additional parameter (since 
> we need to introduce additional constructors). Instead it would be helpful to 
> provide a SolrClient Builder which can either provide default values or support 
> overriding specific parameters.
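
A minimal sketch of the kind of builder being proposed (all names here are illustrative, not the committed API):

{code:java}
import java.util.Collection;
import java.util.Collections;
import org.apache.http.client.HttpClient;
import org.apache.solr.client.solrj.impl.CloudSolrClient;

// Illustrative only: one overridable setter per optional parameter,
// so adding a parameter no longer means adding another constructor.
class CloudSolrClientBuilder {
  private Collection<String> zkHosts;
  private String zkChroot;         // null means no chroot
  private HttpClient httpClient;   // null means build a default client

  CloudSolrClientBuilder withZkHost(String zkHost) {
    this.zkHosts = Collections.singletonList(zkHost);
    return this;
  }

  CloudSolrClientBuilder withZkChroot(String chroot) {
    this.zkChroot = chroot;
    return this;
  }

  CloudSolrClientBuilder withHttpClient(HttpClient httpClient) {
    this.httpClient = httpClient;
    return this;
  }

  CloudSolrClient build() {
    // Delegates to an existing constructor; only the builder needs to
    // grow when new options appear.
    return new CloudSolrClient(zkHosts, zkChroot, httpClient);
  }
}
{code}

Usage would then read like {{new CloudSolrClientBuilder().withZkHost("zk1:2181").build()}} instead of picking among six constructors.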






[jira] [Commented] (SOLR-8097) Implement a builder pattern for constructing a Solrj client

2016-03-22 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207360#comment-15207360
 ] 

Jason Gerlowski commented on SOLR-8097:
---

I didn't forget, but I was unsure whether that should be included in this JIRA 
or not.  (I'm personally for changing the usage in production code; I was just 
being conservative.)

I'll upload a patch shortly removing use of the deprecated ctors entirely.

> Implement a builder pattern for constructing a Solrj client
> ---
>
> Key: SOLR-8097
> URL: https://issues.apache.org/jira/browse/SOLR-8097
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: master
>Reporter: Hrishikesh Gadre
>Assignee: Anshum Gupta
>Priority: Minor
> Attachments: SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch
>
>
> Currently Solrj clients (e.g. CloudSolrClient) support multiple constructors 
> as follows:
> public CloudSolrClient(String zkHost) 
> public CloudSolrClient(String zkHost, HttpClient httpClient) 
> public CloudSolrClient(Collection<String> zkHosts, String chroot)
> public CloudSolrClient(Collection<String> zkHosts, String chroot, HttpClient 
> httpClient)
> public CloudSolrClient(String zkHost, boolean updatesToLeaders)
> public CloudSolrClient(String zkHost, boolean updatesToLeaders, HttpClient 
> httpClient)
> It is kind of problematic to introduce an additional parameter (since 
> we need to introduce additional constructors). Instead it would be helpful to 
> provide a SolrClient Builder which can either provide default values or support 
> overriding specific parameters.






[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-03-22 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207350#comment-15207350
 ] 

Kevin Risden commented on SOLR-8593:


[~joel.bernstein] I was able to make some really good progress on the above 
linked piece. It's coming together; I just need to add the optimizations now.

I have been testing with https://github.com/risdenk/solr-calcite-example which 
is the same Calcite code isolated from Solr. It enables me to easily check the 
explain plan and data types. I'll probably bang on it some more tomorrow to 
integrate the push-downs for project, filter, and sort. The hooks are all there; 
I just need to convert from the Cassandra example code to Solr syntax.

> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
> Fix For: master
>
>
> The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle tested cost based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.






[jira] [Assigned] (SOLR-7729) ConcurrentUpdateSolrClient ignoring the collection parameter in some methods

2016-03-22 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reassigned SOLR-7729:
-

Assignee: Mark Miller

> ConcurrentUpdateSolrClient ignoring the collection parameter in some methods
> 
>
> Key: SOLR-7729
> URL: https://issues.apache.org/jira/browse/SOLR-7729
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: 5.1
>Reporter: Jorge Luis Betancourt Gonzalez
>Assignee: Mark Miller
>  Labels: client, solrj
> Attachments: SOLR-7729-ConcurrentUpdateSolrClient-collection.patch
>
>
> Some of the methods in {{ConcurrentUpdateSolrClient}} accept an additional 
> {{collection}} parameter; some of these methods are {{add(String collection, 
> SolrInputDocument doc)}} and {{request(SolrRequest, String collection)}}. 
> This collection parameter is being ignored in these cases but works for others 
> like {{commit(String collection)}}.
> [~elyograg] noted that:
> {quote} 
> Looking into how an update request actually gets added to the background
> queue in ConcurrentUpdateSolrClient, it appears that the "collection"
> information is ignored before the request is added to the queue.
> {quote}
> From the source, when a commit is issued or 
> {{UpdateParams.WAIT_SEARCHER}} is set in the request params the collection 
> parameter is used; otherwise the request {{UpdateRequest req}} is queued 
> without any regard for the collection.
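
One way to picture the problem and a possible shape of a fix (a sketch, not the actual ConcurrentUpdateSolrClient internals): pair each queued request with its collection so the background runner can still use it.

{code:java}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.common.SolrInputDocument;

class QueuedUpdateSketch {
  // Hypothetical pairing of request + collection; today only the request
  // is queued, which is where the collection information gets dropped.
  static class Update {
    final UpdateRequest req;
    final String collection; // null for the default collection

    Update(UpdateRequest req, String collection) {
      this.req = req;
      this.collection = collection;
    }
  }

  final BlockingQueue<Update> queue = new LinkedBlockingQueue<>();

  void add(String collection, SolrInputDocument doc) throws InterruptedException {
    UpdateRequest req = new UpdateRequest();
    req.add(doc);
    queue.put(new Update(req, collection)); // collection travels with the request
  }
}
{code}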






[jira] [Commented] (LUCENE-7127) remove epsilon-based testing from lucene/spatial

2016-03-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207326#comment-15207326
 ] 

ASF subversion and git services commented on LUCENE-7127:
-

Commit ee1aca86435e501d02c23a93f480f076a8b72f34 in lucene-solr's branch 
refs/heads/branch_6x from [~rcmuir]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ee1aca8 ]

LUCENE-7127: remove epsilon-based testing from lucene/spatial, fix distance 
bugs.


> remove epsilon-based testing from lucene/spatial
> 
>
> Key: LUCENE-7127
> URL: https://issues.apache.org/jira/browse/LUCENE-7127
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Robert Muir
> Fix For: master, 6.1
>
> Attachments: LUCENE-7127.patch, LUCENE-7127.patch, LUCENE-7127.patch, 
> LUCENE-7127.patch
>
>
> Currently, the random tests here allow a TOLERANCE and will fail if the error 
> exceeds it. But this is not fun to debug! It also keeps the door wide open for 
> bugs to creep in.
> Alternatively, we can rework the tests like we did for sandbox/ points. This 
> means the test is aware of the index-time quantization and so it can demand 
> exact answers.
> It's more difficult at first, because even floating-point error can cause a 
> failure. It requires us to maybe work through corner cases/rework 
> optimizations. If any epsilons must be added, they can be added to the 
> optimizations themselves (e.g. bounding box) instead of the user's result.
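
A toy illustration of the quantization-aware approach described above (the arithmetic here is a stand-in, not Lucene's actual encoding, which packs coordinates into 32-bit ints):

{code:java}
class QuantizeDemo {
  // Stand-in for the index-time quantization of a latitude value.
  static double quantizeLat(double lat) {
    long enc = (long) Math.floor((lat + 90.0) / 180.0 * (1L << 32));
    return enc * 180.0 / (1L << 32) - 90.0;
  }

  public static void main(String[] args) {
    double raw = 40.7128;
    double stored = quantizeLat(raw);
    // The test reasons about 'stored' (what the index really holds), so it
    // can assert exact equality instead of |expected - actual| < TOLERANCE.
    System.out.println(raw + " is indexed as " + stored);
  }
}
{code}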






[jira] [Resolved] (LUCENE-7127) remove epsilon-based testing from lucene/spatial

2016-03-22 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-7127.
-
   Resolution: Fixed
Fix Version/s: 6.1
   master

> remove epsilon-based testing from lucene/spatial
> 
>
> Key: LUCENE-7127
> URL: https://issues.apache.org/jira/browse/LUCENE-7127
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Robert Muir
> Fix For: master, 6.1
>
> Attachments: LUCENE-7127.patch, LUCENE-7127.patch, LUCENE-7127.patch, 
> LUCENE-7127.patch
>
>
> Currently, the random tests here allow a TOLERANCE and will fail if the error 
> exceeds it. But this is not fun to debug! It also keeps the door wide open for 
> bugs to creep in.
> Alternatively, we can rework the tests like we did for sandbox/ points. This 
> means the test is aware of the index-time quantization and so it can demand 
> exact answers.
> It's more difficult at first, because even floating-point error can cause a 
> failure. It requires us to maybe work through corner cases/rework 
> optimizations. If any epsilons must be added, they can be added to the 
> optimizations themselves (e.g. bounding box) instead of the user's result.






[jira] [Commented] (SOLR-8097) Implement a builder pattern for constructing a Solrj client

2016-03-22 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207314#comment-15207314
 ] 

Anshum Gupta commented on SOLR-8097:


Seems like you forgot to change the usage of deprecated constructors in the 
code. You changed it in the tests, but not the rest of the code base.
The rest seems great.

> Implement a builder pattern for constructing a Solrj client
> ---
>
> Key: SOLR-8097
> URL: https://issues.apache.org/jira/browse/SOLR-8097
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: master
>Reporter: Hrishikesh Gadre
>Assignee: Anshum Gupta
>Priority: Minor
> Attachments: SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch
>
>
> Currently Solrj clients (e.g. CloudSolrClient) support multiple constructors 
> as follows:
> public CloudSolrClient(String zkHost) 
> public CloudSolrClient(String zkHost, HttpClient httpClient) 
> public CloudSolrClient(Collection<String> zkHosts, String chroot)
> public CloudSolrClient(Collection<String> zkHosts, String chroot, HttpClient 
> httpClient)
> public CloudSolrClient(String zkHost, boolean updatesToLeaders)
> public CloudSolrClient(String zkHost, boolean updatesToLeaders, HttpClient 
> httpClient)
> It is kind of problematic to introduce an additional parameter (since 
> we need to introduce additional constructors). Instead it would be helpful to 
> provide a SolrClient Builder which can either provide default values or support 
> overriding specific parameters.






[jira] [Created] (SOLR-8885) Update JS lib versions

2016-03-22 Thread Mike Drob (JIRA)
Mike Drob created SOLR-8885:
---

 Summary: Update JS lib versions
 Key: SOLR-8885
 URL: https://issues.apache.org/jira/browse/SOLR-8885
 Project: Solr
  Issue Type: Bug
  Components: UI
Reporter: Mike Drob


We are currently using some old versions of various JS libraries. Here are the 
ones I could track down:

||Library||Our Version||Latest||Notes||
|d3.js|2.8.1|2.10.33.5.16|https://github.com/mbostock/d3/wiki/Upgrading-to-3.0|
|jQuery|1.7.2|1.11.0/2.1.0| |
|jQuery BlockUI|2.39.0|2.70.0| |
|jQuery Sammy|0.6.2|0.7.4||
|ZeroClipboard|1.0.7|1.3.5/2.2.0||
|jQuery TimeAgo|0.9.3|1.5.2||
|jQuery Form|2.47|3.51||

Some of these might break compatibility, so they are probably not drop in 
replacements, but I think we can get improved speed on the UI from a lot of 
these updated versions.






[jira] [Updated] (SOLR-8885) Update JS lib versions

2016-03-22 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated SOLR-8885:

Description: 
We are currently using some old versions of various JS libraries. Here are the 
ones I could track down:

||Library||Our Version||Latest||Notes||
|d3.js|2.8.1|2.10.3/3.5.16|https://github.com/mbostock/d3/wiki/Upgrading-to-3.0|
|jQuery|1.7.2|1.11.0/2.1.0| |
|jQuery BlockUI|2.39.0|2.70.0| |
|jQuery Sammy|0.6.2|0.7.4| |
|ZeroClipboard|1.0.7|1.3.5/2.2.0| |
|jQuery TimeAgo|0.9.3|1.5.2| |
|jQuery Form|2.47|3.51| |

Some of these might break compatibility, so they are probably not drop in 
replacements, but I think we can get improved speed on the UI from a lot of 
these updated versions.

  was:
We are currently using some old versions of various JS libraries. Here are the 
ones I could track down:

||Library||Our Version||Latest||Notes||
|d3.js|2.8.1|2.10.33.5.16|https://github.com/mbostock/d3/wiki/Upgrading-to-3.0|
|jQuery|1.7.2|1.11.0/2.1.0| |
|jQuery BlockUI|2.39.0|2.70.0| |
|jQuery Sammy|0.6.2|0.7.4||
|ZeroClipboard|1.0.7|1.3.5/2.2.0||
|jQuery TimeAgo|0.9.3|1.5.2||
|jQuery Form|2.47|3.51||

Some of these might break compatibility, so they are probably not drop in 
replacements, but I think we can get improved speed on the UI from a lot of 
these updated versions.


> Update JS lib versions
> --
>
> Key: SOLR-8885
> URL: https://issues.apache.org/jira/browse/SOLR-8885
> Project: Solr
>  Issue Type: Bug
>  Components: UI
>Reporter: Mike Drob
>
> We are currently using some old versions of various JS libraries. Here are 
> the ones I could track down:
> ||Library||Our Version||Latest||Notes||
> |d3.js|2.8.1|2.10.3/3.5.16|https://github.com/mbostock/d3/wiki/Upgrading-to-3.0|
> |jQuery|1.7.2|1.11.0/2.1.0| |
> |jQuery BlockUI|2.39.0|2.70.0| |
> |jQuery Sammy|0.6.2|0.7.4| |
> |ZeroClipboard|1.0.7|1.3.5/2.2.0| |
> |jQuery TimeAgo|0.9.3|1.5.2| |
> |jQuery Form|2.47|3.51| |
> Some of these might break compatibility, so they are probably not drop in 
> replacements, but I think we can get improved speed on the UI from a lot of 
> these updated versions.






[jira] [Commented] (LUCENE-7130) fold optimizations from LatLonPoint to GeoPointField

2016-03-22 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207242#comment-15207242
 ] 

Michael McCandless commented on LUCENE-7130:


+1

> fold optimizations from LatLonPoint to GeoPointField
> 
>
> Key: LUCENE-7130
> URL: https://issues.apache.org/jira/browse/LUCENE-7130
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Attachments: LUCENE-7130.patch
>
>
> Followup from LUCENE-7127:
> I had to remove some distance query optimizations for correctness. According 
> to benchmarks it hurt performance. We can win back half of it by just syncing 
> up with LatLonPoint's distance optimizations.
> Then separately from this, we can investigate trying to safely implement some 
> of what it was trying to do before (and add to both impls).






[jira] [Commented] (SOLR-8855) The HDFS BlockDirectory should not clean up its cache on shutdown.

2016-03-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207243#comment-15207243
 ] 

ASF subversion and git services commented on SOLR-8855:
---

Commit 9aeb745a7daf84a8365e3d823ea314d9d371ae9b in lucene-solr's branch 
refs/heads/master from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9aeb745 ]

SOLR-8855: The HDFS BlockDirectory should not clean up its cache on shutdown.


> The HDFS BlockDirectory should not clean up its cache on shutdown.
> ---
>
> Key: SOLR-8855
> URL: https://issues.apache.org/jira/browse/SOLR-8855
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: master, 6.1
>
> Attachments: SOLR-8855.patch
>
>
> The cache cleanup is done for early close and the global cache. On shutdown 
> it just burns time.






[jira] [Updated] (LUCENE-7130) fold optimizations from LatLonPoint to GeoPointField

2016-03-22 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-7130:

Attachment: LUCENE-7130.patch

Simple patch: it just adds a bounds check and uses haversinSortKey. I also 
cleaned up how the termsEnum interacts with crosses()/within().
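
For anyone following along, the sort-key idea looks roughly like this (a sketch using SloppyMath's haversinSortKey/haversinMeters the way LatLonPoint does; variable names invented):

{code:java}
import org.apache.lucene.util.SloppyMath;

class SortKeySketch {
  // The sort key is monotonic in distance, so candidates can be compared and
  // pruned on the cheap key alone; the costlier conversion to meters happens
  // only when an actual distance is needed.
  static boolean withinRadius(double queryLat, double queryLon,
                              double docLat, double docLon, double radiusMeters) {
    double sortKey = SloppyMath.haversinSortKey(queryLat, queryLon, docLat, docLon);
    return SloppyMath.haversinMeters(sortKey) <= radiusMeters;
  }
}
{code}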

> fold optimizations from LatLonPoint to GeoPointField
> 
>
> Key: LUCENE-7130
> URL: https://issues.apache.org/jira/browse/LUCENE-7130
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Attachments: LUCENE-7130.patch
>
>
> Followup from LUCENE-7127:
> I had to remove some distance query optimizations for correctness. According 
> to benchmarks it hurt performance. We can win back half of it by just syncing 
> up with LatLonPoint's distance optimizations.
> Then separately from this, we can investigate trying to safely implement some 
> of what it was trying to do before (and add to both impls).






[jira] [Created] (LUCENE-7130) fold optimizations from LatLonPoint to GeoPointField

2016-03-22 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-7130:
---

 Summary: fold optimizations from LatLonPoint to GeoPointField
 Key: LUCENE-7130
 URL: https://issues.apache.org/jira/browse/LUCENE-7130
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir


Followup from LUCENE-7127:

I had to remove some distance query optimizations for correctness. According to 
benchmarks it hurt performance. We can win back half of it by just syncing up 
with LatLonPoint's distance optimizations.

Then separately from this, we can investigate trying to safely implement some 
of what it was trying to do before (and add to both impls).








[jira] [Commented] (LUCENE-6993) Update UAX29URLEmailTokenizer TLDs to latest list, and upgrade all JFlex-based tokenizers to support Unicode 8.0

2016-03-22 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207199#comment-15207199
 ] 

Mike Drob commented on LUCENE-6993:
---

Any updates here? I'm not sure if there is anything I need to be doing to keep 
this patch up to date.

> Update UAX29URLEmailTokenizer TLDs to latest list, and upgrade all 
> JFlex-based tokenizers to support Unicode 8.0
> 
>
> Key: LUCENE-6993
> URL: https://issues.apache.org/jira/browse/LUCENE-6993
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Mike Drob
>Assignee: Robert Muir
> Fix For: 6.0
>
> Attachments: LUCENE-6993.patch, LUCENE-6993.patch, LUCENE-6993.patch, 
> LUCENE-6993.patch, LUCENE-6993.patch, LUCENE-6993.patch, LUCENE-6993.patch, 
> LUCENE-6993.patch
>
>
> We did this once before in LUCENE-5357, but it might be time to update the 
> list of TLDs again. Comparing our old list with a new list indicates 800+ new 
> domains, so it would be nice to include them.
> Also the JFlex tokenizer grammars should be upgraded to support Unicode 8.0.






[JENKINS] Lucene-Solr-Tests-master - Build # 1035 - Failure

2016-03-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1035/

1 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDynamicLoading

Error Message:
Could not get expected value  'X val' for path 'x' full output: {   
"responseHeader":{ "status":0, "QTime":0},   "params":{"wt":"json"},   
"context":{ "webapp":"", "path":"/test1", "httpMethod":"GET"},   
"class":"org.apache.solr.core.BlobStoreTestRequestHandler",   "x":null}

Stack Trace:
java.lang.AssertionError: Could not get expected value  'X val' for path 'x' 
full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "params":{"wt":"json"},
  "context":{
"webapp":"",
"path":"/test1",
"httpMethod":"GET"},
  "class":"org.apache.solr.core.BlobStoreTestRequestHandler",
  "x":null}
at 
__randomizedtesting.SeedInfo.seed([15958F51CFE0334F:CDD8A206383D96EF]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:458)
at 
org.apache.solr.core.TestDynamicLoading.testDynamicLoading(TestDynamicLoading.java:238)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:996)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:971)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Commented] (SOLR-8327) SolrDispatchFilter is not caching new state format, which results in live fetch from ZK per request if node does not contain core from collection

2016-03-22 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207116#comment-15207116
 ] 

Noble Paul commented on SOLR-8327:
--

You are right. It is not in the scope of {{ZkStateReader}}.

> SolrDispatchFilter is not caching new state format, which results in live 
> fetch from ZK per request if node does not contain core from collection
> -
>
> Key: SOLR-8327
> URL: https://issues.apache.org/jira/browse/SOLR-8327
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Jessica Cheng Mallet
>Assignee: Varun Thacker
>  Labels: solrcloud
>
> While perf testing with non-solrj client (request can be sent to any solr 
> node), we noticed a huge amount of data from Zookeeper in our tcpdump (~1G 
> for 20 second dump). From the thread dump, we noticed this:
> java.lang.Object.wait (Native Method)
> java.lang.Object.wait (Object.java:503)
> org.apache.zookeeper.ClientCnxn.submitRequest (ClientCnxn.java:1309)
> org.apache.zookeeper.ZooKeeper.getData (ZooKeeper.java:1152)
> org.apache.solr.common.cloud.SolrZkClient$7.execute (SolrZkClient.java:345)
> org.apache.solr.common.cloud.SolrZkClient$7.execute (SolrZkClient.java:342)
> org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation 
> (ZkCmdExecutor.java:61)
> org.apache.solr.common.cloud.SolrZkClient.getData (SolrZkClient.java:342)
> org.apache.solr.common.cloud.ZkStateReader.getCollectionLive 
> (ZkStateReader.java:841)
> org.apache.solr.common.cloud.ZkStateReader$7.get (ZkStateReader.java:515)
> org.apache.solr.common.cloud.ClusterState.getCollectionOrNull 
> (ClusterState.java:175)
> org.apache.solr.common.cloud.ClusterState.getLeader (ClusterState.java:98)
> org.apache.solr.servlet.HttpSolrCall.getCoreByCollection 
> (HttpSolrCall.java:784)
> org.apache.solr.servlet.HttpSolrCall.init (HttpSolrCall.java:272)
> org.apache.solr.servlet.HttpSolrCall.call (HttpSolrCall.java:417)
> org.apache.solr.servlet.SolrDispatchFilter.doFilter 
> (SolrDispatchFilter.java:210)
> org.apache.solr.servlet.SolrDispatchFilter.doFilter 
> (SolrDispatchFilter.java:179)
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter 
> (ServletHandler.java:1652)
> org.eclipse.jetty.servlet.ServletHandler.doHandle (ServletHandler.java:585)
> org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:143)
> org.eclipse.jetty.security.SecurityHandler.handle (SecurityHandler.java:577)
> org.eclipse.jetty.server.session.SessionHandler.doHandle 
> (SessionHandler.java:223)
> org.eclipse.jetty.server.handler.ContextHandler.doHandle 
> (ContextHandler.java:1127)
> org.eclipse.jetty.servlet.ServletHandler.doScope (ServletHandler.java:515)
> org.eclipse.jetty.server.session.SessionHandler.doScope 
> (SessionHandler.java:185)
> org.eclipse.jetty.server.handler.ContextHandler.doScope 
> (ContextHandler.java:1061)
> org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:141)
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle 
> (ContextHandlerCollection.java:215)
> org.eclipse.jetty.server.handler.HandlerCollection.handle 
> (HandlerCollection.java:110)
> org.eclipse.jetty.server.handler.HandlerWrapper.handle 
> (HandlerWrapper.java:97)
> org.eclipse.jetty.server.Server.handle (Server.java:499)
> org.eclipse.jetty.server.HttpChannel.handle (HttpChannel.java:310)
> org.eclipse.jetty.server.HttpConnection.onFillable (HttpConnection.java:257)
> org.eclipse.jetty.io.AbstractConnection$2.run (AbstractConnection.java:540)
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob 
> (QueuedThreadPool.java:635)
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run 
> (QueuedThreadPool.java:555)
> java.lang.Thread.run (Thread.java:745)
> Looks like SolrDispatchFilter doesn't have caching similar to the 
> collectionStateCache in CloudSolrClient, so if the node doesn't know about a 
> collection in the new state format, it just live-fetch it from Zookeeper on 
> every request.






[jira] [Commented] (SOLR-8327) SolrDispatchFilter is not caching new state format, which results in live fetch from ZK per request if node does not contain core from collection

2016-03-22 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207115#comment-15207115
 ] 

Noble Paul commented on SOLR-8327:
--

The smart caching workflow is as follows:

# A request for a collection comes to solrj.
# Look up the local cache. If the state is not available, read it from ZK and 
populate the local cache.
# Make a request to the server, optimistically assuming that the data in the 
cache is the latest, but send extra information as a request parameter 
(\_stateVer_=<collection>:<version>).
# At the server, if this parameter is present, check locally whether the version 
is current. If this node serves this collection, it always has the latest state, 
so no ZK lookup is necessary.
# If the version at the server is newer, send the latest version of the state as 
part of the payload.
# solrj looks for this extra info in the payload. If there is no extra info, the 
state it has is the latest and nothing needs to be done. If the payload contains 
versions of the state, the local version is stale, so invalidate the cache (see 
the sketch below).
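
A self-contained sketch of the server-side half of this handshake (names are illustrative, not the actual HttpSolrCall code):

{code:java}
import java.util.Map;

class StateVerSketch {
  /**
   * Returns "collection:version" if the client's cached state is stale and
   * fresh state should be shipped back in the payload; null if it is current.
   */
  static String checkStateVersion(String stateVerParam, Map<String, Integer> localVersions) {
    String[] parts = stateVerParam.split(":");   // arrives as "collection:version"
    String collection = parts[0];
    int clientVersion = Integer.parseInt(parts[1]);
    int localVersion = localVersions.get(collection);
    // A node serving the collection always has the latest state, so its
    // version is authoritative -- no ZK lookup needed for the comparison.
    return localVersion > clientVersion ? collection + ":" + localVersion : null;
  }
}
{code}

A non-null result is what step 6 reacts to on the client: it invalidates the cached entry and re-reads from ZK on the next request.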


> SolrDispatchFilter is not caching new state format, which results in live 
> fetch from ZK per request if node does not contain core from collection
> -
>
> Key: SOLR-8327
> URL: https://issues.apache.org/jira/browse/SOLR-8327
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Jessica Cheng Mallet
>Assignee: Varun Thacker
>  Labels: solrcloud
>
> While perf testing with non-solrj client (request can be sent to any solr 
> node), we noticed a huge amount of data from Zookeeper in our tcpdump (~1G 
> for 20 second dump). From the thread dump, we noticed this:
> java.lang.Object.wait (Native Method)
> java.lang.Object.wait (Object.java:503)
> org.apache.zookeeper.ClientCnxn.submitRequest (ClientCnxn.java:1309)
> org.apache.zookeeper.ZooKeeper.getData (ZooKeeper.java:1152)
> org.apache.solr.common.cloud.SolrZkClient$7.execute (SolrZkClient.java:345)
> org.apache.solr.common.cloud.SolrZkClient$7.execute (SolrZkClient.java:342)
> org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation 
> (ZkCmdExecutor.java:61)
> org.apache.solr.common.cloud.SolrZkClient.getData (SolrZkClient.java:342)
> org.apache.solr.common.cloud.ZkStateReader.getCollectionLive 
> (ZkStateReader.java:841)
> org.apache.solr.common.cloud.ZkStateReader$7.get (ZkStateReader.java:515)
> org.apache.solr.common.cloud.ClusterState.getCollectionOrNull 
> (ClusterState.java:175)
> org.apache.solr.common.cloud.ClusterState.getLeader (ClusterState.java:98)
> org.apache.solr.servlet.HttpSolrCall.getCoreByCollection 
> (HttpSolrCall.java:784)
> org.apache.solr.servlet.HttpSolrCall.init (HttpSolrCall.java:272)
> org.apache.solr.servlet.HttpSolrCall.call (HttpSolrCall.java:417)
> org.apache.solr.servlet.SolrDispatchFilter.doFilter 
> (SolrDispatchFilter.java:210)
> org.apache.solr.servlet.SolrDispatchFilter.doFilter 
> (SolrDispatchFilter.java:179)
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter 
> (ServletHandler.java:1652)
> org.eclipse.jetty.servlet.ServletHandler.doHandle (ServletHandler.java:585)
> org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:143)
> org.eclipse.jetty.security.SecurityHandler.handle (SecurityHandler.java:577)
> org.eclipse.jetty.server.session.SessionHandler.doHandle 
> (SessionHandler.java:223)
> org.eclipse.jetty.server.handler.ContextHandler.doHandle 
> (ContextHandler.java:1127)
> org.eclipse.jetty.servlet.ServletHandler.doScope (ServletHandler.java:515)
> org.eclipse.jetty.server.session.SessionHandler.doScope 
> (SessionHandler.java:185)
> org.eclipse.jetty.server.handler.ContextHandler.doScope 
> (ContextHandler.java:1061)
> org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:141)
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle 
> (ContextHandlerCollection.java:215)
> org.eclipse.jetty.server.handler.HandlerCollection.handle 
> (HandlerCollection.java:110)
> org.eclipse.jetty.server.handler.HandlerWrapper.handle 
> (HandlerWrapper.java:97)
> org.eclipse.jetty.server.Server.handle (Server.java:499)
> org.eclipse.jetty.server.HttpChannel.handle (HttpChannel.java:310)
> org.eclipse.jetty.server.HttpConnection.onFillable (HttpConnection.java:257)
> org.eclipse.jetty.io.AbstractConnection$2.run (AbstractConnection.java:540)
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob 
> (QueuedThreadPool.java:635)
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run 
> (QueuedThreadPool.java:555)
> java.lang.Thread.run (Thread.java:745)
> Looks like SolrDispatchFilter doesn't have caching similar to the 
> collectionStateCache in CloudSolrClient, so if the node doesn't know about a 
> collection in the 

[jira] [Commented] (SOLR-8327) SolrDispatchFilter is not caching new state format, which results in live fetch from ZK per request if node does not contain core from collection

2016-03-22 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207112#comment-15207112
 ] 

Scott Blum commented on SOLR-8327:
--

Thanks for the extra context.  So it looks like this falls outside the scope of 
what ZkStateReader wants to do, since you want context-sensitive caching.

> SolrDispatchFilter is not caching new state format, which results in live 
> fetch from ZK per request if node does not contain core from collection
> -
>
> Key: SOLR-8327
> URL: https://issues.apache.org/jira/browse/SOLR-8327
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Jessica Cheng Mallet
>Assignee: Varun Thacker
>  Labels: solrcloud
>
> While perf testing with non-solrj client (request can be sent to any solr 
> node), we noticed a huge amount of data from Zookeeper in our tcpdump (~1G 
> for 20 second dump). From the thread dump, we noticed this:
> java.lang.Object.wait (Native Method)
> java.lang.Object.wait (Object.java:503)
> org.apache.zookeeper.ClientCnxn.submitRequest (ClientCnxn.java:1309)
> org.apache.zookeeper.ZooKeeper.getData (ZooKeeper.java:1152)
> org.apache.solr.common.cloud.SolrZkClient$7.execute (SolrZkClient.java:345)
> org.apache.solr.common.cloud.SolrZkClient$7.execute (SolrZkClient.java:342)
> org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation 
> (ZkCmdExecutor.java:61)
> org.apache.solr.common.cloud.SolrZkClient.getData (SolrZkClient.java:342)
> org.apache.solr.common.cloud.ZkStateReader.getCollectionLive 
> (ZkStateReader.java:841)
> org.apache.solr.common.cloud.ZkStateReader$7.get (ZkStateReader.java:515)
> org.apache.solr.common.cloud.ClusterState.getCollectionOrNull 
> (ClusterState.java:175)
> org.apache.solr.common.cloud.ClusterState.getLeader (ClusterState.java:98)
> org.apache.solr.servlet.HttpSolrCall.getCoreByCollection 
> (HttpSolrCall.java:784)
> org.apache.solr.servlet.HttpSolrCall.init (HttpSolrCall.java:272)
> org.apache.solr.servlet.HttpSolrCall.call (HttpSolrCall.java:417)
> org.apache.solr.servlet.SolrDispatchFilter.doFilter 
> (SolrDispatchFilter.java:210)
> org.apache.solr.servlet.SolrDispatchFilter.doFilter 
> (SolrDispatchFilter.java:179)
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter 
> (ServletHandler.java:1652)
> org.eclipse.jetty.servlet.ServletHandler.doHandle (ServletHandler.java:585)
> org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:143)
> org.eclipse.jetty.security.SecurityHandler.handle (SecurityHandler.java:577)
> org.eclipse.jetty.server.session.SessionHandler.doHandle 
> (SessionHandler.java:223)
> org.eclipse.jetty.server.handler.ContextHandler.doHandle 
> (ContextHandler.java:1127)
> org.eclipse.jetty.servlet.ServletHandler.doScope (ServletHandler.java:515)
> org.eclipse.jetty.server.session.SessionHandler.doScope 
> (SessionHandler.java:185)
> org.eclipse.jetty.server.handler.ContextHandler.doScope 
> (ContextHandler.java:1061)
> org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:141)
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle 
> (ContextHandlerCollection.java:215)
> org.eclipse.jetty.server.handler.HandlerCollection.handle 
> (HandlerCollection.java:110)
> org.eclipse.jetty.server.handler.HandlerWrapper.handle 
> (HandlerWrapper.java:97)
> org.eclipse.jetty.server.Server.handle (Server.java:499)
> org.eclipse.jetty.server.HttpChannel.handle (HttpChannel.java:310)
> org.eclipse.jetty.server.HttpConnection.onFillable (HttpConnection.java:257)
> org.eclipse.jetty.io.AbstractConnection$2.run (AbstractConnection.java:540)
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob 
> (QueuedThreadPool.java:635)
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run 
> (QueuedThreadPool.java:555)
> java.lang.Thread.run (Thread.java:745)
> Looks like SolrDispatchFilter doesn't have caching similar to the 
> collectionStateCache in CloudSolrClient, so if the node doesn't know about a 
> collection in the new state format, it just live-fetch it from Zookeeper on 
> every request.






[jira] [Commented] (SOLR-8327) SolrDispatchFilter is not caching new state format, which results in live fetch from ZK per request if node does not contain core from collection

2016-03-22 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207085#comment-15207085
 ] 

Noble Paul commented on SOLR-8327:
--

[~dragonsinth] Please refer to SOLR-5474 & SOLR-7130

> SolrDispatchFilter is not caching new state format, which results in live 
> fetch from ZK per request if node does not contain core from collection
> -
>
> Key: SOLR-8327
> URL: https://issues.apache.org/jira/browse/SOLR-8327
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Jessica Cheng Mallet
>Assignee: Varun Thacker
>  Labels: solrcloud
>
> While perf testing with non-solrj client (request can be sent to any solr 
> node), we noticed a huge amount of data from Zookeeper in our tcpdump (~1G 
> for 20 second dump). From the thread dump, we noticed this:
> java.lang.Object.wait (Native Method)
> java.lang.Object.wait (Object.java:503)
> org.apache.zookeeper.ClientCnxn.submitRequest (ClientCnxn.java:1309)
> org.apache.zookeeper.ZooKeeper.getData (ZooKeeper.java:1152)
> org.apache.solr.common.cloud.SolrZkClient$7.execute (SolrZkClient.java:345)
> org.apache.solr.common.cloud.SolrZkClient$7.execute (SolrZkClient.java:342)
> org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation 
> (ZkCmdExecutor.java:61)
> org.apache.solr.common.cloud.SolrZkClient.getData (SolrZkClient.java:342)
> org.apache.solr.common.cloud.ZkStateReader.getCollectionLive 
> (ZkStateReader.java:841)
> org.apache.solr.common.cloud.ZkStateReader$7.get (ZkStateReader.java:515)
> org.apache.solr.common.cloud.ClusterState.getCollectionOrNull 
> (ClusterState.java:175)
> org.apache.solr.common.cloud.ClusterState.getLeader (ClusterState.java:98)
> org.apache.solr.servlet.HttpSolrCall.getCoreByCollection 
> (HttpSolrCall.java:784)
> org.apache.solr.servlet.HttpSolrCall.init (HttpSolrCall.java:272)
> org.apache.solr.servlet.HttpSolrCall.call (HttpSolrCall.java:417)
> org.apache.solr.servlet.SolrDispatchFilter.doFilter 
> (SolrDispatchFilter.java:210)
> org.apache.solr.servlet.SolrDispatchFilter.doFilter 
> (SolrDispatchFilter.java:179)
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter 
> (ServletHandler.java:1652)
> org.eclipse.jetty.servlet.ServletHandler.doHandle (ServletHandler.java:585)
> org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:143)
> org.eclipse.jetty.security.SecurityHandler.handle (SecurityHandler.java:577)
> org.eclipse.jetty.server.session.SessionHandler.doHandle 
> (SessionHandler.java:223)
> org.eclipse.jetty.server.handler.ContextHandler.doHandle 
> (ContextHandler.java:1127)
> org.eclipse.jetty.servlet.ServletHandler.doScope (ServletHandler.java:515)
> org.eclipse.jetty.server.session.SessionHandler.doScope 
> (SessionHandler.java:185)
> org.eclipse.jetty.server.handler.ContextHandler.doScope 
> (ContextHandler.java:1061)
> org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:141)
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle 
> (ContextHandlerCollection.java:215)
> org.eclipse.jetty.server.handler.HandlerCollection.handle 
> (HandlerCollection.java:110)
> org.eclipse.jetty.server.handler.HandlerWrapper.handle 
> (HandlerWrapper.java:97)
> org.eclipse.jetty.server.Server.handle (Server.java:499)
> org.eclipse.jetty.server.HttpChannel.handle (HttpChannel.java:310)
> org.eclipse.jetty.server.HttpConnection.onFillable (HttpConnection.java:257)
> org.eclipse.jetty.io.AbstractConnection$2.run (AbstractConnection.java:540)
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob 
> (QueuedThreadPool.java:635)
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run 
> (QueuedThreadPool.java:555)
> java.lang.Thread.run (Thread.java:745)
> Looks like SolrDispatchFilter doesn't have caching similar to the 
> collectionStateCache in CloudSolrClient, so if the node doesn't know about a 
> collection in the new state format, it just live-fetch it from Zookeeper on 
> every request.






[jira] [Commented] (SOLR-8327) SolrDispatchFilter is not caching new state format, which results in live fetch from ZK per request if node does not contain core from collection

2016-03-22 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207083#comment-15207083
 ] 

Varun Thacker commented on SOLR-8327:
-

Hi Scott,

This is what I understand CloudSolrClient does for caching -

There is an in-memory map, {{collectionStateCache}}. On every request it 
attaches the state versions from this cached map (for the collections that were 
referenced in the request) by passing the {{STATE_VERSION}} param. This code is 
in the {{CloudSolrClient#requestWithRetryOnStaleState}} method.

The server will send back state versions in the response if they do not match 
the ones sent by the client, so the client will update its local copy based on 
this information. 
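
The client-side half, boiled down (stand-in types; the real logic lives in {{CloudSolrClient#requestWithRetryOnStaleState}}):

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class ClientStateCacheSketch {
  // Stand-in for collectionStateCache: collection name -> cached state version.
  final Map<String, Integer> collectionStateCache = new ConcurrentHashMap<>();

  /** Value to attach as the state-version param, or null if nothing is cached. */
  String stateVersionParam(String collection) {
    Integer v = collectionStateCache.get(collection);
    return v == null ? null : collection + ":" + v;
  }

  /** Called with the server's stale-state payload, if any. */
  void onStateVersionMismatch(String freshInfo) {
    if (freshInfo != null) {                       // server says our copy is behind
      String collection = freshInfo.split(":")[0];
      collectionStateCache.remove(collection);     // invalidate; re-read and re-cache next time
    }
  }
}
{code}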

> SolrDispatchFilter is not caching new state format, which results in live 
> fetch from ZK per request if node does not contain core from collection
> -
>
> Key: SOLR-8327
> URL: https://issues.apache.org/jira/browse/SOLR-8327
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Jessica Cheng Mallet
>Assignee: Varun Thacker
>  Labels: solrcloud
>
> While perf testing with non-solrj client (request can be sent to any solr 
> node), we noticed a huge amount of data from Zookeeper in our tcpdump (~1G 
> for 20 second dump). From the thread dump, we noticed this:
> java.lang.Object.wait (Native Method)
> java.lang.Object.wait (Object.java:503)
> org.apache.zookeeper.ClientCnxn.submitRequest (ClientCnxn.java:1309)
> org.apache.zookeeper.ZooKeeper.getData (ZooKeeper.java:1152)
> org.apache.solr.common.cloud.SolrZkClient$7.execute (SolrZkClient.java:345)
> org.apache.solr.common.cloud.SolrZkClient$7.execute (SolrZkClient.java:342)
> org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation 
> (ZkCmdExecutor.java:61)
> org.apache.solr.common.cloud.SolrZkClient.getData (SolrZkClient.java:342)
> org.apache.solr.common.cloud.ZkStateReader.getCollectionLive 
> (ZkStateReader.java:841)
> org.apache.solr.common.cloud.ZkStateReader$7.get (ZkStateReader.java:515)
> org.apache.solr.common.cloud.ClusterState.getCollectionOrNull 
> (ClusterState.java:175)
> org.apache.solr.common.cloud.ClusterState.getLeader (ClusterState.java:98)
> org.apache.solr.servlet.HttpSolrCall.getCoreByCollection 
> (HttpSolrCall.java:784)
> org.apache.solr.servlet.HttpSolrCall.init (HttpSolrCall.java:272)
> org.apache.solr.servlet.HttpSolrCall.call (HttpSolrCall.java:417)
> org.apache.solr.servlet.SolrDispatchFilter.doFilter 
> (SolrDispatchFilter.java:210)
> org.apache.solr.servlet.SolrDispatchFilter.doFilter 
> (SolrDispatchFilter.java:179)
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter 
> (ServletHandler.java:1652)
> org.eclipse.jetty.servlet.ServletHandler.doHandle (ServletHandler.java:585)
> org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:143)
> org.eclipse.jetty.security.SecurityHandler.handle (SecurityHandler.java:577)
> org.eclipse.jetty.server.session.SessionHandler.doHandle 
> (SessionHandler.java:223)
> org.eclipse.jetty.server.handler.ContextHandler.doHandle 
> (ContextHandler.java:1127)
> org.eclipse.jetty.servlet.ServletHandler.doScope (ServletHandler.java:515)
> org.eclipse.jetty.server.session.SessionHandler.doScope 
> (SessionHandler.java:185)
> org.eclipse.jetty.server.handler.ContextHandler.doScope 
> (ContextHandler.java:1061)
> org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:141)
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle 
> (ContextHandlerCollection.java:215)
> org.eclipse.jetty.server.handler.HandlerCollection.handle 
> (HandlerCollection.java:110)
> org.eclipse.jetty.server.handler.HandlerWrapper.handle 
> (HandlerWrapper.java:97)
> org.eclipse.jetty.server.Server.handle (Server.java:499)
> org.eclipse.jetty.server.HttpChannel.handle (HttpChannel.java:310)
> org.eclipse.jetty.server.HttpConnection.onFillable (HttpConnection.java:257)
> org.eclipse.jetty.io.AbstractConnection$2.run (AbstractConnection.java:540)
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob 
> (QueuedThreadPool.java:635)
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run 
> (QueuedThreadPool.java:555)
> java.lang.Thread.run (Thread.java:745)
> Looks like SolrDispatchFilter doesn't have caching similar to the 
> collectionStateCache in CloudSolrClient, so if the node doesn't know about a 
> collection in the new state format, it just live-fetches it from ZooKeeper on 
> every request.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: 

[jira] [Commented] (SOLR-7090) Cross collection join

2016-03-22 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207079#comment-15207079
 ] 

Scott Blum commented on SOLR-7090:
--

Makes sense to me.  BTW, the patch posted is not meant to be "This is what I 
think we should do."  It was more a provisional exploration of the possibilities.

> Cross collection join
> -
>
> Key: SOLR-7090
> URL: https://issues.apache.org/jira/browse/SOLR-7090
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
> Fix For: 5.2, master
>
> Attachments: SOLR-7090-fulljoin.patch, SOLR-7090.patch
>
>
> Although SOLR-4905 supports joins across collections in Cloud mode, there are 
> limitations, (i) the secondary collection must be replicated at each node 
> where the primary collection has a replica, (ii) the secondary collection 
> must be singly sharded.
> This issue explores ideas/possibilities of cross collection joins, even 
> across nodes. This will be helpful for users who wish to maintain boosts or 
> signals in a secondary, more frequently updated collection, and perform query 
> time join of these boosts/signals with results from the primary collection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8327) SolrDispatchFilter is not caching new state format, which results in live fetch from ZK per request if node does not contain core from collection

2016-03-22 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207072#comment-15207072
 ] 

Scott Blum commented on SOLR-8327:
--

What does solrj do? How do you know when the cached data is stale?

> SolrDispatchFilter is not caching new state format, which results in live 
> fetch from ZK per request if node does not contain core from collection
> -
>
> Key: SOLR-8327
> URL: https://issues.apache.org/jira/browse/SOLR-8327
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Jessica Cheng Mallet
>Assignee: Varun Thacker
>  Labels: solrcloud
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-jigsaw-ea+110) - Build # 16297 - Failure!

2016-03-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16297/
Java: 32bit/jdk-9-jigsaw-ea+110 -server -XX:+UseSerialGC -XX:-CompactStrings

All tests passed

Build Log:
[...truncated 11999 lines...]
   [junit4] JVM J1: stdout was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/temp/junit4-J1-20160322_182718_971.sysout
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] #
   [junit4] # A fatal error has been detected by the Java Runtime Environment:
   [junit4] #
   [junit4] #  SIGSEGV (0xb) at pc=0xf6c36ab2, pid=5377, tid=5395
   [junit4] #
   [junit4] # JRE version: Java(TM) SE Runtime Environment (9.0+110) (build 
9-ea+110-2016-03-17-023140.javare.4664.nc)
   [junit4] # Java VM: Java HotSpot(TM) Server VM 
(9-ea+110-2016-03-17-023140.javare.4664.nc, mixed mode, tiered, serial gc, 
linux-x86)
   [junit4] # Problematic frame:
   [junit4] # V  [libjvm.so+0x48cab2]  FastScanClosure::do_oop(oopDesc**)+0x22
   [junit4] #
   [junit4] # No core dump will be written. Core dumps have been disabled. To 
enable core dumping, try "ulimit -c unlimited" before starting Java again
   [junit4] #
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J1/hs_err_pid5377.log
   [junit4] Compiled method (c2) 1409328 33448   !   4   
org.apache.solr.update.SolrCmdDistributor::submit (301 bytes)
   [junit4]  total in heap  [0xf1420f08,0xf1440584] = 128636
   [junit4]  relocation [0xf1420fe0,0xf1421f40] = 3936
   [junit4]  main code  [0xf1421f40,0xf14295c0] = 30336
   [junit4]  stub code  [0xf14295c0,0xf1429c08] = 1608
   [junit4]  oops   [0xf1429c08,0xf1429c50] = 72
   [junit4]  metadata   [0xf1429c50,0xf1429f80] = 816
   [junit4]  scopes data[0xf1429f80,0xf143cb44] = 76740
   [junit4]  scopes pcs [0xf143cb44,0xf143e9e4] = 7840
   [junit4]  dependencies   [0xf143e9e4,0xf143ebe8] = 516
   [junit4]  handler table  [0xf143ebe8,0xf14402c8] = 5856
   [junit4]  nul chk table  [0xf14402c8,0xf1440584] = 700
   [junit4] Compiled method (c2) 1409328 33448   !   4   
org.apache.solr.update.SolrCmdDistributor::submit (301 bytes)
   [junit4]  total in heap  [0xf1420f08,0xf1440584] = 128636
   [junit4]  relocation [0xf1420fe0,0xf1421f40] = 3936
   [junit4]  main code  [0xf1421f40,0xf14295c0] = 30336
   [junit4]  stub code  [0xf14295c0,0xf1429c08] = 1608
   [junit4]  oops   [0xf1429c08,0xf1429c50] = 72
   [junit4]  metadata   [0xf1429c50,0xf1429f80] = 816
   [junit4]  scopes data[0xf1429f80,0xf143cb44] = 76740
   [junit4]  scopes pcs [0xf143cb44,0xf143e9e4] = 7840
   [junit4]  dependencies   [0xf143e9e4,0xf143ebe8] = 516
   [junit4]  handler table  [0xf143ebe8,0xf14402c8] = 5856
   [junit4]  nul chk table  [0xf14402c8,0xf1440584] = 700
   [junit4] #
   [junit4] # If you would like to submit a bug report, please visit:
   [junit4] #   http://bugreport.java.com/bugreport/crash.jsp
   [junit4] #
   [junit4] <<< JVM J1: EOF 

[...truncated 269 lines...]
   [junit4] JVM J2: stdout was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/temp/junit4-J2-20160322_182718_971.sysout
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] #
   [junit4] # A fatal error has been detected by the Java Runtime Environment:
   [junit4] #
   [junit4] #  SIGSEGV (0xb) at pc=0xf6babab2, pid=5376, tid=5394
   [junit4] #
   [junit4] # JRE version: Java(TM) SE Runtime Environment (9.0+110) (build 
9-ea+110-2016-03-17-023140.javare.4664.nc)
   [junit4] # Java VM: Java HotSpot(TM) Server VM 
(9-ea+110-2016-03-17-023140.javare.4664.nc, mixed mode, tiered, serial gc, 
linux-x86)
   [junit4] # Problematic frame:
   [junit4] # V  [libjvm.so+0x48cab2]  FastScanClosure::do_oop(oopDesc**)+0x22
   [junit4] #
   [junit4] # No core dump will be written. Core dumps have been disabled. To 
enable core dumping, try "ulimit -c unlimited" before starting Java again
   [junit4] #
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/hs_err_pid5376.log
   [junit4] Compiled method (c2) 2084126 38637   !   4   
org.apache.solr.handler.component.HttpShardHandler$1::call (434 bytes)
   [junit4]  total in heap  [0xf1a10f88,0xf1a2f0c8] = 123200
   [junit4]  relocation [0xf1a11060,0xf1a11d68] = 3336
   [junit4]  main code  [0xf1a11d80,0xf1a19720] = 31136
   [junit4]  stub code  [0xf1a19720,0xf1a19c1c] = 1276
   [junit4]  oops   [0xf1a19c1c,0xf1a19c40] = 36
   [junit4]  metadata   [0xf1a19c40,0xf1a19ebc] = 636
   [junit4]  scopes data[0xf1a19ebc,0xf1a2c07c] = 74176
   [junit4]  scopes pcs [0xf1a2c07c,0xf1a2dc2c] = 7088
   [junit4]  dependencies   [0xf1a2dc2c,0xf1a2ddbc] = 400
   [junit4]  handler table  

[jira] [Commented] (SOLR-8327) SolrDispatchFilter is not caching new state format, which results in live fetch from ZK per request if node does not contain core from collection

2016-03-22 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207038#comment-15207038
 ] 

Noble Paul commented on SOLR-8327:
--

I don't think adding a watch for the lazy collection ref is a good idea. We 
need to minimize the number of watches to achieve scalability. The solution 
employed in solrj is very efficient; we can replicate the same logic here as 
well.

> SolrDispatchFilter is not caching new state format, which results in live 
> fetch from ZK per request if node does not contain core from collection
> -
>
> Key: SOLR-8327
> URL: https://issues.apache.org/jira/browse/SOLR-8327
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Jessica Cheng Mallet
>Assignee: Varun Thacker
>  Labels: solrcloud
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8327) SolrDispatchFilter is not caching new state format, which results in live fetch from ZK per request if node does not contain core from collection

2016-03-22 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207019#comment-15207019
 ] 

Scott Blum commented on SOLR-8327:
--

There's a TODO in LazyCollectionRef:

// TODO: consider limited caching

I think the right answer here is that LazyCollectionRef would need to keep the 
last version fetched in memory, and probably set a watch in case the data 
changes.  The watch would simply invalidate the cache without refetching the 
data.  Thoughts?  I could take this one.
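
Something along these lines, perhaps (a rough sketch with made-up names; real 
code would go through SolrZkClient and handle reconnects):

{code}
import java.util.concurrent.atomic.AtomicReference;

// Rough sketch of a watch-invalidated lazy ref: remember the last state
// fetched, and let the ZK watch merely clear it so the next get() refetches.
// fetchFromZk() is a hypothetical stand-in for ZkStateReader.getCollectionLive().
abstract class LazyCollectionRefSketch<T> {
  private final AtomicReference<T> cached = new AtomicReference<>();

  T get() {
    T state = cached.get();
    if (state == null) {
      state = fetchFromZk(); // would also re-arm the watch
      cached.set(state);
    }
    return state;
  }

  // Invoked by the ZK watcher when the collection's state.json changes:
  // invalidate only, don't refetch eagerly.
  void onZkWatchFired() {
    cached.set(null);
  }

  abstract T fetchFromZk();
}
{code}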

> SolrDispatchFilter is not caching new state format, which results in live 
> fetch from ZK per request if node does not contain core from collection
> -
>
> Key: SOLR-8327
> URL: https://issues.apache.org/jira/browse/SOLR-8327
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Jessica Cheng Mallet
>Assignee: Varun Thacker
>  Labels: solrcloud
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8884) fl=score returns a different value than that of Explain's

2016-03-22 Thread Ahmet Arslan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmet Arslan updated SOLR-8884:
---
Attachment: SOLR-8884.patch

Randomized test case for Lucene, in the hope that it will trigger at some 
point. Will try to write the Solr counterpart.

> fl=score returns a different value than that of Explain's
> -
>
> Key: SOLR-8884
> URL: https://issues.apache.org/jira/browse/SOLR-8884
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 5.5
>Reporter: Ahmet Arslan
> Attachments: SOLR-8884.patch, debug.xml
>
>
> Some folks 
> [reported|http://find.searchhub.org/document/80666f5c3b86ddda] that sometimes 
> explain's score can be different from the score requested via the fields 
> parameter. Interestingly, explain's scores would create a different ranking 
> than the original result list. This is something users experience, but it 
> cannot be reproduced deterministically.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7127) remove epsilon-based testing from lucene/spatial

2016-03-22 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206929#comment-15206929
 ] 

Michael McCandless commented on LUCENE-7127:


+1, this is a great simplification, and I love that the two queries (points and 
postings) use the same approach now.

+1 to defer optimizing, threads removal, etc.

> remove epsilon-based testing from lucene/spatial
> 
>
> Key: LUCENE-7127
> URL: https://issues.apache.org/jira/browse/LUCENE-7127
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Robert Muir
> Attachments: LUCENE-7127.patch, LUCENE-7127.patch, LUCENE-7127.patch, 
> LUCENE-7127.patch
>
>
> Currently, the random tests here allow a TOLERANCE and will fail if the error 
> exceeds it. But this is not fun to debug! It also keeps the door wide open for 
> bugs to creep in.
> Alternatively, we can rework the tests like we did for sandbox/ points. This 
> means the test is aware of the index-time quantization and so it can demand 
> exact answers.
> It's more difficult at first, because even floating point error can cause a 
> failure. It requires us to maybe work through corner cases/rework 
> optimizations. If any epsilons must be added, they can be added to the 
> optimizations themselves (e.g. bounding box) instead of the user's result.
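
As a loose illustration of what "aware of the index-time quantization" buys 
(encodeLatitude/decodeLatitude below are hypothetical stand-ins for the 
codec's real encoding, not Lucene API):

{code}
// Sketch: rather than comparing the raw double with an epsilon, run the
// expected value through the same lossy encode/decode the index applies,
// then demand exact equality.
class QuantizedExpectationSketch {
  // Hypothetical stand-ins: scale a latitude into 32-bit integer space.
  static int encodeLatitude(double lat) {
    return (int) Math.floor(lat / 90.0 * 0x7FFFFFFF);
  }
  static double decodeLatitude(int encoded) {
    return encoded / (double) 0x7FFFFFFF * 90.0;
  }

  static double quantizeLatitude(double lat) {
    return decodeLatitude(encodeLatitude(lat));
  }

  public static void main(String[] args) {
    double raw = 48.8584;
    double expected = quantizeLatitude(raw); // what the index actually stores
    // a test can now demand exact equality against the query's result, e.g.:
    // assertEquals(expected, latReturnedByQuery, 0.0);
    System.out.println(raw + " quantizes to " + expected);
  }
}
{code}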



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-03-22 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206919#comment-15206919
 ] 

Kevin Risden edited comment on SOLR-8593 at 3/22/16 6:08 PM:
-

Here is a separate approach that uses all of Calcite and the JDBCStream: 
https://github.com/risdenk/lucene-solr/compare/master...risdenk:calcite-sql-handler

It removes all the custom processing from SQLHandler, wraps Calcite in a 
JDBCStream, and executes the query.

There is something I learned about TestSQLHandler that I'm not sure is correct:
* quoted identifiers - like 'id' and 'text' aren't valid? These shouldn't be 
referring to columns?

Things to be explored with this approach:
* switch from a standard query in SolrEnumerator to a stream
* fix data types
* optimize cases like where
* code cleanup since it was just thrown together as a POC


was (Author: risdenk):
Here is a separate approach that uses all of Calcite and the JDBCStream: 
https://github.com/risdenk/lucene-solr/compare/master...risdenk:calcite-sql-handler

It removes all the custom processing from SQLHandler, wraps Calcite in a 
JDBCStream, and executes the query.

There is something I learned about TestSQLHandler that I'm not sure is correct:
* quoted identifiers - like 'id' and 'text' aren't valid? These shouldn't be 
referring to columns?

Things to be explored with this approach:
* switch from a standard query in SolrEnumerator to a stream
* fix data types
* optimize cases like where

> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
> Fix For: master
>
>
> The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle tested cost based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-03-22 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206919#comment-15206919
 ] 

Kevin Risden commented on SOLR-8593:


Here is a separate approach that uses all of Calcite and the JDBCStream: 
https://github.com/risdenk/lucene-solr/compare/master...risdenk:calcite-sql-handler

It removes all the custom processing from SQLHandler, wraps Calcite in a 
JDBCStream, and executes the query.

There is something I learned about TestSQLHandler that I'm not sure is correct:
* quoted identifiers - like 'id' and 'text' aren't valid? These shouldn't be 
referring to columns?

Things to be explored with this approach:
* switch from a standard query in SolrEnumerator to a stream
* fix data types
* optimize cases like where
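
For reference, a rough sketch of what "wraps Calcite in a JDBCStream" could 
look like (the Calcite connection URL and model file are made up; the 
JDBCStream constructor used is the streaming-API one taking a connection URL, 
a query, and a sort):

{code}
import org.apache.solr.client.solrj.io.Tuple;
import org.apache.solr.client.solrj.io.comp.ComparatorOrder;
import org.apache.solr.client.solrj.io.comp.FieldComparator;
import org.apache.solr.client.solrj.io.stream.JDBCStream;

// Sketch: hand the SQL to Calcite through its JDBC driver and expose the
// result as a TupleStream. Error handling is omitted, and the Calcite
// driver is assumed to be on the classpath.
public class CalciteJdbcStreamSketch {
  public static void main(String[] args) throws Exception {
    JDBCStream stream = new JDBCStream(
        "jdbc:calcite:model=solr-model.json",           // hypothetical model
        "SELECT id, field_i FROM collection1 LIMIT 10", // query Calcite plans
        new FieldComparator("id", ComparatorOrder.ASCENDING));
    stream.open();
    try {
      for (Tuple t = stream.read(); !t.EOF; t = stream.read()) {
        System.out.println(t.getString("id"));
      }
    } finally {
      stream.close();
    }
  }
}
{code}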

> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
> Fix For: master
>
>
> The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle tested cost based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7729) ConcurrentUpdateSolrClient ignoring the collection parameter in some methods

2016-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206885#comment-15206885
 ] 

ASF GitHub Bot commented on SOLR-7729:
--

GitHub user nicolasgavalda opened a pull request:

https://github.com/apache/lucene-solr/pull/24

SOLR-7729: ConcurrentUpdateSolrClient ignoring the collection parameter in 
some methods

This is a fix for SOLR-7729.
I submitted a similar patch on JIRA for the 5.2.1 version, this is an 
updated version for the master branch.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/nicolasgavalda/lucene-solr SOLR-7729

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/24.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #24


commit c9c14eb96525b90271ce6acc4594db12d6799bf3
Author: Nicolas Gavalda 
Date:   2016-03-22T16:11:55Z

SOLR-7729: ConcurrentUpdateSolrClient ignoring the collection parameter
in some methods.




> ConcurrentUpdateSolrClient ignoring the collection parameter in some methods
> 
>
> Key: SOLR-7729
> URL: https://issues.apache.org/jira/browse/SOLR-7729
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: 5.1
>Reporter: Jorge Luis Betancourt Gonzalez
>  Labels: client, solrj
> Attachments: SOLR-7729-ConcurrentUpdateSolrClient-collection.patch
>
>
> Some of the methods in {{ConcurrentUpdateSolrClient}} accept an additional 
> {{collection}} parameter; some of these methods are {{add(String collection, 
> SolrInputDocument doc)}} and {{request(SolrRequest, String collection)}}. 
> This collection parameter is being ignored in these cases but works for 
> others, like {{commit(String collection)}}.
> [~elyograg] noted that:
> {quote} 
> Looking into how an update request actually gets added to the background
> queue in ConcurrentUpdateSolrClient, it appears that the "collection"
> information is ignored before the request is added to the queue.
> {quote}
> From the source, when a commit is issued or {{UpdateParams.WAIT_SEARCHER}} 
> is set in the request params, the collection parameter is used; otherwise 
> the request ({{UpdateRequest req}}) is queued without any regard for the 
> collection.
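
The gist of the fix, as an illustrative sketch (hypothetical names, not the 
actual patch): keep the collection with each queued update so the background 
runner can build the right URL:

{code}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch: pair each queued update with its target collection instead of
// dropping it, so the runner can append /<collection>/update to the base URL.
class QueuedUpdatesSketch {
  static class Update {
    final Object req;         // stands in for UpdateRequest
    final String collection;  // null means the client's default collection
    Update(Object req, String collection) {
      this.req = req;
      this.collection = collection;
    }
  }

  private final BlockingQueue<Update> queue = new LinkedBlockingQueue<>();

  void add(Object req, String collection) throws InterruptedException {
    queue.put(new Update(req, collection)); // collection is no longer ignored
  }

  String urlFor(String baseUrl, Update u) {
    return u.collection == null ? baseUrl + "/update"
                                : baseUrl + "/" + u.collection + "/update";
  }
}
{code}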



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request: SOLR-7729: ConcurrentUpdateSolrClient ig...

2016-03-22 Thread nicolasgavalda
GitHub user nicolasgavalda opened a pull request:

https://github.com/apache/lucene-solr/pull/24

SOLR-7729: ConcurrentUpdateSolrClient ignoring the collection parameter in 
some methods

This is a fix for SOLR-7729.
I submitted a similar patch on JIRA for the 5.2.1 version, this is an 
updated version for the master branch.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/nicolasgavalda/lucene-solr SOLR-7729

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/24.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #24


commit c9c14eb96525b90271ce6acc4594db12d6799bf3
Author: Nicolas Gavalda 
Date:   2016-03-22T16:11:55Z

SOLR-7729: ConcurrentUpdateSolrClient ignoring the collection parameter
in some methods.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7127) remove epsilon-based testing from lucene/spatial

2016-03-22 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206853#comment-15206853
 ] 

Robert Muir commented on LUCENE-7127:
-

also i want to defer removal of the multithreading in the test and other things 
as much as possible. this issue has got enough going on as-is :)

> remove epsilon-based testing from lucene/spatial
> 
>
> Key: LUCENE-7127
> URL: https://issues.apache.org/jira/browse/LUCENE-7127
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Robert Muir
> Attachments: LUCENE-7127.patch, LUCENE-7127.patch, LUCENE-7127.patch, 
> LUCENE-7127.patch
>
>
> Currently, the random tests here allow a TOLERANCE and will fail if the error 
> exceeds it. But this is not fun to debug! It also keeps the door wide open for 
> bugs to creep in.
> Alternatively, we can rework the tests like we did for sandbox/ points. This 
> means the test is aware of the index-time quantization and so it can demand 
> exact answers.
> It's more difficult at first, because even floating point error can cause a 
> failure. It requires us to maybe work through corner cases/rework 
> optimizations. If any epsilons must be added, they can be added to the 
> optimizations themselves (e.g. bounding box) instead of the user's result.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7127) remove epsilon-based testing from lucene/spatial

2016-03-22 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-7127:

Attachment: LUCENE-7127.patch

Updated patch: all tests pass including the new ones.

I used a simplified version of the same logic as LatLonPoint: the algorithm is 
sane. It also prevents the slowness in tests (and maybe in real queries?).

I will try to figure out how to benchmark, to ensure there is no big regression 
or anything. If there is a minor drop in perf, I am happy to optimize it 
further, same as LatLonPoint, but at the moment I think we need to focus on 
correctness.

> remove epsilon-based testing from lucene/spatial
> 
>
> Key: LUCENE-7127
> URL: https://issues.apache.org/jira/browse/LUCENE-7127
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Robert Muir
> Attachments: LUCENE-7127.patch, LUCENE-7127.patch, LUCENE-7127.patch, 
> LUCENE-7127.patch
>
>
> Currently, the random tests here allow a TOLERANCE and will fail if the error 
> exceeds it. But this is not fun to debug! It also keeps the door wide open for 
> bugs to creep in.
> Alternatively, we can rework the tests like we did for sandbox/ points. This 
> means the test is aware of the index-time quantization and so it can demand 
> exact answers.
> It's more difficult at first, because even floating point error can cause a 
> failure. It requires us to maybe work through corner cases/rework 
> optimizations. If any epsilons must be added, they can be added to the 
> optimizations themselves (e.g. bounding box) instead of the user's result.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 470 - Failure!

2016-03-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/470/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testAll

Error Message:
java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 
127.0.0.1:42952 within 3 ms

Stack Trace:
org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: 
Could not connect to ZooKeeper 127.0.0.1:42952 within 3 ms
at 
__randomizedtesting.SeedInfo.seed([C7412237209FC9C6:8B1F0C4E5B0F9603]:0)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:181)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:115)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:110)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:97)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.printLayout(AbstractDistribZkTestBase.java:283)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.distribTearDown(AbstractFullDistribZkTestBase.java:1494)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:973)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.TimeoutException: Could not connect to 
ZooKeeper 127.0.0.1:42952 within 3 ms
at 
org.apache.solr.common.cloud.ConnectionManager.waitForConnected(ConnectionManager.java:228)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:173)
... 37 more


FAILED:  

[jira] [Updated] (SOLR-8327) SolrDispatchFilter is not caching new state format, which results in live fetch from ZK per request if node does not contain core from collection

2016-03-22 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-8327:
-
Assignee: Varun Thacker  (was: Noble Paul)

> SolrDispatchFilter is not caching new state format, which results in live 
> fetch from ZK per request if node does not contain core from collection
> -
>
> Key: SOLR-8327
> URL: https://issues.apache.org/jira/browse/SOLR-8327
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Jessica Cheng Mallet
>Assignee: Varun Thacker
>  Labels: solrcloud
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-445) Update Handlers abort with bad documents

2016-03-22 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206749#comment-15206749
 ] 

Hoss Man commented on SOLR-445:
---


bq. What is the impact of many docs failing due to missing ID? Is there a test 
for that? I couldn't find one, but the diff is pretty big, I may have missed 
stuff.

good question -- there were checks of this in TolerantUpdateProcessorTest (from 
the early days of this patch) but i added some to 
TestTolerantUpdateProcessorCloud which uncovered a bug (now fixed) when 
checking isLeader -- see: cc2cd23ca2537324dc7e4afe6a29605bbf9f1cb8

bq. Don't know the answer to the "isLeader" question. I'd say the request would 
fail if leader changes in the middle of a request, but I'm not sure.

Hmm... can you explain more what you think/expect could go wrong with the 
isLeader code removed that wouldn't go wrong with the code as it is today?   I 
mean ... theoretically, even with the isLeader check as we have it right now, 
the leader could change between the time we do the isLeader check and the call 
to super.processAdd (where DUP will do its own isLeader check) ... or it could 
change (again) between the time super.processAdd/DUP.processAdd throws an 
exception and the time we make a decision whether to only track it or track and 
immediately re-throw.

I'm just not sure if that added code is really gaining us anything useful -- 
but if someone can help me understand (or better still: demonstrate with a 
test) a concrete situation where the current code does the correct thing, but 
removing the isLeader check is broken, then i'll be convinced.



Where things currently stand:

* The only remaining nocommits on the branch are questions about deleting the 
isLeader code, and questions about deleting DistribTolerantUpdateProcessorTest 
since we have other more robust cloud tests now.
* Even with the "retry after giving serachers time to reopen" logic in 
TestTolerantUpdateProcessorRandomCloud, i'm seeing a failure that reproduces 
consistently for me...{noformat}
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestTolerantUpdateProcessorRandomCloud 
-Dtests.method=testRandomUpdates -Dtests.seed=ECFD2B9118A542E7 
-Dtests.slow=true -Dtests.locale=bg -Dtests.timezone=Asia/Taipei 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8
   [junit4] FAILURE 6.00s | 
TestTolerantUpdateProcessorRandomCloud.testRandomUpdates <<<
   [junit4]> Throwable #1: java.lang.AssertionError: cloud client doc count 
doesn't match bitself cardinality expected:<22> but was:<23>
{noformat}...so i'm currently working to improve the logging and trace through 
the test to understand that.



> Update Handlers abort with bad documents
> 
>
> Key: SOLR-445
> URL: https://issues.apache.org/jira/browse/SOLR-445
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Affects Versions: 1.3
>Reporter: Will Johnson
>Assignee: Hoss Man
> Attachments: SOLR-445-3_x.patch, SOLR-445-alternative.patch, 
> SOLR-445-alternative.patch, SOLR-445-alternative.patch, 
> SOLR-445-alternative.patch, SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, 
> SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, 
> SOLR-445.patch, SOLR-445.patch, SOLR-445_3x.patch, solr-445.xml
>
>
> Has anyone run into the problem of handling bad documents / failures mid 
> batch.  Ie:
> <add>
>   <doc>
>     <field name="id">1</field>
>   </doc>
>   <doc>
>     <field name="id">2</field>
>     <field name="date">I_AM_A_BAD_DATE</field>
>   </doc>
>   <doc>
>     <field name="id">3</field>
>   </doc>
> </add>
> Right now solr adds the first doc and then aborts.  It would seem like it 
> should either fail the entire batch or log a message/return a code and then 
> continue on to add doc 3.  Option 1 would seem to be much harder to 
> accomplish and possibly require more memory while Option 2 would require more 
> information to come back from the API.  I'm about to dig into this but I 
> thought I'd ask to see if anyone had any suggestions, thoughts or comments.   
>  
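
For context on where this thread ended up (the TolerantUpdateProcessor work), 
a hedged SolrJ sketch of the "Option 2" behavior, assuming the processor is 
configured in an update chain named "tolerant-chain" and accepts a 
{{maxErrors}} request param:

{code}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.client.solrj.response.UpdateResponse;
import org.apache.solr.common.SolrInputDocument;

// Hedged sketch: with a tolerant processor in the chain, a batch containing
// a bad document should report the failure instead of aborting the batch.
// Chain name, param names, and response shape are assumptions.
public class TolerantBatchSketch {
  public static void main(String[] args) throws Exception {
    try (SolrClient client = new HttpSolrClient("http://localhost:8983/solr/collection1")) {
      UpdateRequest req = new UpdateRequest();
      req.setParam("update.chain", "tolerant-chain"); // assumed chain name
      req.setParam("maxErrors", "10");                // tolerate up to 10 bad docs

      for (String id : new String[] {"1", "2", "3"}) {
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", id);
        if ("2".equals(id)) {
          doc.addField("date_dt", "I_AM_A_BAD_DATE"); // the bad document
        }
        req.add(doc);
      }
      UpdateResponse rsp = req.process(client);
      // tolerated failures are expected to show up in the response header
      System.out.println(rsp.getResponseHeader());
    }
  }
}
{code}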



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8881) test & document (and improve as possible) behavior of TolerantUpdateProcessor while shard splitting is in progress

2016-03-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206744#comment-15206744
 ] 

ASF subversion and git services commented on SOLR-8881:
---

Commit c740e69622f3c0295498f02e76e42af6341ba333 in lucene-solr's branch 
refs/heads/jira/SOLR-445 from [~hossman_luc...@fucit.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c740e69 ]

SOLR-8881: replace nocommits with doc note and link to jira


> test & document (and improve as possible) behavior of TolerantUpdateProcessor 
> while shard splitting is in progress
> --
>
> Key: SOLR-8881
> URL: https://issues.apache.org/jira/browse/SOLR-8881
> Project: Solr
>  Issue Type: Improvement
>Reporter: Hoss Man
>
> TolerantUpdateProcessor is being added in SOLR-445 but it's not entirely 
> obvious what the behavior should be when using something like this is used in 
> conjunction with Shard Splitting.
> In particular what should / shouldn't happen if an update error occurs on a 
> subShardLeader (while the shard is actively being split) after the update 
> already succeded on the original shard leader.  when TUP is not used, this 
> error is propogated back to the client -- but if TUP is being used, then 
> should the subShardLeader' error be propogated back as a tolerated error, or 
> a hard failure?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8862) /live_nodes is populated too early to be very useful for clients -- CloudSolrClient (and MiniSolrCloudCluster.createCollection) need some other ephemeral zk node to know

2016-03-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206745#comment-15206745
 ] 

ASF subversion and git services commented on SOLR-8862:
---

Commit b6be74f2182c46a10f861556ea81d3ed1a79a308 in lucene-solr's branch 
refs/heads/jira/SOLR-445 from [~hossman_luc...@fucit.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b6be74f ]

SOLR-8862 work around.  Maybe something like this should be promoted into 
MiniSolrCloudCluster's start() method? or SolrCloudTestCase's configureCluster?


> /live_nodes is populated too early to be very useful for clients -- 
> CloudSolrClient (and MiniSolrCloudCluster.createCollection) need some other 
> ephemeral zk node to know which servers are "ready"
> --
>
> Key: SOLR-8862
> URL: https://issues.apache.org/jira/browse/SOLR-8862
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>
> {{/live_nodes}} is populated surprisingly early (and multiple times) in the 
> life cycle of a Solr node's startup, and as a result probably shouldn't be 
> used by {{CloudSolrClient}} (or other "smart" clients) for deciding which 
> servers are fair game for requests.
> We should either fix {{/live_nodes}} to be created later in the lifecycle, or 
> add some new ZK node for this purpose.
> {panel:title=original bug report}
> I haven't been able to make sense of this yet, but what i'm seeing in a new 
> SolrCloudTestCase subclass i'm writing is that the code below, which 
> (reasonably) attempts to create a collection immediately after configuring 
> the MiniSolrCloudCluster, gets a "SolrServerException: No live SolrServers 
> available to handle this request" -- in spite of the fact that (as far as i 
> can tell at first glance) MiniSolrCloudCluster's constructor is supposed to 
> block until all the servers are live...
> {code}
> configureCluster(numServers)
>   .addConfig(configName, configDir.toPath())
>   .configure();
> Map collectionProperties = ...;
> assertNotNull(cluster.createCollection(COLLECTION_NAME, numShards, 
> repFactor,
>configName, null, null, 
> collectionProperties));
> {code}
> {panel}
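
A workaround sketch of the kind hinted at in the commit above: poll the 
cluster state until the collection exists and every replica is active on a 
live node (timeout handling elided; method names per the ZkStateReader of 
this era):

{code}
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.cloud.ClusterState;
import org.apache.solr.common.cloud.DocCollection;
import org.apache.solr.common.cloud.Replica;

// Sketch: don't trust /live_nodes alone -- wait until the collection's
// replicas are all ACTIVE and hosted on live nodes before sending requests.
class WaitForCollectionSketch {
  static void waitForActive(CloudSolrClient client, String collection)
      throws InterruptedException {
    while (true) {
      ClusterState state = client.getZkStateReader().getClusterState();
      DocCollection coll = state.getCollectionOrNull(collection);
      if (coll != null && allReplicasActive(coll, state)) return;
      Thread.sleep(250); // retry until the cluster settles
    }
  }

  static boolean allReplicasActive(DocCollection coll, ClusterState state) {
    for (Replica r : coll.getReplicas()) {
      if (r.getState() != Replica.State.ACTIVE
          || !state.getLiveNodes().contains(r.getNodeName())) {
        return false;
      }
    }
    return true;
  }
}
{code}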



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8539) Solr queries swallows up OutOfMemoryErrors

2016-03-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206742#comment-15206742
 ] 

ASF subversion and git services commented on SOLR-8539:
---

Commit fe54da0b58ed18a38f3dd436dd3f30fbee9acbbf in lucene-solr's branch 
refs/heads/jira/SOLR-445 from [~hossman_luc...@fucit.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fe54da0 ]

SOLR-445: remove nocommits related to OOM trapping since SOLR-8539 has 
concluded that this isn't a thing the java code actually needs to be defensive 
of


> Solr queries swallows up OutOfMemoryErrors
> --
>
> Key: SOLR-8539
> URL: https://issues.apache.org/jira/browse/SOLR-8539
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
> Fix For: 5.5, master
>
> Attachments: SOLR-8539.patch
>
>
> I was testing a crazy surround query and was hitting OOMs easily with the 
> query. However I saw that the OOM killer wasn't triggered. Here is the stack 
> trace of the error on solr 5.4:
> {code}
> WARN  - 2016-01-12 18:37:03.920; [   x:techproducts] 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3; 
> java.lang.OutOfMemoryError: Java heap space
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.addConditionWaiter(AbstractQueuedSynchronizer.java:1855)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2068)
> at 
> org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:389)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:531)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.access$700(QueuedThreadPool.java:47)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:590)
> at java.lang.Thread.run(Thread.java:745)
> ERROR - 2016-01-12 18:37:03.922; [   x:techproducts] 
> org.apache.solr.common.SolrException; null:java.lang.RuntimeException: 
> java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.solr.servlet.HttpSolrCall.sendError(HttpSolrCall.java:611)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:472)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.lucene.codecs.lucene50.Lucene50PostingsReader.newTermState(Lucene50PostingsReader.java:149)
> at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.(SegmentTermsEnumFrame.java:100)
> at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnum.getFrame(SegmentTermsEnum.java:215)
> at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnum.pushFrame(SegmentTermsEnum.java:241)
> at 
> 

[jira] [Commented] (SOLR-445) Update Handlers abort with bad documents

2016-03-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206743#comment-15206743
 ] 

ASF subversion and git services commented on SOLR-445:
--

Commit 5d93384e724b6f611270e212a4f9bd5b00c38e85 in lucene-solr's branch 
refs/heads/jira/SOLR-445 from [~hossman_luc...@fucit.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5d93384 ]

SOLR-445: fix exception msg when CloudSolrClient does async updates that 
(cumulatively) exceed maxErrors

I initially thought it would make sense to refactor 
DistributedUpdatesAsyncException into solr-common and re-use it here, but when 
I started down that path I realized it didn't make any sense since there aren't 
actual exceptions to wrap client side.


> Update Handlers abort with bad documents
> 
>
> Key: SOLR-445
> URL: https://issues.apache.org/jira/browse/SOLR-445
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Affects Versions: 1.3
>Reporter: Will Johnson
>Assignee: Hoss Man
> Attachments: SOLR-445-3_x.patch, SOLR-445-alternative.patch, 
> SOLR-445-alternative.patch, SOLR-445-alternative.patch, 
> SOLR-445-alternative.patch, SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, 
> SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, 
> SOLR-445.patch, SOLR-445.patch, SOLR-445_3x.patch, solr-445.xml
>
>
> Has anyone run into the problem of handling bad documents / failures mid 
> batch.  Ie:
> 
>   
> 1
>   
>   
> 2
> I_AM_A_BAD_DATE
>   
>   
> 3
>   
> 
> Right now solr adds the first doc and then aborts.  It would seem like it 
> should either fail the entire batch or log a message/return a code and then 
> continue on to add doc 3.  Option 1 would seem to be much harder to 
> accomplish and possibly require more memory while Option 2 would require more 
> information to come back from the API.  I'm about to dig into this but I 
> thought I'd ask to see if anyone had any suggestions, thoughts or comments.   
>  
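
The long-standing client-side workaround, until the tolerant behavior discussed in 
this issue lands, is to fall back to per-document adds so a bad document fails 
alone. A minimal SolrJ sketch (the client and document list are assumed to exist):

{code}
import java.util.List;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.common.SolrInputDocument;

// Sketch: add documents one at a time so a bad document fails by itself
// instead of aborting the rest of the batch.
class TolerantIndexing {
  static void indexAll(SolrClient client, List<SolrInputDocument> docs) throws Exception {
    for (SolrInputDocument doc : docs) {
      try {
        client.add(doc);                 // a failure affects only this doc
      } catch (Exception e) {
        System.err.println("skipping doc " + doc.getFieldValue("id") + ": " + e);
      }
    }
    client.commit();                     // persist the documents that succeeded
  }
}
{code}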



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-445) Update Handlers abort with bad documents

2016-03-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206741#comment-15206741
 ] 

ASF subversion and git services commented on SOLR-445:
--

Commit fe54da0b58ed18a38f3dd436dd3f30fbee9acbbf in lucene-solr's branch 
refs/heads/jira/SOLR-445 from [~hossman_luc...@fucit.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fe54da0 ]

SOLR-445: remove nocommits related to OOM trapping since SOLR-8539 has 
concluded that this isn't a thing the java code actually needs to be defensive 
of


> Update Handlers abort with bad documents
> 
>
> Key: SOLR-445
> URL: https://issues.apache.org/jira/browse/SOLR-445
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Affects Versions: 1.3
>Reporter: Will Johnson
>Assignee: Hoss Man
> Attachments: SOLR-445-3_x.patch, SOLR-445-alternative.patch, 
> SOLR-445-alternative.patch, SOLR-445-alternative.patch, 
> SOLR-445-alternative.patch, SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, 
> SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, 
> SOLR-445.patch, SOLR-445.patch, SOLR-445_3x.patch, solr-445.xml
>
>
> Has anyone run into the problem of handling bad documents / failures mid 
> batch.  Ie:
> 
>   
> 1
>   
>   
> 2
> I_AM_A_BAD_DATE
>   
>   
> 3
>   
> 
> Right now solr adds the first doc and then aborts.  It would seem like it 
> should either fail the entire batch or log a message/return a code and then 
> continue on to add doc 3.  Option 1 would seem to be much harder to 
> accomplish and possibly require more memory while Option 2 would require more 
> information to come back from the API.  I'm about to dig into this but I 
> thought I'd ask to see if anyone had any suggestions, thoughts or comments.   
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-445) Update Handlers abort with bad documents

2016-03-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206746#comment-15206746
 ] 

ASF subversion and git services commented on SOLR-445:
--

Commit cc2cd23ca2537324dc7e4afe6a29605bbf9f1cb8 in lucene-solr's branch 
refs/heads/jira/SOLR-445 from [~hossman_luc...@fucit.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=cc2cd23 ]

SOLR-445: cloud test & bug fix for docs missing their uniqueKey field


> Update Handlers abort with bad documents
> 
>
> Key: SOLR-445
> URL: https://issues.apache.org/jira/browse/SOLR-445
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Affects Versions: 1.3
>Reporter: Will Johnson
>Assignee: Hoss Man
> Attachments: SOLR-445-3_x.patch, SOLR-445-alternative.patch, 
> SOLR-445-alternative.patch, SOLR-445-alternative.patch, 
> SOLR-445-alternative.patch, SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, 
> SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, 
> SOLR-445.patch, SOLR-445.patch, SOLR-445_3x.patch, solr-445.xml
>
>
> Has anyone run into the problem of handling bad documents / failures mid 
> batch.  Ie:
> 
>   
> 1
>   
>   
> 2
> I_AM_A_BAD_DATE
>   
>   
> 3
>   
> 
> Right now solr adds the first doc and then aborts.  It would seem like it 
> should either fail the entire batch or log a message/return a code and then 
> continue on to add doc 3.  Option 1 would seem to be much harder to 
> accomplish and possibly require more memory while Option 2 would require more 
> information to come back from the API.  I'm about to dig into this but I 
> thought I'd ask to see if anyone had any suggestions, thoughts or comments.   
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-03-22 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206730#comment-15206730
 ] 

Joel Bernstein edited comment on SOLR-8593 at 3/22/16 4:38 PM:
---

[~risdenk] and I have been looking into different approaches for this ticket. 

One of the approaches is to embed the Calcite SQL parser and optimizer inside 
the SQLHandler. The entry point for this appears to be:

https://calcite.apache.org/apidocs/org/apache/calcite/tools/Planner.html

Using this approach we would need to implement two things:

1) A CatalogReader, which the calcite validator and optimizer will use to do 
its job. The underlying implementation of this should work for the JDBC driver 
also, so we kill two big birds with one stone when this is implemented.

2) A custom RelVisitor, which will rewrite the relational algebra tree 
(RelNode), created by the optimizer. The RelNode tree will need to be mapped to 
the Streaming API. Since the Streaming API already supports parallel relational 
algebra, this should be fairly straightforward.

This approach would leave the Solr JDBC driver basically as it is, but provide 
all the hooks needed to finish off the remaining Catalog metadata methods.





was (Author: joel.bernstein):
[~risdenk] and I have been looking into different approaches for this ticket. 

One of the approaches is to embed the Calcite SQL parser and optimizer inside 
the SQLHandler. The entry point for this appears to be:

https://calcite.apache.org/apidocs/org/apache/calcite/tools/Planner.html

Using this approach we would need to implement two things:

1) A CatalogReader, which the calcite validator and optimizer will use to do 
it's job. The underlying implementation of this should work for the JDBC driver 
also, so we kill two big birds with one stone when this implemented.

2) A custom RelVisitor, which will rewrite the relational algebra tree 
(RelNode), created by the optimizer. The RelNode tree will need to be mapped to 
the Streaming API. Since the Streaming API already supports parallel relational 
algebra this should be fairly straight forward.

This approach would leave the Solr JDBC driver basically as it is, but provide 
all the hooks needed to finish off the remaining Catalog metadata methods.




> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
> Fix For: master
>
>
> The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle tested cost based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-03-22 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206730#comment-15206730
 ] 

Joel Bernstein commented on SOLR-8593:
--

[~risdenk] and I have been looking into different approaches for this ticket. 

One of the approaches is to embed the Calcite SQL parser and optimizer inside 
the SQLHandler. The entry point for this appears to be:

https://calcite.apache.org/apidocs/org/apache/calcite/tools/Planner.html

Using this approach we would need to implement two things:

1) A CatalogReader, which the calcite validator and optimizer will use to do 
its job. The underlying implementation of this should work for the JDBC driver 
also, so we kill two big birds with one stone when this is implemented.

2) A custom RelVisitor, which will rewrite the relational algebra tree 
(RelNode), created by the optimizer. The RelNode tree will need to be mapped to 
the Streaming API. Since the Streaming API already supports parallel relational 
algebra, this should be fairly straightforward.

This approach would leave the Solr JDBC driver basically as it is, but provide 
all the hooks needed to finish off the remaining Catalog metadata methods.
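
To make the entry point concrete, a rough sketch of driving that Planner API is 
below; the Solr-backed schema/CatalogReader wiring is exactly the part this ticket 
would add, so the config here is a placeholder:

{code}
import org.apache.calcite.rel.RelRoot;
import org.apache.calcite.sql.SqlNode;
import org.apache.calcite.tools.FrameworkConfig;
import org.apache.calcite.tools.Frameworks;
import org.apache.calcite.tools.Planner;

// Sketch: parse -> validate -> optimize with Calcite, then walk the resulting
// RelNode tree (e.g. with a custom RelVisitor) and map it onto TupleStreams.
class CalcitePlannerSketch {
  static RelRoot plan(String sql) throws Exception {
    FrameworkConfig config = Frameworks.newConfigBuilder()
        // .defaultSchema(solrSchema)    // hypothetical Solr-backed schema
        .build();
    Planner planner = Frameworks.getPlanner(config);
    SqlNode parsed = planner.parse(sql);
    SqlNode validated = planner.validate(parsed);
    return planner.rel(validated);       // relational algebra tree (RelRoot)
  }
}
{code}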




> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
> Fix For: master
>
>
> The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle tested cost based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7683) Introduce support to identify Solr internal request types

2016-03-22 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206707#comment-15206707
 ] 

Hrishikesh Gadre commented on SOLR-7683:


I am going to resume work on this one.

> Introduce support to identify Solr internal request types
> -
>
> Key: SOLR-7683
> URL: https://issues.apache.org/jira/browse/SOLR-7683
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Hrishikesh Gadre
>Assignee: Ramkumar Aiyengar
> Attachments: SOLR-7683.patch, SOLR-7683.patch
>
>
> SOLR-7344 is introducing support to partition the Jetty worker pool to 
> enforce the number of concurrent requests for various types (e.g. 
> Internal_Querying, Internal_Indexing, External etc.). For this we need to 
> identify requests sent between Solr servers and their types (i.e. 
> Querying/Indexing etc.).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8884) fl=score returns a different value than that of Explain's

2016-03-22 Thread Alessandro Benedetti (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206710#comment-15206710
 ] 

Alessandro Benedetti commented on SOLR-8884:


In my case it happened when testing a re-ranking capability.
The explain debug ranking was the correct one, with correct and expected scores.
The results in the response were wrongly scored and ranked.
I've never gone back to that, and while I was testing, starting and restarting, 
the problem disappeared quite suddenly, so I was not able to reproduce it.
I know the information I added is almost null; hopefully we can get more 
evidence from other people!

Cheers

> fl=score returns a different value than that of Explain's
> -
>
> Key: SOLR-8884
> URL: https://issues.apache.org/jira/browse/SOLR-8884
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 5.5
>Reporter: Ahmet Arslan
> Attachments: debug.xml
>
>
> Some of the folks 
> [reported|http://find.searchhub.org/document/80666f5c3b86ddda] that sometimes 
> explain's score can be different than the score requested by fields 
> parameter. Interestingly, Explain's scores would create a different ranking 
> than the original result list. This is something users experience, but it 
> cannot be re-produced deterministically.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-master - Build # 968 - Still Failing

2016-03-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/968/

3 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=29925, name=collection4, 
state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=29925, name=collection4, state=RUNNABLE, 
group=TGRP-CollectionsAPIDistributedZkTest]
Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:51210/_fyq/j: collection already exists: 
awholynewstresscollection_collection4_0
at __randomizedtesting.SeedInfo.seed([8BB222E3EAD78653]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:577)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:372)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:325)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1593)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1614)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:970)


FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Error from server at http://127.0.0.1:35801/rr_et/ud/collection1: Error opening 
new searcher

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:35801/rr_et/ud/collection1: Error opening new 
searcher
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:577)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:484)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:463)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.commit(AbstractFullDistribZkTestBase.java:1523)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testIndexingBatchPerRequestWithHttpSolrClient(FullSolrCloudDistribCmdsTest.java:664)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test(FullSolrCloudDistribCmdsTest.java:152)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:996)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:971)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[jira] [Resolved] (LUCENE-7128) Fix spatial and sandbox geo APIs to consistently take lat before lon

2016-03-22 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-7128.

Resolution: Fixed

Thanks [~rcmuir]!

> Fix spatial and sandbox geo APIs to consistently take lat before lon
> 
>
> Key: LUCENE-7128
> URL: https://issues.apache.org/jira/browse/LUCENE-7128
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master, 6.0
>
> Attachments: LUCENE-7128.patch
>
>
> Right now sometimes it's lat, lon and other times it's lon, lat which
> is just asking for horrors of biblical proportions.
> I went through and carefully fixed them to take lat, lon in all places
> I could find, and renamed y -> lat and x -> lon.  I also removed
> unused code, or code only called from tests: I think Lucene shouldn't
> just export spatial APIs unless we also ourselves need them for
> indexing and searching.  Finally, I tried to shrink wrap the APIs,
> making previously public apis private if nobody external invoked them.
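
As a quick illustration of the new convention, a minimal sketch using the sandbox 
LatLonPoint (the field name and coordinates are arbitrary):

{code}
import org.apache.lucene.document.Document;
import org.apache.lucene.document.LatLonPoint;   // lives in the sandbox module

// After this change the argument order is uniform: latitude first, then longitude.
class LatLonOrder {
  static Document newYorkDoc() {
    Document doc = new Document();
    doc.add(new LatLonPoint("location", 40.7128, -74.0060));  // lat, lon
    return doc;
  }
}
{code}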



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7128) Fix spatial and sandbox geo APIs to consistently take lat before lon

2016-03-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206672#comment-15206672
 ] 

ASF subversion and git services commented on LUCENE-7128:
-

Commit 09013e09761c1493342826088bef0c62c9233810 in lucene-solr's branch 
refs/heads/branch_6_0 from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=09013e0 ]

LUCENE-7128: clean up new geo APIs to consistently take lat before lon, make 
methods private when possible, use lat/lon instead of y/x naming, remove unused 
code


> Fix spatial and sandbox geo APIs to consistently take lat before lon
> 
>
> Key: LUCENE-7128
> URL: https://issues.apache.org/jira/browse/LUCENE-7128
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master, 6.0
>
> Attachments: LUCENE-7128.patch
>
>
> Right now sometimes it's lat, lon and other times it's lon, lat which
> is just asking for horrors of biblical proportions.
> I went through and carefully fixed them to take lat, lon in all places
> I could find, and renamed y -> lat and x -> lon.  I also removed
> unused code, or code only called from tests: I think Lucene shouldn't
> just export spatial APIs unless we also ourselves need them for
> indexing and searching.  Finally, I tried to shrink wrap the APIs,
> making previously public apis private if nobody external invoked them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8884) fl=score returns a different value than that of Explain's

2016-03-22 Thread Ahmet Arslan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmet Arslan updated SOLR-8884:
---
Attachment: debug.xml

Here is Rajesh's response file that demonstrates the problem.

> fl=score returns a different value than that of Explain's
> -
>
> Key: SOLR-8884
> URL: https://issues.apache.org/jira/browse/SOLR-8884
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 5.5
>Reporter: Ahmet Arslan
> Attachments: debug.xml
>
>
> Some of the folks 
> [reported|http://find.searchhub.org/document/80666f5c3b86ddda] that sometimes 
> explain's score can be different than the score requested by fields 
> parameter. Interestingly, Explain's scores would create a different ranking 
> than the original result list. This is something users experience, but it 
> cannot be re-produced deterministically.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8884) fl=score returns a different value than that of Explain's

2016-03-22 Thread Ahmet Arslan (JIRA)
Ahmet Arslan created SOLR-8884:
--

 Summary: fl=score returns a different value than that of Explain's
 Key: SOLR-8884
 URL: https://issues.apache.org/jira/browse/SOLR-8884
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 5.5
Reporter: Ahmet Arslan


Some of the folks 
[reported|http://find.searchhub.org/document/80666f5c3b86ddda] that sometimes 
explain's score can be different than the score requested by fields parameter. 
Interestingly, Explain's scores would create a different ranking than the 
original result list. This is something users experience, but it cannot be 
re-produced deterministically.
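
For anyone trying to reproduce this, the comparison is between the score in the 
result list and the score in the explain section. A minimal SolrJ sketch (the 
client and query string are placeholders):

{code}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.response.QueryResponse;

// Sketch: fetch fl=score alongside debug output so the two score sources can
// be compared document by document.
class ScoreVsExplain {
  static void compare(SolrClient client, String queryString) throws Exception {
    SolrQuery q = new SolrQuery(queryString);
    q.setFields("id", "score");
    q.set("debugQuery", "true");             // adds the explain section
    QueryResponse rsp = client.query(q);
    System.out.println(rsp.getResults());    // fl=score values
    System.out.println(rsp.getExplainMap()); // explain scores, keyed by id
  }
}
{code}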



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8883) Create a convenient way to get the configset name from the CoreDescriptor

2016-03-22 Thread Erick Erickson (JIRA)
Erick Erickson created SOLR-8883:


 Summary: Create a convenient way to get the configset name from 
the CoreDescriptor
 Key: SOLR-8883
 URL: https://issues.apache.org/jira/browse/SOLR-8883
 Project: Solr
  Issue Type: Improvement
Reporter: Erick Erickson


Currently, if you are in, say, an UpdateRequestProcessorChain, getting the 
configset from the CoreDescriptor you have to work with is convoluted and 
requires a ZK read. 

Perhaps enhance CloudConfigSetService? Or CoreDescriptor itself?
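
For reference, the roundabout path available today looks roughly like the sketch 
below, assuming you can reach the ZkController; the extra ZK read is exactly what 
this issue wants to avoid:

{code}
import org.apache.solr.cloud.ZkController;
import org.apache.solr.core.CoreDescriptor;

// Sketch of the current workaround: resolve the collection from the
// CoreDescriptor, then read its configName out of ZooKeeper.
class ConfigSetLookup {
  static String configSetName(CoreDescriptor cd, ZkController zk) throws Exception {
    String collection = cd.getCloudDescriptor().getCollectionName();
    return zk.getZkStateReader().readConfigName(collection);   // the ZK read
  }
}
{code}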



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7128) Fix spatial and sandbox geo APIs to consistently take lat before lon

2016-03-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206627#comment-15206627
 ] 

ASF subversion and git services commented on LUCENE-7128:
-

Commit c5da271b9d9b05e31a592b8bbdb416529a2c1770 in lucene-solr's branch 
refs/heads/branch_6x from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c5da271 ]

LUCENE-7128: clean up new geo APIs to consistently take lat before lon, make 
methods private when possible, use lat/lon instead of y/x naming, remove unused 
code


> Fix spatial and sandbox geo APIs to consistently take lat before lon
> 
>
> Key: LUCENE-7128
> URL: https://issues.apache.org/jira/browse/LUCENE-7128
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master, 6.0
>
> Attachments: LUCENE-7128.patch
>
>
> Right now sometimes it's lat, lon and other times it's lon, lat which
> is just asking for horrors of biblical proportions.
> I went through and carefully fixed them to take lat, lon in all places
> I could find, and renamed y -> lat and x -> lon.  I also removed
> unused code, or code only called from tests: I think Lucene shouldn't
> just export spatial APIs unless we also ourselves need them for
> indexing and searching.  Finally, I tried to shrink wrap the APIs,
> making previously public apis private if nobody external invoked them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Why does CloudConfigSetService.configName return the collection?

2016-03-22 Thread Erick Erickson
I'll open a JIRA in a second. My puzzlement was that the mis-match between
the method name and the returned value struck me as odd...

Not a big deal, the root question is how to get the configset name
from a core descriptor.

Erick

On Tue, Mar 22, 2016 at 1:17 AM, Alan Woodward  wrote:
> The configset name is only used in logging at the moment.  I agree that it
> would be useful to get the config that it was loaded from as well as the
> collection it's to be used for, though.  I'd say open a JIRA.
>
> Alan Woodward
> www.flax.co.uk
>
>
> On 22 Mar 2016, at 01:21, Erick Erickson wrote:
>
> A client pointed this out. For arcane reasons they wanted to get the
> configset name given a CoreDescriptor and were trying to use
> CloudConfigSetService.configName(). The code for that method is:
>
> public String configName(CoreDescriptor cd) {
>  return "collection " + cd.getCloudDescriptor().getCollectionName();
> }
>
>
> Does this ring any bells? Should I raise a JIRA? I'm not completely
> sure what the right thing to do here is, but it seems odd to return
> the collection from this method.
>
> Alan Woodward: This is part of SOLR-4478, do you have any recollection
> of why it was done this way?
>
> Erick
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7128) Fix spatial and sandbox geo APIs to consistently take lat before lon

2016-03-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206617#comment-15206617
 ] 

ASF subversion and git services commented on LUCENE-7128:
-

Commit 275a259b1fa0d94aec95f554c2c7451b8678bd8e in lucene-solr's branch 
refs/heads/master from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=275a259 ]

LUCENE-7128: clean up new geo APIs to consistently take lat before lon, make 
methods private when possible, use lat/lon instead of y/x naming, remove unused 
code


> Fix spatial and sandbox geo APIs to consistently take lat before lon
> 
>
> Key: LUCENE-7128
> URL: https://issues.apache.org/jira/browse/LUCENE-7128
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master, 6.0
>
> Attachments: LUCENE-7128.patch
>
>
> Right now sometimes it's lat, lon and other times it's lon, lat which
> is just asking for horrors of biblical proportions.
> I went through and carefully fixed them to take lat, lon in all places
> I could find, and renamed y -> lat and x -> lon.  I also removed
> unused code, or code only called from tests: I think Lucene shouldn't
> just export spatial APIs unless we also ourselves need them for
> indexing and searching.  Finally, I tried to shrink wrap the APIs,
> making previously public apis private if nobody external invoked them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7129) Prevent @lucene.internal annotated classes from being in Javadocs

2016-03-22 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206579#comment-15206579
 ] 

Robert Muir commented on LUCENE-7129:
-

I think this is all too complex. People will link to the internal classes, then 
we will have broken links.

I would prefer an internal package: we could choose not to export it with the 
java 9 module system, etc. This has already been done before and is easier to 
reason about. Also, you can look at a class's full name and know instantly it's 
internal, just like in the JDK.

> Prevent @lucene.internal annotated classes from being in Javadocs
> -
>
> Key: LUCENE-7129
> URL: https://issues.apache.org/jira/browse/LUCENE-7129
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/javadocs
>Reporter: David Smiley
>Priority: Minor
>
> It would be cool if we could prevent {{@lucene.internal}} classes from 
> appearing in Javadocs we publish.  This would further discourage use of 
> internal Lucene/Solr classes that are public not for public consumption but 
> only  public so that the code can be accessed across Lucene/Solr's packages.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-7129) Prevent @lucene.internal annotated classes from being in Javadocs

2016-03-22 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206543#comment-15206543
 ] 

Uwe Schindler edited comment on LUCENE-7129 at 3/22/16 3:31 PM:


Nevertheless, the whole "filtering" javadocs approach is not useful to prevent 
people from using the APIs. Nothing can stop people or their stupid Eclipse 
autocompleter from using the classes we marked as experimental.

The correct fix for this is coming with Java 9, but we can start implementing 
it before:
- Move all internal APIs to a separate package (this is what Robert wants to do 
anyways), e.g., {{org.apache.lucene.internal}}.
- Don't export this package in {{module-info.java}}, so it becomes completely 
invisible to anybody using the JAR file as a module ({{-modulepath}} instead of 
{{-classpath}}). Only lucene's own modules are allowed to refer to those 
packages through explicit export "to".
- Javadoc would work automatically, because Java 9 Javadocs does not document 
non-exported packages.

This approach should be done at some point anyways, but it needs some 
refactoring of package names. Most is fine, but some JAR files share packages 
with others. This is no longer possible with Java 9 modules! E.g., Misc modules 
{{oal.index}} package would need to be renamed, because it conflicts with the 
module exported by lucene-core.jar.


was (Author: thetaphi):
Nevertheless, the whole "filtering" javadocs approach is not useful to prevent 
people from using the APIs. Nothing can forbid people or their stupid Eclipse 
autocompleter to use the classes we marked as experimental.

The correct fix for this is coming with Java 9, but we can start implementing 
it before:
- Move all internal APIs to a separate package (this is what Robert wants to do 
anyways), e.g., {{org.apache.lucene.internal}}.
- Don't export this package in {{module-info.java}}, so it gets completely 
invisible to anybody using the JAR file as a module ({{-modulepath}} instead of 
{{-classpath}}). Only lucene's own modules are allowed to refer to those 
modules.
- Javadoc would work automatically, because Java 9 Javadocs does not document 
non-exported packages.

This approach should be done at some point anyways, but it needs some 
refactoring of package names. Most is fine, but some JAR files share packages 
with others. This is no longer possible with Java 9 modules! E.g., Misc modules 
{{oal.index}} package would need to be renamed, because it conflicts with the 
module exported by lucene-core.jar.

> Prevent @lucene.internal annotated classes from being in Javadocs
> -
>
> Key: LUCENE-7129
> URL: https://issues.apache.org/jira/browse/LUCENE-7129
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/javadocs
>Reporter: David Smiley
>Priority: Minor
>
> It would be cool if we could prevent {{@lucene.internal}} classes from 
> appearing in Javadocs we publish.  This would further discourage use of 
> internal Lucene/Solr classes that are public not for public consumption but 
> only  public so that the code can be accessed across Lucene/Solr's packages.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7129) Prevent @lucene.internal annotated classes from being in Javadocs

2016-03-22 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206555#comment-15206555
 ] 

Uwe Schindler commented on LUCENE-7129:
---

bq. Uwe, your "Quick'n'dirty" method would only work for entire classes that 
have the @lucene.internal annotation - there are places where the annotation is 
on individual methods.

I know; because of that I also gave the 2nd way ("clean approach"). This is 
documented that way on Javadoc's documentation [FAQ 
page|http://www.oracle.com/technetwork/java/javase/documentation/index-137483.html#exclude]
 at Oracle. They refer to a "custom doclet" to do more filtering, but don't 
give an example. The example above is a possible "cheap & elegant" 
implementation - of course violating forbiddenapis (internal packages, which we 
would have to exclude).

> Prevent @lucene.internal annotated classes from being in Javadocs
> -
>
> Key: LUCENE-7129
> URL: https://issues.apache.org/jira/browse/LUCENE-7129
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/javadocs
>Reporter: David Smiley
>Priority: Minor
>
> It would be cool if we could prevent {{@lucene.internal}} classes from 
> appearing in Javadocs we publish.  This would further discourage use of 
> internal Lucene/Solr classes that are public not for public consumption but 
> only  public so that the code can be accessed across Lucene/Solr's packages.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7129) Prevent @lucene.internal annotated classes from being in Javadocs

2016-03-22 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206543#comment-15206543
 ] 

Uwe Schindler commented on LUCENE-7129:
---

Nevertheless, the whole "filtering" javadocs approach is not useful to prevent 
people from using the APIs. Nothing can stop people or their stupid Eclipse 
autocompleter from using the classes we marked as experimental.

The correct fix for this is coming with Java 9, but we can start implementing 
it before:
- Move all internal APIs to a separate package (this is what Robert wants to do 
anyways), e.g., {{org.apache.lucene.internal}}.
- Don't export this package in {{module-info.java}}, so it becomes completely 
invisible to anybody using the JAR file as a module ({{-modulepath}} instead of 
{{-classpath}}). Only lucene's own modules are allowed to refer to those 
modules.
- Javadoc would work automatically, because Java 9 Javadocs does not document 
non-exported packages.

This approach should be done at some point anyways, but it needs some 
refactoring of package names. Most is fine, but some JAR files share packages 
with others. This is no longer possible with Java 9 modules! E.g., Misc modules 
{{oal.index}} package would need to be renamed, because it conflicts with the 
module exported by lucene-core.jar.
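
A sketch of what that could look like once module descriptors exist (module and 
package names here are hypothetical):

{code}
// Hypothetical module-info.java for lucene-core under Java 9 (Jigsaw):
module org.apache.lucene.core {
  exports org.apache.lucene.index;
  exports org.apache.lucene.search;
  // org.apache.lucene.internal is deliberately NOT exported: invisible on the
  // module path, and omitted from Java 9 javadoc output.
}
{code}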

> Prevent @lucene.internal annotated classes from being in Javadocs
> -
>
> Key: LUCENE-7129
> URL: https://issues.apache.org/jira/browse/LUCENE-7129
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/javadocs
>Reporter: David Smiley
>Priority: Minor
>
> It would be cool if we could prevent {{@lucene.internal}} classes from 
> appearing in Javadocs we publish.  This would further discourage use of 
> internal Lucene/Solr classes that are public not for public consumption but 
> only  public so that the code can be accessed across Lucene/Solr's packages.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7129) Prevent @lucene.internal annotated classes from being in Javadocs

2016-03-22 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206530#comment-15206530
 ] 

Steve Rowe commented on LUCENE-7129:


Uwe, your "Quick'n'dirty" method would only work for entire classes that have 
the {{@lucene.internal}} annotation - there are places where the annotation is 
on individual methods.  This method would either over-exclude (annotation 
occurs anywhere in file) or under-exclude (testing only for class annotation).

> Prevent @lucene.internal annotated classes from being in Javadocs
> -
>
> Key: LUCENE-7129
> URL: https://issues.apache.org/jira/browse/LUCENE-7129
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/javadocs
>Reporter: David Smiley
>Priority: Minor
>
> It would be cool if we could prevent {{@lucene.internal}} classes from 
> appearing in Javadocs we publish.  This would further discourage use of 
> internal Lucene/Solr classes that are public not for public consumption but 
> only  public so that the code can be accessed across Lucene/Solr's packages.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-7129) Prevent @lucene.internal annotated classes from being in Javadocs

2016-03-22 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206514#comment-15206514
 ] 

Uwe Schindler edited comment on LUCENE-7129 at 3/22/16 3:11 PM:


There are two ways to do this:

- Quick'n'dirty: We have to change our filesets (for the sources parameter) before 
invoking the javadocs macro. As Javadocs processes all java files one by one, 
we may use a fileset instead of the generic packageset element inside the 
javadoc, excluding those java files we don't want to have. The fileset with the 
java files we pass to javadoc would just need to filter on the contents (file 
contains "@lucene.internal" as an additional 
[selector|https://ant.apache.org/manual/Types/selectors.html] in the fileset 
definition). Of course this slows down a bit, because Ant has to read all files 
while building the fileset.

- More sophisticated: Use our own Doclet that delegates everything to the standard 
doclet, but handles the tags we want to exclude. Somebody coded this; we could 
add it to our tools folder: http://sixlegs.com/blog/java/exclude-javadoc-tag.html 
(maybe it's somewhere in Maven). The issue with doclets is the API hell with 
interfaces, but this guy has a good way around that (he dynamically creates a 
proxy for every interface and uses it to delegate).

was (Author: thetaphi):
There are two ways to do this:

- Quick'n'dirty: We have to change our filesets (for sources parameter) before 
invoking the javadocs macro. As Javadocs processes all java files one by one, 
we may use a fileset instead of the generic packageset element iside the 
javadoc, excluding those class files we don't want to have. The fileset with 
tha java files we pass to javadoc would just need  to filter on the contents 
(file contains "@lucene.internal" as an additional 
[selector|https://ant.apache.org/manual/Types/selectors.html] in the fileset 
defintion). Of course this slows down a bit, because Ant has to read all files 
while building the fileset.

- More sophisticated: Use our own Doclet that delegates everything to standard 
doclet, but handles the tags we want to eclude. Somebody coded this; we could 
add to our tools folder: http://sixlegs.com/blog/java/exclude-javadoc-tag.html 
(maybe its somewehere in Maven). The issue with doclets is the API hell with 
interfaces, but this guy has a good way around that (he dynamically creates a 
proxy for every interface and uses it to delegate).

> Prevent @lucene.internal annotated classes from being in Javadocs
> -
>
> Key: LUCENE-7129
> URL: https://issues.apache.org/jira/browse/LUCENE-7129
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/javadocs
>Reporter: David Smiley
>Priority: Minor
>
> It would be cool if we could prevent {{@lucene.internal}} classes from 
> appearing in Javadocs we publish.  This would further discourage use of 
> internal Lucene/Solr classes that are public not for public consumption but 
> only  public so that the code can be accessed across Lucene/Solr's packages.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7129) Prevent @lucene.internal annotated classes from being in Javadocs

2016-03-22 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206514#comment-15206514
 ] 

Uwe Schindler commented on LUCENE-7129:
---

There are two ways to do this:

- Quick'n'dirty: We have to change our filesets (for the sources parameter) before 
invoking the javadocs macro. As Javadocs processes all java files one by one, 
we may use a fileset instead of the generic packageset element inside the 
javadoc, excluding those class files we don't want to have. The fileset with 
the java files we pass to javadoc would just need to filter on the contents 
(file contains "@lucene.internal" as an additional 
[selector|https://ant.apache.org/manual/Types/selectors.html] in the fileset 
definition). Of course this slows down a bit, because Ant has to read all files 
while building the fileset.

- More sophisticated: Use our own Doclet that delegates everything to the standard 
doclet, but handles the tags we want to exclude. Somebody coded this; we could 
add it to our tools folder: http://sixlegs.com/blog/java/exclude-javadoc-tag.html 
(maybe it's somewhere in Maven). The issue with doclets is the API hell with 
interfaces, but this guy has a good way around that (he dynamically creates a 
proxy for every interface and uses it to delegate).
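
The delegation skeleton for that second approach is small. A sketch of the old 
Doclet API entry points, with the actual @lucene.internal filtering (the proxy 
trick from the linked post) left out:

{code}
import com.sun.javadoc.DocErrorReporter;
import com.sun.javadoc.LanguageVersion;
import com.sun.javadoc.RootDoc;
import com.sun.tools.doclets.standard.Standard;

// Sketch: delegate everything to the standard doclet; a real implementation
// would wrap 'root' in a proxy that hides elements tagged @lucene.internal.
public class ExcludeInternalDoclet {
  public static boolean start(RootDoc root) {
    return Standard.start(root);             // TODO: filter before delegating
  }
  public static int optionLength(String option) {
    return Standard.optionLength(option);
  }
  public static boolean validOptions(String[][] options, DocErrorReporter reporter) {
    return Standard.validOptions(options, reporter);
  }
  public static LanguageVersion languageVersion() {
    return LanguageVersion.JAVA_1_5;
  }
}
{code}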

> Prevent @lucene.internal annotated classes from being in Javadocs
> -
>
> Key: LUCENE-7129
> URL: https://issues.apache.org/jira/browse/LUCENE-7129
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/javadocs
>Reporter: David Smiley
>Priority: Minor
>
> It would be cool if we could prevent {{@lucene.internal}} classes from 
> appearing in Javadocs we publish.  This would further discourage use of 
> internal Lucene/Solr classes that are public not for public consumption but 
> only  public so that the code can be accessed across Lucene/Solr's packages.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7122) BytesRefArray can be more efficient for fixed width values

2016-03-22 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206510#comment-15206510
 ] 

Dawid Weiss commented on LUCENE-7122:
-

Of course I wouldn't veto it. I just expressed my opinion that cramming too 
much logic into one class makes it very difficult to read later on (and has 
potential performance implications). This is what happened to the FST builder, for 
example - I can't understand portions of the code anymore even though I was 
actively participating in it at the beginning. :)

> BytesRefArray can be more efficient for fixed width values
> --
>
> Key: LUCENE-7122
> URL: https://issues.apache.org/jira/browse/LUCENE-7122
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master, 6.1
>
> Attachments: LUCENE-7122.patch, LUCENE-7122.patch
>
>
> Today {{BytesRefArray}} uses one int ({{int[]}}, overallocated) per
> value to hold the length, but for dimensional points these values are
> always the same length. 
> This can save another 4 bytes of heap per indexed dimensional point,
> which is a big improvement (more points can fit in heap at once) for
> 1D and 2D lat/lon points.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7122) BytesRefArray can be more efficient for fixed width values

2016-03-22 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206498#comment-15206498
 ] 

Michael McCandless commented on LUCENE-7122:


bq. LUCENE-7129

Thanks!

> BytesRefArray can be more efficient for fixed width values
> --
>
> Key: LUCENE-7122
> URL: https://issues.apache.org/jira/browse/LUCENE-7122
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master, 6.1
>
> Attachments: LUCENE-7122.patch, LUCENE-7122.patch
>
>
> Today {{BytesRefArray}} uses one int ({{int[]}}, overallocated) per
> value to hold the length, but for dimensional points these values are
> always the same length. 
> This can save another 4 bytes of heap per indexed dimensional point,
> which is a big improvement (more points can fit in heap at once) for
> 1D and 2D lat/lon points.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7122) BytesRefArray can be more efficient for fixed width values

2016-03-22 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206496#comment-15206496
 ] 

Michael McCandless commented on LUCENE-7122:


bq. I meant that if Rob needs someone to vouch for the value of the 
efficiencies, he need not look any further than you specifically.

But even that is not really valid :)

The two use cases (Lucene's new dimensional points, and {{MemoryIndex}} and 
highlighters) are wildly different, so different tradeoffs do apply...

> BytesRefArray can be more efficient for fixed width values
> --
>
> Key: LUCENE-7122
> URL: https://issues.apache.org/jira/browse/LUCENE-7122
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master, 6.1
>
> Attachments: LUCENE-7122.patch, LUCENE-7122.patch
>
>
> Today {{BytesRefArray}} uses one int ({{int[]}}, overallocated) per
> value to hold the length, but for dimensional points these values are
> always the same length. 
> This can save another 4 bytes of heap per indexed dimensional point,
> which is a big improvement (more points can fit in heap at once) for
> 1D and 2D lat/lon points.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7122) BytesRefArray can be more efficient for fixed width values

2016-03-22 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206494#comment-15206494
 ] 

Michael McCandless commented on LUCENE-7122:


I think people may be underestimating the priority of this issue:

{{OfflineSorter}}, now suddenly heavily used by Lucene's new dimensional 
points, is like a baby Lucene: it pulls values into heap, up until its budget, 
and then sorts them and writes another segment, having to merge them all in the 
end.  I have watched it go through its file slowly while indexing 3.2B OSM 
points ;)

This patch would mean we can store 33% more {{IntPoint}}s in heap before 
writing a segment, which is an amazing improvement, especially when it can mean 
0 vs 1 merges needed, or 1 vs 2 merges needed, etc., for many use cases.  No 
matter how fast your SSD is, having to do 1 instead of 2 merges is a big win.

If I had a way to make Lucene's {{IndexWriter}} postings heap buffer 33% more 
RAM efficient, that would be incredible :)

And yes, I know {{OfflineSorter}} is also used by non-fixed-length users (e.g. 
suggesters, and possibly/probably external users), but I think this new core 
usage of it for numerics and geo is (suddenly) its most important usage 
in Lucene.

[~dawid.weiss], do you disagree so much with the first patch that you would 
veto it?  If it's OK, I'd rather commit that approach, and open follow-on issues 
to improve it later.  I prefer that patch, since it adds no new 
classes/interfaces, and (like Lucene's doc values) it hides all heap storage 
optimizations under the hood.  {{OfflineSorter}} is typically IO bound, so I 
don't think we should fret about the added conditionals for the CPU.
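
For readers following along, the core trick is tiny. A toy sketch (not the actual 
patch) of length-implicit storage when every value has the same width:

{code}
import org.apache.lucene.util.BytesRef;

// Toy sketch: with fixed-width values the offset is implicit (index * width),
// so the per-value length int[] that BytesRefArray keeps today goes away.
final class FixedWidthBytesArray {
  private final int width;
  private byte[] bytes = new byte[0];
  private int count;

  FixedWidthBytesArray(int width) { this.width = width; }

  void append(BytesRef value) {
    assert value.length == width;
    if ((count + 1) * width > bytes.length) {
      byte[] grown = new byte[Math.max(width, bytes.length * 2)];
      System.arraycopy(bytes, 0, grown, 0, count * width);
      bytes = grown;
    }
    System.arraycopy(value.bytes, value.offset, bytes, count * width, width);
    count++;
  }

  BytesRef get(BytesRef spare, int index) {
    spare.bytes = bytes;
    spare.offset = index * width;
    spare.length = width;
    return spare;
  }
}
{code}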

> BytesRefArray can be more efficient for fixed width values
> --
>
> Key: LUCENE-7122
> URL: https://issues.apache.org/jira/browse/LUCENE-7122
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master, 6.1
>
> Attachments: LUCENE-7122.patch, LUCENE-7122.patch
>
>
> Today {{BytesRefArray}} uses one int ({{int[]}}, overallocated) per
> value to hold the length, but for dimensional points these values are
> always the same length. 
> This can save another 4 bytes of heap per indexed dimensional point,
> which is a big improvement (more points can fit in heap at once) for
> 1D and 2D lat/lon points.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-8640) CloudSolrClient does not send the credentials set in the UpdateRequest

2016-03-22 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-8640.
--
   Resolution: Fixed
Fix Version/s: 5.5

> CloudSolrClient does not send the credentials set in the UpdateRequest
> --
>
> Key: SOLR-8640
> URL: https://issues.apache.org/jira/browse/SOLR-8640
> Project: Solr
>  Issue Type: Bug
>  Components: security, SolrJ
>Affects Versions: 5.4
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 6.0, 5.5
>
> Attachments: SOLR-8640.patch
>
>
> CloudSolrClient copies the UpdateRequest, but not the credentials. So 
> BasicAuth does not work if you use CloudSolrClient 
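
For reference, the affected usage pattern looks like this minimal SolrJ sketch 
(assuming the 5.4 SolrJ API; the ZooKeeper address, collection name and 
credentials are placeholders):

{code:java}
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.AbstractUpdateRequest;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.common.SolrInputDocument;

public class BasicAuthUpdateExample {
  public static void main(String[] args) throws Exception {
    try (CloudSolrClient client = new CloudSolrClient("zk1:2181,zk2:2181/solr")) {
      client.setDefaultCollection("collection1");

      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "1");

      UpdateRequest req = new UpdateRequest();
      req.add(doc);
      req.setAction(AbstractUpdateRequest.ACTION.COMMIT, true, true);
      // Credentials are set on the request itself; before this fix,
      // CloudSolrClient dropped them when copying the UpdateRequest per
      // shard, so the update failed against a BasicAuth-protected cluster.
      req.setBasicAuthCredentials("solr", "SolrRocks");
      req.process(client);
    }
  }
}
{code}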



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-jigsaw-ea+110) - Build # 16295 - Failure!

2016-03-22 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16295/
Java: 32bit/jdk-9-jigsaw-ea+110 -server -XX:+UseSerialGC -XX:-CompactStrings

All tests passed

Build Log:
[...truncated 11422 lines...]
   [junit4] JVM J1: stdout was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/temp/junit4-J1-20160322_141138_241.sysout
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] #
   [junit4] # A fatal error has been detected by the Java Runtime Environment:
   [junit4] #
   [junit4] #  SIGSEGV (0xb) at pc=0xf6bc8ab2, pid=13025, tid=13043
   [junit4] #
   [junit4] # JRE version: Java(TM) SE Runtime Environment (9.0+110) (build 
9-ea+110-2016-03-17-023140.javare.4664.nc)
   [junit4] # Java VM: Java HotSpot(TM) Server VM 
(9-ea+110-2016-03-17-023140.javare.4664.nc, mixed mode, tiered, serial gc, 
linux-x86)
   [junit4] # Problematic frame:
   [junit4] # V  [libjvm.so+0x48cab2]  FastScanClosure::do_oop(oopDesc**)+0x22
   [junit4] #
   [junit4] # No core dump will be written. Core dumps have been disabled. To 
enable core dumping, try "ulimit -c unlimited" before starting Java again
   [junit4] #
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J1/hs_err_pid13025.log
   [junit4] Compiled method (c2)  816834 28210   !   4   
org.apache.solr.update.SolrCmdDistributor::submit (301 bytes)
   [junit4]  total in heap  [0xf0890b08,0xf08959e4] = 20188
   [junit4]  relocation [0xf0890be0,0xf0890f94] = 948
   [junit4]  main code  [0xf0890fa0,0xf0892580] = 5600
   [junit4]  stub code  [0xf0892580,0xf0892718] = 408
   [junit4]  oops   [0xf0892718,0xf089273c] = 36
   [junit4]  metadata   [0xf089273c,0xf0892888] = 332
   [junit4]  scopes data[0xf0892888,0xf0894bf8] = 9072
   [junit4]  scopes pcs [0xf0894bf8,0xf0895318] = 1824
   [junit4]  dependencies   [0xf0895318,0xf08953e8] = 208
   [junit4]  handler table  [0xf08953e8,0xf0895928] = 1344
   [junit4]  nul chk table  [0xf0895928,0xf08959e4] = 188
   [junit4] Compiled method (c2)  816834 28210   !   4   
org.apache.solr.update.SolrCmdDistributor::submit (301 bytes)
   [junit4]  total in heap  [0xf0890b08,0xf08959e4] = 20188
   [junit4]  relocation [0xf0890be0,0xf0890f94] = 948
   [junit4]  main code  [0xf0890fa0,0xf0892580] = 5600
   [junit4]  stub code  [0xf0892580,0xf0892718] = 408
   [junit4]  oops   [0xf0892718,0xf089273c] = 36
   [junit4]  metadata   [0xf089273c,0xf0892888] = 332
   [junit4]  scopes data[0xf0892888,0xf0894bf8] = 9072
   [junit4]  scopes pcs [0xf0894bf8,0xf0895318] = 1824
   [junit4]  dependencies   [0xf0895318,0xf08953e8] = 208
   [junit4]  handler table  [0xf08953e8,0xf0895928] = 1344
   [junit4]  nul chk table  [0xf0895928,0xf08959e4] = 188
   [junit4] #
   [junit4] # If you would like to submit a bug report, please visit:
   [junit4] #   http://bugreport.java.com/bugreport/crash.jsp
   [junit4] #
   [junit4] <<< JVM J1: EOF 

[...truncated 994 lines...]
   [junit4] ERROR: JVM J1 ended with an exception, command line: 
/home/jenkins/tools/java/32bit/jdk-9-jigsaw-ea+110/bin/java -server 
-XX:+UseSerialGC -XX:-CompactStrings -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/home/jenkins/workspace/Lucene-Solr-master-Linux/heapdumps -ea 
-esa -Dtests.prefix=tests -Dtests.seed=DABB5C240B3A530E -Xmx512M -Dtests.iters= 
-Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random 
-Dtests.postingsformat=random -Dtests.docvaluesformat=random 
-Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random 
-Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=7.0.0 
-Dtests.cleanthreads=perClass 
-Djava.util.logging.config.file=/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=3 -DtempDir=./temp 
-Djava.io.tmpdir=./temp 
-Djunit4.tempDir=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/temp
 -Dcommon.dir=/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene 
-Dclover.db.dir=/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/clover/db
 
-Djava.security.policy=/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/tools/junit4/solr-tests.policy
 -Dtests.LUCENE_VERSION=7.0.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Djunit4.childvm.cwd=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J1
 -Djunit4.childvm.id=1 -Djunit4.childvm.count=3 -Dtests.leaveTemporary=false 
-Dtests.filterstacks=true -Dtests.disableHdfs=true 
-Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Dfile.encoding=UTF-8 -classpath 

[jira] [Commented] (LUCENE-7122) BytesRefArray can be more efficient for fixed width values

2016-03-22 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15206468#comment-15206468
 ] 

David Smiley commented on LUCENE-7122:
--

bq. +1, can you open a new issue?

LUCENE-7129

bq. "Mike" is not using it. Names should not be attached to our source code 

Peace.  I meant that if Rob needs someone to vouch for the value of the 
efficiencies, he need not look any further than you specifically.

bq. Hmm but the only valid reason to use BytesRefArray instead of BytesRef[] is 
efficiency?

Fair point.

> BytesRefArray can be more efficient for fixed width values
> --
>
> Key: LUCENE-7122
> URL: https://issues.apache.org/jira/browse/LUCENE-7122
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master, 6.1
>
> Attachments: LUCENE-7122.patch, LUCENE-7122.patch
>
>
> Today {{BytesRefArray}} uses one int ({{int[]}}, overallocated) per
> value to hold the length, but for dimensional points these values are
> always the same length. 
> This can save another 4 bytes of heap per indexed dimensional point,
> which is a big improvement (more points can fit in heap at once) for
> 1D and 2D lat/lon points.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7129) Prevent @lucene.internal annotated classes from being in Javadocs

2016-03-22 Thread David Smiley (JIRA)
David Smiley created LUCENE-7129:


 Summary: Prevent @lucene.internal annotated classes from being in 
Javadocs
 Key: LUCENE-7129
 URL: https://issues.apache.org/jira/browse/LUCENE-7129
 Project: Lucene - Core
  Issue Type: Task
  Components: general/javadocs
Reporter: David Smiley
Priority: Minor


It would be cool if we could prevent {{@lucene.internal}} classes from 
appearing in the Javadocs we publish.  This would further discourage use of 
internal Lucene/Solr classes that are public not for external consumption, but 
only so that the code can be accessed across Lucene/Solr's packages.
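
One low-tech way to get there, sketched below with the legacy JDK 8 
{{com.sun.javadoc}} doclet API (illustrative; the class name is made up): a 
tiny doclet that lists every public class whose javadoc carries the tag, whose 
output can then seed an exclude list for the real javadoc invocation.

{code:java}
// Needs tools.jar on the classpath; run via:
//   javadoc -docletpath . -doclet ListInternalClasses <packages>
import com.sun.javadoc.ClassDoc;
import com.sun.javadoc.RootDoc;

public class ListInternalClasses {
  public static boolean start(RootDoc root) {
    for (ClassDoc cls : root.classes()) {
      // Doc.tags(String) matches custom javadoc tags like @lucene.internal
      if (cls.isPublic() && cls.tags("lucene.internal").length > 0) {
        root.printNotice(cls.qualifiedName());
      }
    }
    return true;
  }
}
{code}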



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


