[jira] [Comment Edited] (SOLR-5211) updating parent as childless makes old children orphans

2015-12-04 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15041236#comment-15041236
 ] 

Mikhail Khludnev edited comment on SOLR-5211 at 12/4/15 8:10 AM:
-

I want to emphasize two things: multilevel nesting is out of scope so far, simply 
because we can't deal with the simplest single-level parent/child case yet; and I 
don't think we can afford to complicate the update flow, i.e. add {{deleteBy\*}} or 
check whether there are children (btw, they may not be committed yet). 
We need to come up with a universal routine that can handle all the cases below via 
the single API entry point [IW.updateDocuments(delTerm, 
docs)|https://github.com/apache/lucene-solr/blob/trunk/lucene/core/src/java/org/apache/lucene/index/IndexWriter.java#L1299]
 or establish a new one: 
- add with/without children
- update (overwrite) with/without children
- allow omitting uniqueKey in the schema; in this case a document won't overwrite itself

Just as a reminder: it currently works as {{delTerm=$uniqueKey:get($uniqueKey)}} for 
childless documents, and {{delTerm=\_root_:get($uniqueKey)}} otherwise. Here is the problem.

IMHO, -if we just allow $uniqueKey to span a whole block (q=id:33 
returns a block of several docs), it would be a no-brainer that solves 
everything.- Let's just *always* copy $uniqueKey to \_root_, span it to all 
children, and use \_root_:get($uniqueKey) as the delete term?!

What am I missing?
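The always-copy-to-\_root_ proposal can be sketched as a toy model. This is not Solr's actual code: the in-memory "index", the field maps, and the method names here are invented for illustration; only the rule itself (every document in a block carries the parent's uniqueKey in \_root_, and updates always delete by \_root_) comes from the comment above.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model: every document in a block carries _root_ = the parent's uniqueKey,
// and an update always deletes by _root_:<key>, so stale children are removed
// even when the new version of the parent arrives childless.
public class BlockUpdateSketch {

    static final List<Map<String, String>> index = new ArrayList<>();

    // docs = the whole block (children first, then parent); rootId = parent's uniqueKey
    static void updateBlock(String rootId, List<Map<String, String>> docs) {
        // the delete term is always _root_:rootId -- covers parent AND old children
        index.removeIf(d -> rootId.equals(d.get("_root_")));
        for (Map<String, String> fields : docs) {
            Map<String, String> doc = new HashMap<>(fields);
            doc.put("_root_", rootId); // span _root_ across the whole block
            index.add(doc);
        }
    }
}
```

With this rule, re-adding id=33 without children leaves exactly one document behind instead of orphaning the old children, which is precisely the bug this issue describes.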


was (Author: mkhludnev):
I want to emphasize two things: multilevel nesting is out of scope so far, simply 
because we can't deal with the simplest single-level parent/child case yet; and I 
don't think we can afford to complicate the update flow, i.e. add {{deleteBy\*}} or 
check whether there are children (btw, they may not be committed yet). 
We need to come up with a universal routine that can handle all the cases below via 
the single API entry point [IW.updateDocuments(delTerm, 
docs)|https://github.com/apache/lucene-solr/blob/trunk/lucene/core/src/java/org/apache/lucene/index/IndexWriter.java#L1299]
 or establish a new one: 
- add with/without children
- update (overwrite) with/without children
- allow omitting uniqueKey in the schema; in this case a document won't overwrite itself

Just as a reminder: it currently works as {{delTerm=$uniqueKey:get($uniqueKey)}} for 
childless documents, and {{delTerm=\_root_:get($uniqueKey)}} otherwise. Here is the problem.

IMHO, -if we just allow $uniqueKey to span a whole block (q=id:33 
returns a block of several docs), it would be a no-brainer that solves 
everything.- Let's just *always* copy $uniqueKey to \_root_, span it to all 
children, and use \_root_:get($uniqueKey) as the delete term.

What am I missing?

> updating parent as childless makes old children orphans
> ---
>
> Key: SOLR-5211
> URL: https://issues.apache.org/jira/browse/SOLR-5211
> Project: Solr
>  Issue Type: Sub-task
>  Components: update
>Affects Versions: 4.5, Trunk
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
> Fix For: Trunk, 5.5
>
>
> If I have a parent with children in the index, I can send an update that omits 
> the children; as a result, the old children become orphaned. 
> I suppose the separate \_root_ field causes much of the trouble. I propose to 
> extend the notion of uniqueKey and let it span across blocks, which makes 
> updates unambiguous.  
> WDYT? Would you like to see a test that proves this issue?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-5211) updating parent as childless makes old children orphans

2015-12-04 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15041236#comment-15041236
 ] 

Mikhail Khludnev edited comment on SOLR-5211 at 12/4/15 8:09 AM:
-

I want to emphasize two things: multilevel nesting is out of scope so far, simply 
because we can't deal with the simplest single-level parent/child case yet; and I 
don't think we can afford to complicate the update flow, i.e. add {{deleteBy\*}} or 
check whether there are children (btw, they may not be committed yet). 
We need to come up with a universal routine that can handle all the cases below via 
the single API entry point [IW.updateDocuments(delTerm, 
docs)|https://github.com/apache/lucene-solr/blob/trunk/lucene/core/src/java/org/apache/lucene/index/IndexWriter.java#L1299]
 or establish a new one: 
- add with/without children
- update (overwrite) with/without children
- allow omitting uniqueKey in the schema; in this case a document won't overwrite itself

Just as a reminder: it currently works as {{delTerm=$uniqueKey:get($uniqueKey)}} for 
childless documents, and {{delTerm=\_root_:get($uniqueKey)}} otherwise. Here is the problem.

IMHO, -if we just allow $uniqueKey to span a whole block (q=id:33 
returns a block of several docs), it would be a no-brainer that solves 
everything.- Let's just *always* copy $uniqueKey to \_root_, span it to all 
children, and use \_root_:get($uniqueKey) as the delete term.

What am I missing?


was (Author: mkhludnev):
I want to emphasize two things: multilevel nesting is out of scope so far, simply 
because we can't deal with the simplest single-level parent/child case yet; and I 
don't think we can afford to complicate the update flow, i.e. add {{deleteBy\*}} or 
check whether there are children (btw, they may not be committed yet). 
We need to come up with a universal routine that can handle all the cases below via 
the single API entry point [IW.updateDocuments(delTerm, 
docs)|https://github.com/apache/lucene-solr/blob/trunk/lucene/core/src/java/org/apache/lucene/index/IndexWriter.java#L1299]
 or establish a new one: 
- add with/without children
- update (overwrite) with/without children
- allow omitting uniqueKey in the schema; in this case a document won't overwrite itself

Just as a reminder: it currently works as {{delTerm=$uniqueKey:get($uniqueKey)}} for 
childless documents, and {{delTerm=\_root_:get($uniqueKey)}} otherwise. Here is the problem.

IMHO, if we just allow $uniqueKey to span a whole block (q=id:33 returns 
a block of several docs), it would be a no-brainer that solves everything. 

What am I missing?

> updating parent as childless makes old children orphans
> ---
>
> Key: SOLR-5211
> URL: https://issues.apache.org/jira/browse/SOLR-5211
> Project: Solr
>  Issue Type: Sub-task
>  Components: update
>Affects Versions: 4.5, Trunk
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
> Fix For: Trunk, 5.5
>
>
> If I have a parent with children in the index, I can send an update that omits 
> the children; as a result, the old children become orphaned. 
> I suppose the separate \_root_ field causes much of the trouble. I propose to 
> extend the notion of uniqueKey and let it span across blocks, which makes 
> updates unambiguous.  
> WDYT? Would you like to see a test that proves this issue?






[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_66) - Build # 5443 - Still Failing!

2015-12-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5443/
Java: 32bit/jdk1.8.0_66 -server -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Error from server at http://127.0.0.1:65069/zmld/awholynewcollection_0: non ok 
status: 500, message:Server Error

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:65069/zmld/awholynewcollection_0: non ok 
status: 500, message:Server Error
at 
__randomizedtesting.SeedInfo.seed([71269AD55C7ACE85:F972A50FF286A37D]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:509)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForNon403or404or503(AbstractFullDistribZkTestBase.java:1754)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:638)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:160)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-6918) LRUQueryCache.onDocIdSetEviction should not be called when nothing is evicted

2015-12-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15041509#comment-15041509
 ] 

ASF subversion and git services commented on LUCENE-6918:
-

Commit 1717947 from [~jpountz] in branch 'dev/trunk'
[ https://svn.apache.org/r1717947 ]

LUCENE-6918: LRUQueryCache.onDocIdSetEviction is only called when at least one 
DocIdSet is being evicted. (Adrien Grand)

> LRUQueryCache.onDocIdSetEviction should not be called when nothing is evicted
> -
>
> Key: LUCENE-6918
> URL: https://issues.apache.org/jira/browse/LUCENE-6918
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: 5.5
>
> Attachments: LUCENE-6918.patch
>
>
> This method is confusing because it states it will be called "when one or 
> more DocIdSets are removed from this cache" but may actually be called with 
> zero docidsets when evicting a per-segment cache that did not contain any 
> entries.
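The fix described above amounts to a guard around the callback. A minimal sketch, assuming nothing about LRUQueryCache's internals (the cache structure and method names below are invented for illustration; only the guard condition reflects the issue):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the contract fix: the listener is notified only when at least one
// cached DocIdSet is actually evicted, so clearing an empty per-segment cache
// stays silent instead of firing a zero-entry notification.
public class EvictionGuard {

    static int notifications = 0;

    static void onDocIdSetEviction(List<Object> evicted) {
        // listener side: after the fix, evicted.size() >= 1 is guaranteed
        notifications++;
    }

    static void clearSegmentCache(List<Object> segmentCache) {
        if (!segmentCache.isEmpty()) { // the guard this issue adds
            onDocIdSetEviction(new ArrayList<>(segmentCache));
        }
        segmentCache.clear();
    }
}
```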






[jira] [Commented] (LUCENE-6918) LRUQueryCache.onDocIdSetEviction should not be called when nothing is evicted

2015-12-04 Thread Ryan Ernst (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15041379#comment-15041379
 ] 

Ryan Ernst commented on LUCENE-6918:


+1, looks good.

For the changes entry, I think you mean to remove the "not" from "is not only 
called"?

> LRUQueryCache.onDocIdSetEviction should not be called when nothing is evicted
> -
>
> Key: LUCENE-6918
> URL: https://issues.apache.org/jira/browse/LUCENE-6918
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: 5.5
>
> Attachments: LUCENE-6918.patch
>
>
> This method is confusing because it states it will be called "when one or 
> more DocIdSets are removed from this cache" but may actually be called with 
> zero docidsets when evicting a per-segment cache that did not contain any 
> entries.






[jira] [Commented] (LUCENE-6918) LRUQueryCache.onDocIdSetEviction should not be called when nothing is evicted

2015-12-04 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15041490#comment-15041490
 ] 

Adrien Grand commented on LUCENE-6918:
--

Good catch!

> LRUQueryCache.onDocIdSetEviction should not be called when nothing is evicted
> -
>
> Key: LUCENE-6918
> URL: https://issues.apache.org/jira/browse/LUCENE-6918
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: 5.5
>
> Attachments: LUCENE-6918.patch
>
>
> This method is confusing because it states it will be called "when one or 
> more DocIdSets are removed from this cache" but may actually be called with 
> zero docidsets when evicting a per-segment cache that did not contain any 
> entries.






[JENKINS-EA] Lucene-Solr-trunk-Linux (32bit/jdk1.9.0-ea-b93) - Build # 15110 - Still Failing!

2015-12-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15110/
Java: 32bit/jdk1.9.0-ea-b93 -client -XX:+UseSerialGC -XX:-CompactStrings

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CollectionsAPIDistributedZkTest

Error Message:
5 threads leaked from SUITE scope at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest: 1) Thread[id=4551, 
name=TEST-CollectionsAPIDistributedZkTest.test-seed#[830BEEBD1B477A53]-EventThread,
 state=WAITING, group=TGRP-CollectionsAPIDistributedZkTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:178) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2061)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
2) Thread[id=4777, name=zkCallback-505-thread-3, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:461)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1082)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)3) Thread[id=4552, 
name=zkCallback-505-thread-1, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:461)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1082)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)4) Thread[id=4550, 
name=TEST-CollectionsAPIDistributedZkTest.test-seed#[830BEEBD1B477A53]-SendThread(127.0.0.1:51938),
 state=TIMED_WAITING, group=TGRP-CollectionsAPIDistributedZkTest] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.zookeeper.ClientCnxnSocketNIO.cleanup(ClientCnxnSocketNIO.java:230)  
   at 
org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1185)
 at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1110)5) 
Thread[id=4776, name=zkCallback-505-thread-2, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:461)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1082)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.CollectionsAPIDistributedZkTest: 
   1) Thread[id=4551, 
name=TEST-CollectionsAPIDistributedZkTest.test-seed#[830BEEBD1B477A53]-EventThread,
 state=WAITING, group=TGRP-CollectionsAPIDistributedZkTest]
at jdk.internal.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:178)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2061)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
   2) Thread[id=4777, name=zkCallback-505-thread-3, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest]
at jdk.internal.misc.Unsafe.park(Native 

[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 871 - Still Failing

2015-12-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/871/

All tests passed

Build Log:
[...truncated 10989 lines...]
   [junit4] JVM J1: stdout was not empty, see: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/build/solr-core/test/temp/junit4-J1-20151204_073523_859.sysout
   [junit4] >>> JVM J1: stdout (verbatim) 
   [junit4] java.lang.OutOfMemoryError: GC overhead limit exceeded
   [junit4] Dumping heap to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-trunk/heapdumps/java_pid20638.hprof
 ...
   [junit4] Heap dump file created [607744858 bytes in 5.430 secs]
   [junit4] <<< JVM J1: EOF 

   [junit4] JVM J1: stderr was not empty, see: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/build/solr-core/test/temp/junit4-J1-20151204_073523_859.syserr
   [junit4] >>> JVM J1: stderr (verbatim) 
   [junit4] WARN: Unhandled exception in event serialization. -> 
java.lang.OutOfMemoryError: GC overhead limit exceeded
   [junit4] at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:133)
   [junit4] at java.io.OutputStreamWriter.write(OutputStreamWriter.java:220)
   [junit4] at java.io.Writer.write(Writer.java:157)
   [junit4] at 
com.carrotsearch.ant.tasks.junit4.gson.stream.JsonWriter.string(JsonWriter.java:567)
   [junit4] at 
com.carrotsearch.ant.tasks.junit4.gson.stream.JsonWriter.value(JsonWriter.java:414)
   [junit4] at 
com.carrotsearch.ant.tasks.junit4.events.AbstractEvent.writeBinaryProperty(AbstractEvent.java:36)
   [junit4] at 
com.carrotsearch.ant.tasks.junit4.events.AppendStdErrEvent.serialize(AppendStdErrEvent.java:30)
   [junit4] at 
com.carrotsearch.ant.tasks.junit4.events.Serializer$2.run(Serializer.java:101)
   [junit4] at 
com.carrotsearch.ant.tasks.junit4.events.Serializer$2.run(Serializer.java:96)
   [junit4] at java.security.AccessController.doPrivileged(Native Method)
   [junit4] at 
com.carrotsearch.ant.tasks.junit4.events.Serializer.flushQueue(Serializer.java:96)
   [junit4] at 
com.carrotsearch.ant.tasks.junit4.events.Serializer.serialize(Serializer.java:81)
   [junit4] at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMain$3$2.write(SlaveMain.java:457)
   [junit4] at 
java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
   [junit4] at 
java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
   [junit4] at java.io.PrintStream.flush(PrintStream.java:338)
   [junit4] at java.io.FilterOutputStream.flush(FilterOutputStream.java:140)
   [junit4] at java.io.PrintStream.write(PrintStream.java:482)
   [junit4] at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
   [junit4] at 
sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291)
   [junit4] at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:295)
   [junit4] at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:141)
   [junit4] at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
   [junit4] at 
org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:59)
   [junit4] at 
org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:324)
   [junit4] at 
org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
   [junit4] at 
org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
   [junit4] at 
org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
   [junit4] at org.apache.log4j.Category.callAppenders(Category.java:206)
   [junit4] at org.apache.log4j.Category.forcedLog(Category.java:391)
   [junit4] at org.apache.log4j.Category.log(Category.java:856)
   [junit4] at 
org.slf4j.impl.Log4jLoggerAdapter.info(Log4jLoggerAdapter.java:304)
   [junit4] <<< JVM J1: EOF 

[...truncated 453 lines...]
   [junit4] ERROR: JVM J1 ended with an exception, command line: 
/x1/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8/jre/bin/java 
-XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-trunk/heapdumps
 -ea -esa -Dtests.prefix=tests -Dtests.seed=C6B23123F6B85644 -Xmx512M 
-Dtests.iters= -Dtests.verbose=false -Dtests.infostream=false 
-Dtests.codec=random -Dtests.postingsformat=random 
-Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random 
-Dtests.directory=random 
-Dtests.linedocsfile=/x1/jenkins/lucene-data/enwiki.random.lines.txt 
-Dtests.luceneMatchVersion=6.0.0 -Dtests.cleanthreads=perClass 
-Djava.util.logging.config.file=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-trunk/lucene/tools/junit4/logging.properties
 -Dtests.nightly=true -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=2 -DtempDir=./temp 
-Djava.io.tmpdir=./temp 
-Djunit4.tempDir=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/build/solr-core/test/temp
 

[jira] [Commented] (SOLR-8366) ConcurrentUpdateSolrClient attempts to use response's content type as charset encoding

2015-12-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15041635#comment-15041635
 ] 

ASF subversion and git services commented on SOLR-8366:
---

Commit 1717978 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1717978 ]

SOLR-8366: ConcurrentUpdateSolrClient attempts to use response's content type 
as charset encoding for parsing exception

> ConcurrentUpdateSolrClient attempts to use response's content type as charset 
> encoding
> --
>
> Key: SOLR-8366
> URL: https://issues.apache.org/jira/browse/SOLR-8366
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java
>Affects Versions: 5.3, 5.4
>Reporter: Shalin Shekhar Mangar
>Priority: Minor
> Fix For: Trunk, 5.5
>
> Attachments: SOLR-8336.patch
>
>
> While debugging the SolrExampleStreamingTest.testUpdateField failures on 
> trunk, I noticed that ConcurrentUpdateSolrClient always logs the following 
> when the server throws a conflict error:
> {code}
> WARN  
> (concurrentUpdateScheduler-2-thread-1-processing-http:127.0.0.1:35848//solr//collection1)
>  [] o.a.s.c.s.i.ConcurrentUpdateSolrClient Failed to parse error response 
> from http://127.0.0.1:35848/solr/collection1 due to: 
> org.apache.solr.common.SolrException: parsing error
> {code}
> The problem is the following code, which uses 
> response.getEntity().getContentType().getValue() as the charset encoding; 
> this is wrong because the content type contains the MIME type as well as the charset.
> {code}
> try {
>   NamedList resp =
>   
> client.parser.processResponse(response.getEntity().getContent(),
>   response.getEntity().getContentType().getValue());
>   NamedList error = (NamedList) resp.get("error");
>   if (error != null)
> solrExc.setMetadata((NamedList) 
> error.get("metadata"));
> } catch (Exception exc) {
>   // don't want to fail to report error if parsing the response 
> fails
>   log.warn("Failed to parse error response from " + 
> client.getBaseURL() + " due to: " + exc);
> }
> {code}
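The underlying mistake is passing the whole Content-Type value (e.g. "application/xml; charset=UTF-8") where a bare charset name is expected; only the charset parameter should be extracted. A stdlib-only sketch of that extraction (the helper name and fallback are assumptions, not the actual patch):

```java
// Extracts the charset parameter from a Content-Type header value, falling
// back to a default when no charset parameter is present. Illustrative only.
public class CharsetFromContentType {

    static String charsetOf(String contentType, String fallback) {
        if (contentType != null) {
            // a Content-Type value is "type/subtype" plus ";"-separated parameters
            for (String part : contentType.split(";")) {
                String p = part.trim();
                if (p.regionMatches(true, 0, "charset=", 0, "charset=".length())) {
                    return p.substring("charset=".length()).trim();
                }
            }
        }
        return fallback; // no charset parameter in the header
    }
}
```

Passing charsetOf(header, "UTF-8") instead of the raw header value avoids the "parsing error" SolrException, since a value like "application/xml; charset=UTF-8" is not a valid charset name.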






[jira] [Updated] (SOLR-8366) ConcurrentUpdateSolrClient attempts to use response's content type as charset encoding for parsing exceptions

2015-12-04 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-8366:

Summary: ConcurrentUpdateSolrClient attempts to use response's content type 
as charset encoding for parsing exceptions  (was: ConcurrentUpdateSolrClient 
attempts to use response's content type as charset encoding)

> ConcurrentUpdateSolrClient attempts to use response's content type as charset 
> encoding for parsing exceptions
> -
>
> Key: SOLR-8366
> URL: https://issues.apache.org/jira/browse/SOLR-8366
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java
>Affects Versions: 5.3, 5.4
>Reporter: Shalin Shekhar Mangar
>Priority: Minor
> Fix For: Trunk, 5.5
>
> Attachments: SOLR-8336.patch
>
>
> While debugging the SolrExampleStreamingTest.testUpdateField failures on 
> trunk, I noticed that ConcurrentUpdateSolrClient always logs the following 
> when the server throws a conflict error:
> {code}
> WARN  
> (concurrentUpdateScheduler-2-thread-1-processing-http:127.0.0.1:35848//solr//collection1)
>  [] o.a.s.c.s.i.ConcurrentUpdateSolrClient Failed to parse error response 
> from http://127.0.0.1:35848/solr/collection1 due to: 
> org.apache.solr.common.SolrException: parsing error
> {code}
> The problem is the following code, which uses 
> response.getEntity().getContentType().getValue() as the charset encoding; 
> this is wrong because the content type contains the MIME type as well as the charset.
> {code}
> try {
>   NamedList resp =
>   
> client.parser.processResponse(response.getEntity().getContent(),
>   response.getEntity().getContentType().getValue());
>   NamedList error = (NamedList) resp.get("error");
>   if (error != null)
> solrExc.setMetadata((NamedList) 
> error.get("metadata"));
> } catch (Exception exc) {
>   // don't want to fail to report error if parsing the response 
> fails
>   log.warn("Failed to parse error response from " + 
> client.getBaseURL() + " due to: " + exc);
> }
> {code}






[jira] [Resolved] (SOLR-8366) ConcurrentUpdateSolrClient attempts to use response's content type as charset encoding for parsing exceptions

2015-12-04 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-8366.
-
Resolution: Fixed
  Assignee: Shalin Shekhar Mangar

> ConcurrentUpdateSolrClient attempts to use response's content type as charset 
> encoding for parsing exceptions
> -
>
> Key: SOLR-8366
> URL: https://issues.apache.org/jira/browse/SOLR-8366
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java
>Affects Versions: 5.3, 5.4
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
> Fix For: Trunk, 5.5
>
> Attachments: SOLR-8336.patch
>
>
> While debugging the SolrExampleStreamingTest.testUpdateField failures on 
> trunk, I noticed that ConcurrentUpdateSolrClient always logs the following 
> when the server throws a conflict error:
> {code}
> WARN  
> (concurrentUpdateScheduler-2-thread-1-processing-http:127.0.0.1:35848//solr//collection1)
>  [] o.a.s.c.s.i.ConcurrentUpdateSolrClient Failed to parse error response 
> from http://127.0.0.1:35848/solr/collection1 due to: 
> org.apache.solr.common.SolrException: parsing error
> {code}
> The problem is the following code, which uses 
> response.getEntity().getContentType().getValue() as the charset encoding; 
> this is wrong because the content type contains the MIME type as well as the charset.
> {code}
> try {
>   NamedList resp =
>   
> client.parser.processResponse(response.getEntity().getContent(),
>   response.getEntity().getContentType().getValue());
>   NamedList error = (NamedList) resp.get("error");
>   if (error != null)
> solrExc.setMetadata((NamedList) 
> error.get("metadata"));
> } catch (Exception exc) {
>   // don't want to fail to report error if parsing the response 
> fails
>   log.warn("Failed to parse error response from " + 
> client.getBaseURL() + " due to: " + exc);
> }
> {code}
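A correct approach extracts only the charset parameter from the Content-Type value before using it as an encoding name. The helper below is an illustrative sketch, not the SOLR-8366 patch; the class and method names are hypothetical.

```java
// Illustrative only: pull the charset parameter out of a Content-Type header
// value such as "application/xml; charset=UTF-8", instead of passing the
// whole header value to a parser as if it were an encoding name.
public class ContentTypeCharset {

    static String charsetOf(String contentType, String fallback) {
        if (contentType != null) {
            for (String part : contentType.split(";")) {
                String p = part.trim();
                // Match the charset parameter case-insensitively.
                if (p.regionMatches(true, 0, "charset=", 0, 8)) {
                    // Strip optional quotes around the charset token.
                    return p.substring(8).trim().replace("\"", "");
                }
            }
        }
        return fallback; // no charset parameter: fall back to a default
    }

    public static void main(String[] args) {
        System.out.println(charsetOf("application/xml; charset=UTF-8", "ISO-8859-1")); // UTF-8
        System.out.println(charsetOf("text/html", "ISO-8859-1"));                      // ISO-8859-1
    }
}
```

With a helper like this, the quoted code would hand processResponse the extracted charset rather than the raw header value.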



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8366) ConcurrentUpdateSolrClient attempts to use response's content type as charset encoding

2015-12-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15041641#comment-15041641
 ] 

ASF subversion and git services commented on SOLR-8366:
---

Commit 1717982 from sha...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1717982 ]

SOLR-8366: ConcurrentUpdateSolrClient attempts to use response's content type 
as charset encoding for parsing exception

> ConcurrentUpdateSolrClient attempts to use response's content type as charset 
> encoding
> --
>
> Key: SOLR-8366
> URL: https://issues.apache.org/jira/browse/SOLR-8366
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java
>Affects Versions: 5.3, 5.4
>Reporter: Shalin Shekhar Mangar
>Priority: Minor
> Fix For: Trunk, 5.5
>
> Attachments: SOLR-8336.patch
>
>
> While debugging the SolrExampleStreamingTest.testUpdateField failures on 
> trunk, I noticed that ConcurrentUpdateSolrClient always logs the following 
> when the server throws a conflict error:
> {code}
> WARN  (concurrentUpdateScheduler-2-thread-1-processing-http:127.0.0.1:35848//solr//collection1) [] o.a.s.c.s.i.ConcurrentUpdateSolrClient Failed to parse error response from http://127.0.0.1:35848/solr/collection1 due to: org.apache.solr.common.SolrException: parsing error
> {code}
> The problem is the following code, which passes 
> response.getEntity().getContentType().getValue() as the charset encoding. 
> That is wrong because the Content-Type value carries the MIME type as well 
> as the charset.
> {code}
> try {
>   NamedList resp = client.parser.processResponse(
>       response.getEntity().getContent(),
>       response.getEntity().getContentType().getValue());
>   NamedList error = (NamedList) resp.get("error");
>   if (error != null)
>     solrExc.setMetadata((NamedList) error.get("metadata"));
> } catch (Exception exc) {
>   // don't want to fail to report the error if parsing the response fails
>   log.warn("Failed to parse error response from " + client.getBaseURL()
>       + " due to: " + exc);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_66) - Build # 15111 - Still Failing!

2015-12-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15111/
Java: 32bit/jdk1.8.0_66 -server -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Error from server at http://127.0.0.1:34822/xukg/i/awholynewcollection_0: non 
ok status: 500, message:Server Error

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:34822/xukg/i/awholynewcollection_0: non ok 
status: 500, message:Server Error
at 
__randomizedtesting.SeedInfo.seed([C000E7241320716A:4854D8FEBDDC1C92]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:509)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForNon403or404or503(AbstractFullDistribZkTestBase.java:1754)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:638)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:160)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Created] (SOLR-8370) Display Similarity Factory in use in Schema-Browser

2015-12-04 Thread JIRA
Jan Høydahl created SOLR-8370:
-

 Summary: Display Similarity Factory in use in Schema-Browser
 Key: SOLR-8370
 URL: https://issues.apache.org/jira/browse/SOLR-8370
 Project: Solr
  Issue Type: Improvement
  Components: UI
Reporter: Jan Høydahl
Priority: Trivial


Perhaps the Admin UI Schema browser should also display which 
{{<similarity/>}} is in use in the schema, like it does per-field.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6918) LRUQueryCache.onDocIdSetEviction should not be called when nothing is evicted

2015-12-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15041557#comment-15041557
 ] 

ASF subversion and git services commented on LUCENE-6918:
-

Commit 1717963 from [~jpountz] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1717963 ]

LUCENE-6918: LRUQueryCache.onDocIdSetEviction is only called when at least one 
DocIdSet is being evicted. (Adrien Grand)

> LRUQueryCache.onDocIdSetEviction should not be called when nothing is evicted
> -
>
> Key: LUCENE-6918
> URL: https://issues.apache.org/jira/browse/LUCENE-6918
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: Trunk, 5.5
>
> Attachments: LUCENE-6918.patch
>
>
> This method is confusing because it states it will be called "when one or 
> more DocIdSets are removed from this cache" but may actually be called with 
> zero docidsets when evicting a per-segment cache that did not contain any 
> entries.
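The fix amounts to guarding the listener call so it fires only when the evicted per-segment cache actually held entries. The sketch below is a simplified stand-in for the cache internals, not the LUCENE-6918 patch; the names are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

public class EvictionGuard {
    // Stand-in for the eviction callback on the cache listener.
    interface Listener { void onDocIdSetEviction(int count); }

    // Evict a per-segment cache, notifying the listener only when at least
    // one entry was actually removed (the behavior LUCENE-6918 establishes).
    static void evictSegment(List<?> segmentCache, Listener l) {
        int evicted = segmentCache.size();
        segmentCache.clear();
        if (evicted > 0) {
            l.onDocIdSetEviction(evicted);
        }
    }

    public static void main(String[] args) {
        List<Integer> calls = new ArrayList<>();
        Listener l = calls::add;
        evictSegment(new ArrayList<>(), l);               // empty cache: no callback
        evictSegment(new ArrayList<>(List.of(1, 2)), l);  // two entries evicted
        System.out.println(calls);                        // [2]
    }
}
```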



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6918) LRUQueryCache.onDocIdSetEviction should not be called when nothing is evicted

2015-12-04 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-6918.
--
   Resolution: Fixed
Fix Version/s: Trunk

> LRUQueryCache.onDocIdSetEviction should not be called when nothing is evicted
> -
>
> Key: LUCENE-6918
> URL: https://issues.apache.org/jira/browse/LUCENE-6918
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: Trunk, 5.5
>
> Attachments: LUCENE-6918.patch
>
>
> This method is confusing because it states it will be called "when one or 
> more DocIdSets are removed from this cache" but may actually be called with 
> zero docidsets when evicting a per-segment cache that did not contain any 
> entries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Windows (64bit/jdk1.8.0_66) - Build # 5313 - Failure!

2015-12-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/5313/
Java: 64bit/jdk1.8.0_66 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.SyncSliceTest.test

Error Message:
expected:<5> but was:<4>

Stack Trace:
java.lang.AssertionError: expected:<5> but was:<4>
at 
__randomizedtesting.SeedInfo.seed([CFE6A7F7EF19F3B1:47B2982D41E59E49]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at org.apache.solr.cloud.SyncSliceTest.test(SyncSliceTest.java:154)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 678 - Failure

2015-12-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/678/

18 tests failed.
FAILED:  org.apache.solr.TestDistributedGrouping.test

Error Message:
IOException occured when talking to server at: 
https://127.0.0.1:54842//collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: https://127.0.0.1:54842//collection1
at 
__randomizedtesting.SeedInfo.seed([E01D054449D03B66:68493A9EE72C569E]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:590)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at 
org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:896)
at 
org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:859)
at 
org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:874)
at 
org.apache.solr.BaseDistributedSearchTestCase.del(BaseDistributedSearchTestCase.java:545)
at 
org.apache.solr.TestDistributedGrouping.test(TestDistributedGrouping.java:52)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsRepeatStatement.callStatement(BaseDistributedSearchTestCase.java:987)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_66) - Build # 14816 - Failure!

2015-12-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/14816/
Java: 64bit/jdk1.8.0_66 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Error from server at http://127.0.0.1:46245/_ra/b/awholynewcollection_0: non ok 
status: 500, message:Server Error

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:46245/_ra/b/awholynewcollection_0: non ok 
status: 500, message:Server Error
at 
__randomizedtesting.SeedInfo.seed([DE09D37785E10102:565DECAD2B1D6CFA]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:509)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForNon403or404or503(AbstractFullDistribZkTestBase.java:1754)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:658)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:160)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Assigned] (SOLR-6271) ConjunctionSolrSpellChecker wrong check for same string distance

2015-12-04 Thread James Dyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Dyer reassigned SOLR-6271:


Assignee: James Dyer

> ConjunctionSolrSpellChecker wrong check for same string distance
> 
>
> Key: SOLR-6271
> URL: https://issues.apache.org/jira/browse/SOLR-6271
> Project: Solr
>  Issue Type: Bug
>  Components: spellchecker
>Affects Versions: 4.9
>Reporter: Igor Kostromin
>Assignee: James Dyer
> Attachments: SOLR-6271.patch
>
>
> See ConjunctionSolrSpellChecker.java:
> {code}
> try {
>   if (stringDistance == null) {
>     stringDistance = checker.getStringDistance();
>   } else if (stringDistance != checker.getStringDistance()) {
>     throw new IllegalArgumentException(
>         "All checkers need to use the same StringDistance.");
>   }
> } catch (UnsupportedOperationException uoe) {
>   // ignore
> }
> {code}
> The check {{stringDistance != checker.getStringDistance()}} compares by 
> reference, so if you use two or more spellcheckers with the same distance 
> algorithm, the exception is thrown anyway.
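The reference-comparison pitfall is easy to reproduce: two logically equal distance objects are never {{==}}. A minimal stand-alone sketch follows; the class names are stand-ins, and the real fix would also rely on StringDistance implementations overriding {{equals()}}.

```java
public class DistanceCheck {
    // Stand-in for a StringDistance implementation that overrides equals():
    // any two instances represent the same distance algorithm.
    static final class LevDistance {
        @Override public boolean equals(Object o) { return o instanceof LevDistance; }
        @Override public int hashCode() { return 42; }
    }

    public static void main(String[] args) {
        Object a = new LevDistance();
        Object b = new LevDistance();
        System.out.println(a != b);        // true: distinct references
        System.out.println(a.equals(b));   // true: logically the same distance
    }
}
```

Replacing the `!=` check with a negated `equals()` call would therefore stop rejecting spellcheckers that happen to construct separate instances of the same distance class.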



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6271) ConjunctionSolrSpellChecker wrong check for same string distance

2015-12-04 Thread James Dyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Dyer updated SOLR-6271:
-
Attachment: SOLR-6271.patch

Here is an updated patch with a slightly different unit test.

This is a trivial fix, but an important one if we ever implement multiple 
dictionaries: SOLR-1074 / SOLR-2106.

> ConjunctionSolrSpellChecker wrong check for same string distance
> 
>
> Key: SOLR-6271
> URL: https://issues.apache.org/jira/browse/SOLR-6271
> Project: Solr
>  Issue Type: Bug
>  Components: spellchecker
>Affects Versions: 4.9
>Reporter: Igor Kostromin
>Assignee: James Dyer
> Attachments: SOLR-6271.patch, SOLR-6271.patch
>
>
> See ConjunctionSolrSpellChecker.java:
> {code}
> try {
>   if (stringDistance == null) {
>     stringDistance = checker.getStringDistance();
>   } else if (stringDistance != checker.getStringDistance()) {
>     throw new IllegalArgumentException(
>         "All checkers need to use the same StringDistance.");
>   }
> } catch (UnsupportedOperationException uoe) {
>   // ignore
> }
> {code}
> The check {{stringDistance != checker.getStringDistance()}} compares by 
> reference, so if you use two or more spellcheckers with the same distance 
> algorithm, the exception is thrown anyway.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request: Fix Typo/Bug for AND operation on queryn...

2015-12-04 Thread Lakedaemon
Github user Lakedaemon closed the pull request at:

https://github.com/apache/lucene-solr/pull/34


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6271) ConjunctionSolrSpellChecker wrong check for same string distance

2015-12-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15041829#comment-15041829
 ] 

ASF subversion and git services commented on SOLR-6271:
---

Commit 1717999 from jd...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1717999 ]

SOLR-6271: fix StringDistance comparison in CSSC. ( This closes #135 )

> ConjunctionSolrSpellChecker wrong check for same string distance
> 
>
> Key: SOLR-6271
> URL: https://issues.apache.org/jira/browse/SOLR-6271
> Project: Solr
>  Issue Type: Bug
>  Components: spellchecker
>Affects Versions: 4.9
>Reporter: Igor Kostromin
>Assignee: James Dyer
> Attachments: SOLR-6271.patch, SOLR-6271.patch
>
>
> See ConjunctionSolrSpellChecker.java:
> {code}
> try {
>   if (stringDistance == null) {
>     stringDistance = checker.getStringDistance();
>   } else if (stringDistance != checker.getStringDistance()) {
>     throw new IllegalArgumentException(
>         "All checkers need to use the same StringDistance.");
>   }
> } catch (UnsupportedOperationException uoe) {
>   // ignore
> }
> {code}
> The check {{stringDistance != checker.getStringDistance()}} compares by 
> reference, so if you use two or more spellcheckers with the same distance 
> algorithm, the exception is thrown anyway.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6271) ConjunctionSolrSpellChecker wrong check for same string distance

2015-12-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15041831#comment-15041831
 ] 

ASF subversion and git services commented on SOLR-6271:
---

Commit 1718000 from jd...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1718000 ]

SOLR-6271: fix StringDistance comparison in CSSC. ( This closes #135 )

> ConjunctionSolrSpellChecker wrong check for same string distance
> 
>
> Key: SOLR-6271
> URL: https://issues.apache.org/jira/browse/SOLR-6271
> Project: Solr
>  Issue Type: Bug
>  Components: spellchecker
>Affects Versions: 4.9
>Reporter: Igor Kostromin
>Assignee: James Dyer
> Attachments: SOLR-6271.patch, SOLR-6271.patch
>
>
> See ConjunctionSolrSpellChecker.java:
> {code}
> try {
>   if (stringDistance == null) {
>     stringDistance = checker.getStringDistance();
>   } else if (stringDistance != checker.getStringDistance()) {
>     throw new IllegalArgumentException(
>         "All checkers need to use the same StringDistance.");
>   }
> } catch (UnsupportedOperationException uoe) {
>   // ignore
> }
> {code}
> The check {{stringDistance != checker.getStringDistance()}} compares by 
> reference, so if you use two or more spellcheckers with the same distance 
> algorithm, the exception is thrown anyway.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8368) A SolrCore needs to replay it's tlog before the leader election process.

2015-12-04 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15041855#comment-15041855
 ] 

Mark Miller commented on SOLR-8368:
---

I keep trying to think of reasons we don't have to do this, but I end up 
thinking more about the possible issues with the current system if we don't.

> A SolrCore needs to replay it's tlog before the leader election process.
> 
>
> Key: SOLR-8368
> URL: https://issues.apache.org/jira/browse/SOLR-8368
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>
> If we do it after, as we do now, the correct leader may not be able to become 
> the leader.






[jira] [Commented] (SOLR-8368) A SolrCore needs to replay it's tlog before the leader election process.

2015-12-04 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15041693#comment-15041693
 ] 

Mark Miller commented on SOLR-8368:
---

Of course, replaying a tlog when everything works well should be an exceptional 
case.

> A SolrCore needs to replay it's tlog before the leader election process.
> 
>
> Key: SOLR-8368
> URL: https://issues.apache.org/jira/browse/SOLR-8368
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>
> If we do it after, as we do now, the correct leader may not be able to become 
> the leader.






[jira] [Commented] (SOLR-8364) SpellCheckComponentTest occasionally fails

2015-12-04 Thread James Dyer (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15041747#comment-15041747
 ] 

James Dyer commented on SOLR-8364:
--

Ok, looking closely at the log, I see this:

{noformat}
[junit4]   2> 1921118 WARN  
(TEST-SpellCheckComponentTest.test-seed#[110D525A21D16B1]) [] 
o.a.s.c.SolrCore [collection1] PERFORMANCE WARNING: Overlapping 
onDeckSearchers=2
{noformat}

...which seems to say we can remedy this (as suggested) with 
'waitSearcher=true'.  But digging through TestHarness, what it does is issue 
a commit to the core, and the [reference 
guide|https://cwiki.apache.org/confluence/display/solr/Uploading+Data+with+Index+Handlers#UploadingDatawithIndexHandlers-XMLFormattedIndexUpdates]
 says the default for 'waitSearcher' there is already 'true'.  So I am not 
sure adding it would change anything.

Perhaps we can instead remedy this by adding this to the test's solrconfig.xml?
{code:xml}

  false
  1

{code}

> SpellCheckComponentTest occasionally fails
> --
>
> Key: SOLR-8364
> URL: https://issues.apache.org/jira/browse/SOLR-8364
> Project: Solr
>  Issue Type: Bug
>  Components: spellchecker
>Affects Versions: 6.0
>Reporter: James Dyer
>Priority: Minor
>
> This failure did not reproduce for me in Linux or Windows with the same seed.
> {quote}
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5439/
> : Java: 64bit/jdk1.8.0_66 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC
> : 
> : 1 tests failed.
> : FAILED:  org.apache.solr.handler.component.SpellCheckComponentTest.test
> : 
> : Error Message:
> : List size mismatch @ spellcheck/suggestions
> : 
> : Stack Trace:
> : java.lang.RuntimeException: List size mismatch @ spellcheck/suggestions
> {quote}






[jira] [Created] (SOLR-8371) Try and prevent too many recovery requests from stacking up and clean up some faulty logic.

2015-12-04 Thread Mark Miller (JIRA)
Mark Miller created SOLR-8371:
-

 Summary: Try and prevent too many recovery requests from stacking 
up and clean up some faulty logic.
 Key: SOLR-8371
 URL: https://issues.apache.org/jira/browse/SOLR-8371
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller









[jira] [Updated] (SOLR-8371) Try and prevent too many recovery requests from stacking up and clean up some faulty logic.

2015-12-04 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8371:
--
Attachment: SOLR-8371.patch

Patch I've started playing with attached.

> Try and prevent too many recovery requests from stacking up and clean up some 
> faulty logic.
> ---
>
> Key: SOLR-8371
> URL: https://issues.apache.org/jira/browse/SOLR-8371
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8371.patch
>
>







[jira] [Commented] (SOLR-8368) A SolrCore needs to replay it's tlog before the leader election process.

2015-12-04 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15041686#comment-15041686
 ] 

Mike Drob commented on SOLR-8368:
-

This could take a long time, though, right? Is there any danger/downside to not 
having a leader while waiting for replay?

> A SolrCore needs to replay it's tlog before the leader election process.
> 
>
> Key: SOLR-8368
> URL: https://issues.apache.org/jira/browse/SOLR-8368
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>
> If we do it after, as we do now, the correct leader may not be able to become 
> the leader.






[jira] [Commented] (LUCENE-6910) fix 2 interesting and 2 trivial issues found by "Coverity scan results of Lucene"

2015-12-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15041744#comment-15041744
 ] 

ASF subversion and git services commented on LUCENE-6910:
-

Commit 1717993 from [~cpoerschke] in branch 'dev/trunk'
[ https://svn.apache.org/r1717993 ]

LUCENE-6910: fix 'if ... > Integer.MAX_VALUE' check in 
(Binary|Numeric)DocValuesFieldUpdates.merge 
(https://scan.coverity.com/projects/5620 CID 119973 and CID 120081)

> fix 2 interesting and 2 trivial issues found by "Coverity scan results of 
> Lucene"
> -
>
> Key: LUCENE-6910
> URL: https://issues.apache.org/jira/browse/LUCENE-6910
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Attachments: LUCENE-6910.patch, LUCENE-6910.patch
>
>
> https://scan.coverity.com/projects/5620 mentioned on the dev mailing list 
> (http://mail-archives.apache.org/mod_mbox/lucene-dev/201507.mbox/%3ccaftwexg51-jm_6mdeoz1reagn3xgkbetoz5ou_f+melboo1...@mail.gmail.com%3e)
>  in July 2015:
> * coverity CID 119973
> * coverity CID 120040
> * coverity CID 120081
> * coverity CID 120628






[jira] [Commented] (SOLR-8368) A SolrCore needs to replay it's tlog before the leader election process.

2015-12-04 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15041691#comment-15041691
 ] 

Mark Miller commented on SOLR-8368:
---

It could take a long time, but it's simply necessary for the architecture to 
prevent data loss. There is plenty of downside in it taking a long time.

> A SolrCore needs to replay it's tlog before the leader election process.
> 
>
> Key: SOLR-8368
> URL: https://issues.apache.org/jira/browse/SOLR-8368
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>
> If we do it after, as we do now, the correct leader may not be able to become 
> the leader.






Re: A 5.4 release?

2015-12-04 Thread Upayavira
Thanks Varun.

As a first time Release Manager, I'm currently working through the steps
needed to build a release. I'll call a vote as soon as I have uploaded
artifacts that are passing smoke tests.

Upayavira

On Fri, Dec 4, 2015, at 07:16 AM, Varun Thacker wrote:
> Hi Upayavira,
>
> I have committed a fix for SOLR-8363 . You should be good to go.
>
> On Thu, Dec 3, 2015 at 8:00 PM, Upayavira  wrote:
>> Agreed. Waiting on SOLR-8363 right now.
>>
>>
On Thu, Dec 3, 2015, at 02:14 PM, Shalin Shekhar Mangar wrote:
>>
> Agreed, what's done is done but in future, let's avoid shoving
>>
> anything other than critical bug fixes into release branches.
>>
>
>>
> On Thu, Dec 3, 2015 at 7:42 PM, Robert Muir  wrote:
>>
> > A really strong reason would be to not shove last minute
> > changes right
>>
> > before releases.
>>
> >
>>
> > On Thu, Dec 3, 2015 at 9:08 AM, Anshum Gupta
> >  wrote:
>>
> >> I'll admit I committed it thinking it was safe and didn't bring any
>>
> >> flakiness into the system but it's not something critical. I'll
> >> let the
>>
> >> release manager decide if it's fine being released with 5.4. If he
> >> thinks
>>
> >> otherwise (or someone else has a really strong reason) we can
> >> roll this
>>
> >> back, though I don't really see a reason unless we see broken
> >> builds due to
>>
> >> this commit.
>>
> >>
>>
> >> On Thu, Dec 3, 2015 at 3:07 PM, Adrien Grand 
> >> wrote:
>>
> >>>
>>
> >>> SOLR-8330 is not critical so I don't think it should have been
> >>> committed
>>
> >>> to the 5.4 branch. This gives CI too little time to find problems
> >>> before
>>
> >>> Upayavira cuts a release candidate.
>>
> >>>
>>
> >>> Le jeu. 3 déc. 2015 à 03:34, Anshum Gupta 
> >>> a écrit
>>
> >>> :
>>
> 
>>
>  Hi,
>>
> 
>>
>  I'd like to get SOLR-8330 in for 5.4. I'm currently merging and
>  running
>>
>  tests so let me know if I shouldn't be merging this in.
>>
> 
>>
> 
>>
>  On Thu, Nov 26, 2015 at 11:00 PM, Upayavira 
>  wrote:
>>
> >
>>
> > Thanks to Steve and Uwe, we now have both ASF and Policeman
> > Jenkins
>>
> > pointing at the 5.4 branch.
>>
> >
>>
> > Upayavira
>>
> >
>>
> > On Thu, Nov 26, 2015, at 04:10 PM, Upayavira wrote:
>>
> > > thx :-)
>>
> > >
>>
> > > On Thu, Nov 26, 2015, at 04:07 PM, Noble Paul wrote:
>>
> > > > OK . So I need to commit my fixes there. I missed the branch
>>
> > > > creation
>>
> > > > mail
>>
> > > >
>>
> > > >
>>
> > > > On Thu, Nov 26, 2015 at 9:34 PM, Noble Paul
> > > > 
>>
> > > > wrote:
>>
> > > > > @Upayavira is there a branch created for 5.4 already. I
> > > > > see one
>>
> > > > > already
>>
> > > > >
>>
> > > > > On Thu, Nov 26, 2015 at 2:08 AM, Erick Erickson
>>
> > > > >  wrote:
>>
> > > > >> Do note that this is the Thanksgiving holiday here in the
> > > > >> US,
>>
> > > > >> lots of
>>
> > > > >> people are out for the week. Mostly FYI, just don't be
> > > > >> surprised
>>
> > > > >> if
>>
> > > > >> you get more traffic on this starting next week ;)
>>
> > > > >>
>>
> > > > >> On Wed, Nov 25, 2015 at 11:35 AM, Timothy Potter
>>
> > > > >>  wrote:
>>
> > > > >>> Ok, those fixes are in 5.4 now, thanks!
>>
> > > > >>>
>>
> > > > >>> On Wed, Nov 25, 2015 at 9:49 AM, Upayavira
> > > > >>> 
>>
> > > > >>> wrote:
>>
> > > >  I'm for one am okay with these going into 5.4.
>>
> > > > 
>>
> > > >  Upayavira
>>
> > > > 
>>
> > > >  On Wed, Nov 25, 2015, at 05:28 PM, Timothy Potter
> > > >  wrote:
>>
> > > > > I would like to put SOLR-7169 (also fixes 8267) and
> > > > > SOLR-8101
>>
> > > > > into
>>
> > > > > 5.4. I'll commit to trunk and 5x today ... let me know
> > > > >  if
>>
> > > > > there are
>>
> > > > > any objections to also including in 5.4 branch
>>
> > > > >
>>
> > > > > Tim
>>
> > > > >
>>
> > > > > On Wed, Nov 25, 2015 at 6:05 AM, Upayavira
> > > > > 
>>
> > > > > wrote:
>>
> > > > > > I shall shortly create the 5.4 release branch. From
> > > > > > this
>>
> > > > > > moment, the feature
>>
> > > > > > freeze starts.
>>
> > > > > >
>>
> > > > > > Looking through JIRA, I see some 71 tickets assigned
> > > > > > to fix
>>
> > > > > > version 5.4. I
>>
> > > > > > suspect we won't be able to fix all 71 in one week,
> > > > > > so I
>>
> > > > > > expect that the
>>
> > > > > > majority will be pushed, after this release, to 5.5.
>>
> > > > > >
>>
> > > > > > Looking 

[jira] [Created] (SOLR-8372) Canceled recovery can lead to data loss

2015-12-04 Thread Yonik Seeley (JIRA)
Yonik Seeley created SOLR-8372:
--

 Summary: Canceled recovery can lead to data loss
 Key: SOLR-8372
 URL: https://issues.apache.org/jira/browse/SOLR-8372
 Project: Solr
  Issue Type: Bug
Reporter: Yonik Seeley


A recovery via index replication tells the update log to start buffering 
updates.  If that recovery is canceled for whatever reason by the replica, the 
RecoveryStrategy calls ulog.dropBufferedUpdates() which stops buffering and 
places the UpdateLog back in active mode.  If updates come from the leader 
after this point (and before ReplicationStrategy retries recovery), the update 
will be processed as normal and added to the transaction log. If the server is 
bounced, those last updates to the transaction log look normal (no FLAG_GAP) 
and can be used to determine who is more up to date. 
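The race Yonik describes can be modeled with a tiny state machine. Everything below is a hypothetical illustration — `MiniUpdateLog`, `bufferUpdates`, and the " [buffered]" tagging are stand-ins, not Solr's actual UpdateLog API — showing how an update that arrives after a canceled recovery lands in the log as a normal-looking entry:

```java
import java.util.ArrayList;
import java.util.List;

enum LogState { ACTIVE, BUFFERING }

// Hypothetical model of the buffering race; not Solr's real UpdateLog.
class MiniUpdateLog {
    LogState state = LogState.ACTIVE;
    final List<String> tlog = new ArrayList<>();

    void bufferUpdates()       { state = LogState.BUFFERING; } // recovery begins
    void dropBufferedUpdates() { state = LogState.ACTIVE; }    // recovery canceled

    void add(String update) {
        // In ACTIVE mode the entry is written like any normal update (no gap
        // flag), even though the index behind it is stale.
        tlog.add(update + (state == LogState.BUFFERING ? " [buffered]" : ""));
    }
}

public class CanceledRecovery {
    public static void main(String[] args) {
        MiniUpdateLog ulog = new MiniUpdateLog();
        ulog.bufferUpdates();          // replication recovery starts
        ulog.dropBufferedUpdates();    // replica cancels -> back to ACTIVE
        ulog.add("doc42 from leader"); // arrives before recovery is retried
        // After a restart this entry looks complete and unflagged, so peer
        // sync may wrongly judge this replica the most up to date.
        System.out.println(ulog.tlog);
    }
}
```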






Re: A 5.4 release?

2015-12-04 Thread Michael McCandless
On Fri, Dec 4, 2015 at 2:29 PM, Upayavira  wrote:
>
> On Fri, Dec 4, 2015, at 06:10 PM, Michael McCandless wrote:
>>
>> On Fri, Dec 4, 2015 at 11:34 AM, Upayavira  wrote:
>>
>> > As a first time Release Manager
>>
>> Thanks Upayavira!
>>
>> But please, please, please take advantage of your newness to this, to
>> edit https://wiki.apache.org/lucene-java/ReleaseTodo when things are
>> confusing/missing/etc.!
>>
>> Being new to something is unfortunately rare and we all quickly become
>> "release blind" after doing a couple releases.
>
> Okay - will do.
>
> 1. Make sure your key is up on the pgp.mit.edu before running the build.
> 2. Make sure you are using ant 1.8, not ant 1.9.
> 3. To be found out shortly... :-)

Thanks Upayavira: your newness is paying off already!  Just be sure to
edit the wiki accordingly :)

Mike McCandless

http://blog.mikemccandless.com




Re: Lucene/Solr git mirror will soon turn off

2015-12-04 Thread Scott Blum
Ouch... not having an official mirror would be a huge burden on those of us
managing org-specific forks. :(

On Fri, Dec 4, 2015 at 3:57 PM, Michael McCandless <
luc...@mikemccandless.com> wrote:

> Hello devs,
>
> The infra team has notified us (Lucene/Solr) that in 26 days our
> git-svn mirror will be turned off, because running it consumes too
> many system resources, affecting other projects, apparently because of
> a memory leak in git-svn.
>
> Does anyone know of a link to this git-svn issue?  Is it a known
> issue?  If there's something simple we can do (remove old jars from
> our svn history, remove old branches), maybe we can sidestep the issue
> and infra will allow it to keep running?
>
> Or maybe someone in the Lucene/Solr dev community with prior
> experience with git-svn could volunteer to play with it to see if
> there's a viable solution, maybe with command-line options e.g. to
> only mirror specific branches (trunk, 5.x)?
>
> Or maybe it's time for us to switch to git, but there are problems
> there too, e.g. we are currently missing large parts of our svn
> history from the mirror now and it's not clear whether that would be
> fixed if we switched:
> https://issues.apache.org/jira/browse/INFRA-10828  Also, because we
> used to add JAR files to svn, the "git clone" would likely take
> several GBs unless we remove those JARs from our history.
>
> Or if anyone has any other ideas, we should explore them, because
> otherwise in 26 days there will be no more updates to the git mirror
> of Lucene and Solr sources...
>
> Thanks,
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


Re: Lucene/Solr git mirror will soon turn off

2015-12-04 Thread Dawid Weiss
> I don't think jar files are 'history' and it was a mistake we had so
> many in source control before we cleaned that up. it is much better
> without them.

Depends how you look at it. If your goal is to be able to actually
build ancient versions then dropping those JARs is going to be a real
pain. I think they should stay. Like I said, git is smart enough to
omit objects that aren't referenced from the cloned branch. The
conversion from SVN would have to be smart, but it's all doable.

> this bloats the repository, makes clone slow for someone new who just
> wants to check it out to work on it, etc.

No, not really. There is a dozen ways to do it without cloning the
full repo (provide a patch with --depth 1, clone a selective branch,
etc.). We've had that discussion before. I know you won't accept
rational arguments. :)

D.




Re: Lucene/Solr git mirror will soon turn off

2015-12-04 Thread Gus Heck
If we moved to git would a read only svn for older versions still exist? If
so no reason to keep any jars at all in git.
On Dec 4, 2015 4:22 PM, "Robert Muir"  wrote:

> On Fri, Dec 4, 2015 at 4:14 PM, Dawid Weiss  wrote:
> >> [...] several GBs unless we remove those JARs from our history.
> >
> > 1) History is important, don't dump it.
>
> I don't think jar files are 'history' and it was a mistake we had so
> many in source control before we cleaned that up. it is much better
> without them.
>
> this bloats the repository, makes clone slow for someone new who just
> wants to check it out to work on it, etc.
>
> I wouldn't be surprised if it contributes to the system resources
> issue at hand: which impacts *real history*
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


Re: Lucene/Solr git mirror will soon turn off

2015-12-04 Thread Mike Drob
> Does anyone know of a link to this git-svn issue?  Is it a known
issue?  If there's something simple we can do (remove old jars from
our svn history, remove old branches), maybe we can sidestep the issue
and infra will allow it to keep running?

I believe it is partially covered under
https://issues.apache.org/jira/browse/INFRA-9182

On Fri, Dec 4, 2015 at 2:57 PM, Michael McCandless <
luc...@mikemccandless.com> wrote:

>
>


[jira] [Updated] (SOLR-7304) Spellcheck.collate Sometimes Invalidates Range Queries

2015-12-04 Thread James Dyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Dyer updated SOLR-7304:
-
Attachment: SOLR-7304.patch

Here is a patch with the fix.  I will commit this next week if everything 
checks out ok.

> Spellcheck.collate Sometimes Invalidates Range Queries
> --
>
> Key: SOLR-7304
> URL: https://issues.apache.org/jira/browse/SOLR-7304
> Project: Solr
>  Issue Type: Bug
>  Components: spellchecker
>Affects Versions: 4.9
> Environment: Jetty
> Debian
>Reporter: Hakim
>Priority: Minor
>  Labels: range, spellchecker
> Fix For: 4.9
>
> Attachments: SOLR-7304.patch, SOLR-7304.patch
>
>
> I have an error with SpellCheckComponent since I have added this 
> SearchComponent to /select RequestHandler (see solrconfig.xml).
>   
> 
>  
>explicit
>10
>titre
> 
>on
>default
>true
>3
>3
>5
>true
>true
>10
>1
>false
>false
>  
> The error seems to be related to range queries, with the [.. to ..] written 
> in lowercase. The query performed by the SpellCheck component using 'to' in 
> lower case throws the RANGE_GOOP error.
> 101615 [qtp2145626092-38] WARN  org.apache.solr.spelling.SpellCheckCollator  
> - Exception trying to re-query to check if a spell check possibility would 
> return any hits.
> org.apache.solr.common.SolrException: org.apache.solr.search.SyntaxError: 
> Cannot parse 'offredemande:offre AND categorieparente:"audi" AND 
> prix:[216 to 2250008} AND anneemodele:[2003 to 2008} AND etat:"nauf"': 
> Encountered "  "2250008 "" at line 1, column 68.
> Was expecting one of:
> "]" ...
> "}" ...
> at 
> org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:205)
> at 
> org.apache.solr.spelling.SpellCheckCollator.collate(SpellCheckCollator.java:141)
> at 
> org.apache.solr.handler.component.SpellCheckComponent.addCollationsToResponse(SpellCheckComponent.java:230)
> at 
> org.apache.solr.handler.component.SpellCheckComponent.process(SpellCheckComponent.java:197)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1962)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1645)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:564)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:578)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:498)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:183)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1045)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:199)
> at 
> org.eclipse.jetty.server.handler.IPAccessHandler.handle(IPAccessHandler.java:220)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:109)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:98)
> at org.eclipse.jetty.server.Server.handle(Server.java:461)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:284)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:534)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:607)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:536)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.solr.search.SyntaxError: Cannot parse 
> 'offredemande:offre AND 

[jira] [Comment Edited] (SOLR-7304) Spellcheck.collate Sometimes Invalidates Range Queries

2015-12-04 Thread James Dyer (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15042237#comment-15042237
 ] 

James Dyer edited comment on SOLR-7304 at 12/4/15 9:06 PM:
---

Attached is a patch with a failing unit test.  To reproduce this issue we use 
"spellcheck.alternativeTermCount" while having the word "to" in the index.  We 
also use a "queryAnalyzerFieldType" that performs lowercasing.

The test case queries:
bq. id:[1 TO 10] AND lowerfilt:lovw
And expects back:
bq. id:[1 TO 10] AND lowerfilt:love
But instead gets:
bq. id:[1 to 10] AND lowerfilt:love

Both "to" and "and" are in the index.  However, SpellingQueryConverter treats 
the boolean AND/OR/NOT operators special.  I think the easiest fix here is to 
have S.Q.C. also treat "TO" special, at least in cases where it occurs somewhat 
after [ or { and somewhat before ] or }.



was (Author: jdyer):
Attached is a patch with a failing unit test.  To reproduce this issue we use 
"spellcheck.alternativeTermCount" while having the word "to" in the index.  We 
also use a "queryAnalyzerFieldType" that performs lowercasing.

The test case queries:
bq. id:[1 TO 10] AND lowerfilt:lovw
And expects back:
bq. id:[1 TO 10] AND lowerfilt:love
But instead gets:
id:[1 to 10] AND lowerfilt:love

Both "to" and "and" are in the index.  However, SpellingQueryConverter treats 
the boolean AND/OR/NOT operators special.  I think the easiest fix here is to 
have S.Q.C. also treat "TO" special, at least in cases where it occurs somewhat 
after [ or { and somewhat before ] or }.
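The proposed special-casing of "TO" inside range brackets could look roughly like the sketch below. This is a hedged illustration, not the actual SpellingQueryConverter code: `RangeAwareTokens` and `isRangeOperator` are hypothetical names, and the regex only approximates "somewhat after [ or { and somewhat before ] or }".

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RangeAwareTokens {
    // Matches a TO that sits between an opening [ or { and a closing ] or }.
    private static final Pattern RANGE_TO =
        Pattern.compile("[\\[{][^\\]}]*\\bTO\\b[^\\[{]*[\\]}]");

    static boolean isRangeOperator(String query, int tokenStart, int tokenEnd) {
        Matcher m = RANGE_TO.matcher(query);
        while (m.find()) {
            if (m.start() <= tokenStart && tokenEnd <= m.end()) {
                return true; // inside a range clause: don't spell-correct it
            }
        }
        return false;
    }

    public static void main(String[] args) {
        String q = "id:[1 TO 10] AND lowerfilt:lovw";
        int s = q.indexOf("TO");
        System.out.println(isRangeOperator(q, s, s + 2));  // range operator: skip it
        String q2 = "went TO the store";
        int s2 = q2.indexOf("TO");
        System.out.println(isRangeOperator(q2, s2, s2 + 2)); // plain token: correct it
    }
}
```

A converter with this guard would leave `id:[1 TO 10]` intact while still offering "love" for "lovw".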


> Spellcheck.collate Sometimes Invalidates Range Queries
> --
>
> Key: SOLR-7304
> URL: https://issues.apache.org/jira/browse/SOLR-7304
> Project: Solr
>  Issue Type: Bug
>  Components: spellchecker
>Affects Versions: 4.9
> Environment: Jetty
> Debian
>Reporter: Hakim
>Priority: Minor
>  Labels: range, spellchecker
> Fix For: 4.9
>
> Attachments: SOLR-7304.patch
>
>
> I have an error with SpellCheckComponent since I have added this 
> SearchComponent to /select RequestHandler (see solrconfig.xml).
>   
> 
>  
>explicit
>10
>titre
> 
>on
>default
>true
>3
>3
>5
>true
>true
>10
>1
>false
>false
>  
> The error seems to be related to range queries, with the [.. to ..] written 
> in lowercase. The query performed by the SpellCheck component using 'to' in 
> lower case throws the RANGE_GOOP error.
> 101615 [qtp2145626092-38] WARN  org.apache.solr.spelling.SpellCheckCollator  
> - Exception trying to re-query to check if a spell check possibility would 
> return any hits.
> org.apache.solr.common.SolrException: org.apache.solr.search.SyntaxError: 
> Cannot parse 'offredemande:offre AND categorieparente:"audi" AND 
> prix:[216 to 2250008} AND anneemodele:[2003 to 2008} AND etat:"nauf"': 
> Encountered "  "2250008 "" at line 1, column 68.
> Was expecting one of:
> "]" ...
> "}" ...
> at 
> org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:205)
> at 
> org.apache.solr.spelling.SpellCheckCollator.collate(SpellCheckCollator.java:141)
> at 
> org.apache.solr.handler.component.SpellCheckComponent.addCollationsToResponse(SpellCheckComponent.java:230)
> at 
> org.apache.solr.handler.component.SpellCheckComponent.process(SpellCheckComponent.java:197)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1962)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1645)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:564)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:578)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:498)
> at 
> 

[jira] [Updated] (SOLR-7304) Spellcheck.collate Sometimes Invalidates Range Queries

2015-12-04 Thread James Dyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Dyer updated SOLR-7304:
-
Attachment: SOLR-7304.patch

Attached is a patch with a failing unit test.  To reproduce this issue we use 
"spellcheck.alternativeTermCount" while having the word "to" in the index.  We 
also use a "queryAnalyzerFieldType" that performs lowercasing.

The test case queries:
bq. id:[1 TO 10] AND lowerfilt:lovw
And expects back:
bq. id:[1 TO 10] AND lowerfilt:love
But instead gets:
id:[1 to 10] AND lowerfilt:love

Both "to" and "and" are in the index.  However, SpellingQueryConverter treats 
the boolean AND/OR/NOT operators special.  I think the easiest fix here is to 
have S.Q.C. also treat "TO" special, at least in cases where it occurs somewhat 
after [ or { and somewhat before ] or }.


> Spellcheck.collate Sometimes Invalidates Range Queries
> --
>
> Key: SOLR-7304
> URL: https://issues.apache.org/jira/browse/SOLR-7304
> Project: Solr
>  Issue Type: Bug
>  Components: spellchecker
>Affects Versions: 4.9
> Environment: Jetty
> Debian
>Reporter: Hakim
>Priority: Minor
>  Labels: range, spellchecker
> Fix For: 4.9
>
> Attachments: SOLR-7304.patch
>
>
> I have an error with SpellCheckComponent since I have added this 
> SearchComponent to /select RequestHandler (see solrconfig.xml).
>   
> 
>  
>explicit
>10
>titre
> 
>on
>default
>true
>3
>3
>5
>true
>true
>10
>1
>false
>false
>  
> The error seems to be related to range queries, with the [.. to ..] written 
> in lowercase. The query performed by the SpellCheck component using 'to' in 
> lower case throws the RANGE_GOOP error.
> 101615 [qtp2145626092-38] WARN  org.apache.solr.spelling.SpellCheckCollator  
> - Exception trying to re-query to check if a spell check possibility would 
> return any hits.
> org.apache.solr.common.SolrException: org.apache.solr.search.SyntaxError: 
> Cannot parse 'offredemande:offre AND categorieparente:"audi" AND 
> prix:[216 to 2250008} AND anneemodele:[2003 to 2008} AND etat:"nauf"': 
> Encountered "  "2250008 "" at line 1, column 68.
> Was expecting one of:
> "]" ...
> "}" ...
> at 
> org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:205)
> at 
> org.apache.solr.spelling.SpellCheckCollator.collate(SpellCheckCollator.java:141)
> at 
> org.apache.solr.handler.component.SpellCheckComponent.addCollationsToResponse(SpellCheckComponent.java:230)
> at 
> org.apache.solr.handler.component.SpellCheckComponent.process(SpellCheckComponent.java:197)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1962)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1645)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:564)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:578)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:498)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:183)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1045)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:199)
> at 
> org.eclipse.jetty.server.handler.IPAccessHandler.handle(IPAccessHandler.java:220)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:109)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:98)
> at org.eclipse.jetty.server.Server.handle(Server.java:461)
> at 

[jira] [Updated] (SOLR-8292) TransactionLog.next() does not honor contract and return null for EOF

2015-12-04 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated SOLR-8292:

Attachment: SOLR-8292.patch

Here's the start of a patch to get better logging around what is happening.

I think the intent of the "return null for EOF" was to produce a null after the 
last complete record had been read. An easily checked "we're done" marker.

In the cases where it actually throws an EOF, I think there must be some 
truncation and a corrupt tlog file where it fails in the middle of a record.

> TransactionLog.next() does not honor contract and return null for EOF
> -
>
> Key: SOLR-8292
> URL: https://issues.apache.org/jira/browse/SOLR-8292
> Project: Solr
>  Issue Type: Bug
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-8292.patch
>
>
> This came to light in CDCR testing, which stresses this code a lot, there's a 
> stack trace showing this line (641 trunk) throwing an EOF exception:
> o = codec.readVal(fis);
> At first I thought to just wrap reading fis in a try/catch and return null, 
> but looking at the code a bit more I'm not so sure; that seems like it'd mask 
> what looks at first glance like a bug in the logic.
> A few lines earlier (633-4) there are these lines:
> // shouldn't currently happen - header and first record are currently written 
> at the same time
> if (fis.position() >= fos.size()) {
> Why are we comparing the input file position against the size of the 
> output file? Maybe because the 'i' key is right next to the 'o' key? The 
> comment hints that it's checking for the ability to read the first record in 
> input stream along with the header. And perhaps there's a different issue 
> here because the expectation clearly is that the first record should be there 
> if the header is.
> So what's the right thing to do? Wrap in a try/catch and return null for EOF? 
> Change the test? Do both?
> I can take care of either, but wanted a clue whether the comparison of fis to 
> fos is intended.
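A minimal sketch of the try/catch option discussed above, using a plain DataInputStream in place of the tlog codec (the class and record format here are invented stand-ins, not Solr's actual TransactionLog code):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.EOFException;
import java.io.IOException;

public class EofTolerantReader {
    private final DataInputStream in;

    public EofTolerantReader(DataInputStream in) { this.in = in; }

    /** Returns the next record, or null once EOF is reached (instead of throwing). */
    public Integer next() throws IOException {
        try {
            return in.readInt();          // stands in for codec.readVal(fis)
        } catch (EOFException eof) {
            return null;                  // honor the "null means done" contract
        }
    }

    public static void main(String[] args) throws IOException {
        // Write two "records", then read until next() reports a clean EOF.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(buf)) {
            out.writeInt(1);
            out.writeInt(2);
        }
        EofTolerantReader r = new EofTolerantReader(
            new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));
        Integer rec;
        while ((rec = r.next()) != null) {
            System.out.println(rec);      // prints 1 then 2, then the loop ends
        }
    }
}
```

As the issue text points out, a blanket catch like this could mask a truncated, corrupt tlog; real code would likely want to distinguish EOF at a record boundary from EOF in the middle of a record.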



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr git mirror will soon turn off

2015-12-04 Thread Robert Muir
On Fri, Dec 4, 2015 at 4:14 PM, Dawid Weiss  wrote:
>> [...] several GBs unless we remove those JARs from our history.
>
> 1) History is important, don't dump it.

I don't think jar files are 'history' and it was a mistake we had so
many in source control before we cleaned that up. it is much better
without them.

this bloats the repository, makes clone slow for someone new who just
wants to check it out to work on it, etc.

I wouldn't be surprised if it contributes to the system resources
issue at hand: which impacts *real history*

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8373) KerberosPlugin: Using multiple nodes on same machine leads clients to fetch TGT for every request

2015-12-04 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-8373:
---
Attachment: SOLR-8373.patch

I'm testing this patch that lets the clients ignore the cookies when talking to 
the kerberized Solr nodes.

> KerberosPlugin: Using multiple nodes on same machine leads clients to fetch 
> TGT for every request
> -
>
> Key: SOLR-8373
> URL: https://issues.apache.org/jira/browse/SOLR-8373
> Project: Solr
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
>Priority: Critical
> Attachments: SOLR-8373.patch
>
>
> Kerberized solr nodes accept negotiate/spnego/kerberos requests and processes 
> them. It also passes back to the client a cookie called "hadoop.auth" (which 
> is currently unused, but will eventually be used for delegation tokens). 
> If two or more nodes are on the same machine, they all send out the cookie 
> which have the same domain (hostname) and same path, but different cookie 
> values.
> Upon receipt at the client, if a cookie is rejected (which in this case will 
> be), the client compulsorily gets a ​​*new*​​ TGT from the KDC instead of 
> reading the same ticket from the ticketcache. This is causing the heavy 
> traffic at the KDC, plus intermittent "Request is a replay" (which indicates 
> race condition at KDC while handing out the TGT for the same principal).
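The "clients ignore the cookies" idea in the patch can be illustrated with the JDK's own cookie machinery (this is a stdlib sketch of the concept only; the actual patch presumably configures SolrJ's HttpClient, and the host and token values below are made up):

```java
import java.net.CookieManager;
import java.net.CookiePolicy;
import java.net.URI;
import java.util.List;
import java.util.Map;

public class IgnoreCookiesSketch {
    // A client-side cookie handler that drops every cookie, so a
    // "hadoop.auth" Set-Cookie from one node is never stored and
    // replayed (then rejected) against a sibling node on the same host.
    static CookieManager ignoreAllCookies() {
        return new CookieManager(null, CookiePolicy.ACCEPT_NONE);
    }

    public static void main(String[] args) throws Exception {
        CookieManager cm = ignoreAllCookies();
        // Simulate a Solr node handing back its hadoop.auth cookie.
        cm.put(new URI("http://solrhost:8983/solr"),
               Map.of("Set-Cookie", List.of("hadoop.auth=node1-token; Path=/")));
        // Nothing was stored, so no stale cookie can trigger a fresh TGT fetch.
        System.out.println(cm.getCookieStore().getCookies());   // prints []
    }
}
```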



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr git mirror will soon turn off

2015-12-04 Thread Dawid Weiss
It'd be cool to actually reintegrate ancient CVS history as well (I
think not all of it was moved to SVN).

https://sourceforge.net/projects/lucene/

D.

On Fri, Dec 4, 2015 at 10:30 PM, Upayavira  wrote:
> Even if we moved to git and did an svn rm on
> https://svn.apache.org/repos/asf/lucene/dev, the entire history of Lucene
> would remain in the ASF Subversion repository. Nothing we can do to prevent
> that!!
>
> Upayavira
>
> On Fri, Dec 4, 2015, at 09:26 PM, Gus Heck wrote:
>
> If we moved to git would a read only svn for older versions still exist? If
> so no reason to keep any jars at all in git.
>
> On Dec 4, 2015 4:22 PM, "Robert Muir"  wrote:
>
> On Fri, Dec 4, 2015 at 4:14 PM, Dawid Weiss  wrote:
>>> [...] several GBs unless we remove those JARs from our history.
>>
>> 1) History is important, don't dump it.
>
> I don't think jar files are 'history' and it was a mistake we had so
> many in source control before we cleaned that up. it is much better
> without them.
>
> this bloats the repository, makes clone slow for someone new who just
> wants to check it out to work on it, etc.
>
> I wouldn't be surprised if it contributes to the system resources
> issue at hand: which impacts *real history*
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr git mirror will soon turn off

2015-12-04 Thread Robert Muir
On Fri, Dec 4, 2015 at 4:25 PM, Dawid Weiss  wrote:
>> I don't think jar files are 'history' and it was a mistake we had so
>> many in source control before we cleaned that up. it is much better
>> without them.
>
> Depends how you look at it. If your goal is to be able to actually
> build ancient versions then dropping those JARs is going to be a real
> pain. I think they should stay. Like I said, git is smart enough to
> omit objects that aren't referenced from the cloned branch. The
> conversion from SVN would have to be smart, but it's all doable.

I mentioned this same issue the last thread where we discussed that, I
do recommend to try to actually compile these old versions.

As an experiment, I checked out the release tag for 4.2
(http://svn.apache.org/repos/asf/lucene/dev/tags/lucene_solr_4_2_0)
and ran 'ant compile'

BUILD FAILED
/home/rmuir/lucene_solr_4_2_0/build.xml:107: The following error
occurred while executing this line:
/home/rmuir/lucene_solr_4_2_0/lucene/common-build.xml:656: The
following error occurred while executing this line:
/home/rmuir/lucene_solr_4_2_0/lucene/common-build.xml:479: The
following error occurred while executing this line:
/home/rmuir/lucene_solr_4_2_0/lucene/common-build.xml:1578: Class not
found: javac1.8

That release was only 2 years ago, and it's not the only problem you
will hit. Besides build issues and stuff, I know at least Solr had a
wildcard import, conflicting with the newly introduced
java.util.Base64 that will prevent its compile. And I feel like there
have been numerous sneaky generics issues that only Uwe seems to
understand.

Being able to build the old versions would require a good effort just
to figure out what build tools / compiler versions you need to do it
for the different timeframes, and git hashes aren't great if you want
to document that or try to make some fancy bisection tool.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr git mirror will soon turn off

2015-12-04 Thread Alexandre Rafalovitch
Maybe a silly question, but has anybody actually looked into
git-svn itself? E.g. talking to the git-svn team with our example to help
them troubleshoot the leak, or running a test sync under a profiler.
Also, it is running into OOM, but how big is the system doing the sync?
If the fix is upgrading the server from 8GB of memory to 16GB, that
might be an easier/cheaper course than moving the whole infrastructure
around. I am sure Lucidworks or Elastic could probably sponsor a
couple hundred bucks for a memory upgrade if that turned out to be the
real problem. :-)

Reading JIRA, I get a feeling that this problem with git-svn is mostly
treated as a blackbox. It feels like there might be other options.

Regards,
   Alex.

On 4 December 2015 at 15:57, Michael McCandless
 wrote:
> The infra team has notified us (Lucene/Solr) that in 26 days our
> git-svn mirror will be turned off, because running it consumes too
> many system resources, affecting other projects, apparently because of
> a memory leak in git-svn.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr git mirror will soon turn off

2015-12-04 Thread Dawid Weiss
> [...] several GBs unless we remove those JARs from our history.

1) History is important, don't dump it.
2) git isn't dumb -- git clone -b master --single-branch would only
fetch what's actually needed/ referenced. We could split the history
into "pre-ivy" and "post-ivy" branches so that fetching master is at
nearly no-cost, but if somebody wishes to they can still fetch
everything (I would, it's a one-time thing, typically).
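Dawid's point (2) can be demonstrated with a throwaway local repository (every path and branch name below is invented; the file:// URL forces a real fetch so unreferenced objects are actually skipped):

```shell
set -e
tmp=$(mktemp -d)

# Build a tiny "origin" with a heavy side branch, standing in for pre-ivy JAR history.
git init -q "$tmp/origin"
git -C "$tmp/origin" config user.email dev@example.com
git -C "$tmp/origin" config user.name dev
main=$(git -C "$tmp/origin" symbolic-ref --short HEAD)   # master or main, depending on git version

echo source > "$tmp/origin/file.txt"
git -C "$tmp/origin" add file.txt
git -C "$tmp/origin" commit -qm "post-ivy work"

git -C "$tmp/origin" checkout -qb pre-ivy
printf 'jar bytes' > "$tmp/origin/big.jar"               # stand-in for checked-in JARs
git -C "$tmp/origin" add big.jar
git -C "$tmp/origin" commit -qm "jar history"
git -C "$tmp/origin" checkout -q "$main"

# Fetch only the main branch; the pre-ivy branch and its blob never transfer.
git clone -q -b "$main" --single-branch "file://$tmp/origin" "$tmp/clone"
git -C "$tmp/clone" branch -r
```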

Dawid

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7495) Unexpected docvalues type NUMERIC when grouping by a int facet

2015-12-04 Thread Peter Ciuffetti (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15042147#comment-15042147
 ] 

Peter Ciuffetti commented on SOLR-7495:
---

The multi-value workaround is not available to me because I also need the 
fields affected by this bug to be sortable. And the string workaround would 
only give me sortable values if the ints were fixed width (some of my int 
fields are, some are not).
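The exception in the issue below names the schema-side fix itself ("Use UninvertingReader or index with docvalues"); a sketch of the latter, with the field name taken from the error message and the other attributes assumed:

```xml
<!-- enabling docValues keeps the field facetable, groupable and sortable -->
<field name="year" type="int" indexed="true" stored="true" docValues="true"/>
```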

> Unexpected docvalues type NUMERIC when grouping by a int facet
> --
>
> Key: SOLR-7495
> URL: https://issues.apache.org/jira/browse/SOLR-7495
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.0, 5.1, 5.2, 5.3
>Reporter: Fabio Batista da Silva
> Attachments: SOLR-7495.patch
>
>
> Hey All,
> After upgrading from Solr 4.10 to 5.1 with SolrCloud,
> I'm getting an IllegalStateException when I try to facet an int field.
> IllegalStateException: unexpected docvalues type NUMERIC for field 'year' 
> (expected=SORTED). Use UninvertingReader or index with docvalues.
> schema.xml
> {code}
> 
> 
> 
> 
> 
> 
>  multiValued="false" required="true"/>
>  multiValued="false" required="true"/>
> 
> 
>  stored="true"/>
> 
> 
> 
>  />
>  sortMissingLast="true"/>
>  positionIncrementGap="0"/>
>  positionIncrementGap="0"/>
>  positionIncrementGap="0"/>
>  precisionStep="0" positionIncrementGap="0"/>
>  positionIncrementGap="0"/>
>  positionIncrementGap="100">
> 
> 
>  words="stopwords.txt" />
> 
>  maxGramSize="15"/>
> 
> 
> 
>  words="stopwords.txt" />
>  synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
> 
> 
> 
>  positionIncrementGap="100">
> 
> 
>  words="stopwords.txt" />
> 
>  maxGramSize="15"/>
> 
> 
> 
>  words="stopwords.txt" />
>  synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
> 
> 
> 
>  class="solr.SpatialRecursivePrefixTreeFieldType" geo="true" 
> distErrPct="0.025" maxDistErr="0.09" units="degrees" />
> 
> id
> name
> 
> 
> {code}
> query :
> {code}
> http://solr.dev:8983/solr/my_collection/select?wt=json=id=index_type:foobar=true=year_make_model=true=true=year
> {code}
> Exception :
> {code}
> ull:org.apache.solr.common.SolrException: Exception during facet.field: year
> at org.apache.solr.request.SimpleFacets$3.call(SimpleFacets.java:627)
> at org.apache.solr.request.SimpleFacets$3.call(SimpleFacets.java:612)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at org.apache.solr.request.SimpleFacets$2.execute(SimpleFacets.java:566)
> at 
> org.apache.solr.request.SimpleFacets.getFacetFieldCounts(SimpleFacets.java:637)
> at 
> org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:280)
> at 
> org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:106)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:222)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1984)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:829)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:446)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
> at 
> 

[jira] [Updated] (SOLR-7304) Spellcheck.collate Sometimes Invalidates Range Queries

2015-12-04 Thread James Dyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Dyer updated SOLR-7304:
-
Summary: Spellcheck.collate Sometimes Invalidates Range Queries  (was: 
SyntaxError in SpellcheckComponent)

> Spellcheck.collate Sometimes Invalidates Range Queries
> --
>
> Key: SOLR-7304
> URL: https://issues.apache.org/jira/browse/SOLR-7304
> Project: Solr
>  Issue Type: Bug
>  Components: spellchecker
>Affects Versions: 4.9
> Environment: Jetty
> Debian
>Reporter: Hakim
>Priority: Minor
>  Labels: range, spellchecker
> Fix For: 4.9
>
>
> I have an error with SpellCheckComponent since I have added this 
> SearchComponent to /select RequestHandler (see solrconfig.xml).
>   
> 
>  
>explicit
>10
>titre
> 
>on
>default
>true
>3
>3
>5
>true
>true
>10
>1
>false
>false
>  
> The error seems to be related to range queries, with the [.. to ..] written 
> in lowercase. The query performed by the SpellCheck component using 'to' in 
> lower case throws the RANGE_GOOP error.
> 101615 [qtp2145626092-38] WARN  org.apache.solr.spelling.SpellCheckCollator  
> - Exception trying to re-query to check if a spell check possibility would 
> return any hits.
> org.apache.solr.common.SolrException: org.apache.solr.search.SyntaxError: 
> Cannot parse 'offredemande:offre AND categorieparente:"audi" AND 
> prix:[216 to 2250008} AND anneemodele:[2003 to 2008} AND etat:"nauf"': 
> Encountered "  "2250008 "" at line 1, column 68.
> Was expecting one of:
> "]" ...
> "}" ...
> at 
> org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:205)
> at 
> org.apache.solr.spelling.SpellCheckCollator.collate(SpellCheckCollator.java:141)
> at 
> org.apache.solr.handler.component.SpellCheckComponent.addCollationsToResponse(SpellCheckComponent.java:230)
> at 
> org.apache.solr.handler.component.SpellCheckComponent.process(SpellCheckComponent.java:197)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1962)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1645)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:564)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:578)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:498)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:183)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1045)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:199)
> at 
> org.eclipse.jetty.server.handler.IPAccessHandler.handle(IPAccessHandler.java:220)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:109)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:98)
> at org.eclipse.jetty.server.Server.handle(Server.java:461)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:284)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:534)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:607)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:536)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.solr.search.SyntaxError: Cannot parse 
> 'offredemande:offre AND categorieparente:"audi" AND prix:[216 to 2250008} 
> AND anneemodele:[2003 to 

[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.8.0_66) - Build # 14818 - Failure!

2015-12-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/14818/
Java: 32bit/jdk1.8.0_66 -client -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.handler.TestReplicationHandler.doTestStressReplication

Error Message:
timed out waiting for collection1 startAt time to exceed: Fri Dec 04 16:04:34 
AST 2015

Stack Trace:
java.lang.AssertionError: timed out waiting for collection1 startAt time to 
exceed: Fri Dec 04 16:04:34 AST 2015
at 
__randomizedtesting.SeedInfo.seed([2DF1BCDD852FECF6:F65ABC1B80078545]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.handler.TestReplicationHandler.watchCoreStartAt(TestReplicationHandler.java:1419)
at 
org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:771)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 10214 lines...]
   [junit4] Suite: org.apache.solr.handler.TestReplicationHandler
   [junit4] 

Re: Lucene/Solr git mirror will soon turn off

2015-12-04 Thread Upayavira
Even if we moved to git and did an svn rm on 
https://svn.apache.org/repos/asf/lucene/dev, the entire history of Lucene would 
remain in the ASF Subversion repository. Nothing we can do to prevent that!!

Upayavira

On Fri, Dec 4, 2015, at 09:26 PM, Gus Heck wrote:
> If we moved to git would a read only svn for older versions still
> exist? If so no reason to keep any jars at all in git.
>
> On Dec 4, 2015 4:22 PM, "Robert Muir"  wrote:
>> On Fri, Dec 4, 2015 at 4:14 PM, Dawid Weiss  wrote:
>>>> [...] several GBs unless we remove those JARs from our history.
>>>
>>> 1) History is important, don't dump it.
>>
>> I don't think jar files are 'history' and it was a mistake we had so
>> many in source control before we cleaned that up. it is much better
>> without them.
>>
>> this bloats the repository, makes clone slow for someone new who just
>> wants to check it out to work on it, etc.
>>
>> I wouldn't be surprised if it contributes to the system resources
>> issue at hand: which impacts *real history*
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org


[jira] [Commented] (LUCENE-6919) Change the Scorer API to expose an iterator instead of extending DocIdSetIterator

2015-12-04 Thread Ryan Ernst (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15042367#comment-15042367
 ] 

Ryan Ernst commented on LUCENE-6919:


+1 to the idea

> Change the Scorer API to expose an iterator instead of extending 
> DocIdSetIterator
> -
>
> Key: LUCENE-6919
> URL: https://issues.apache.org/jira/browse/LUCENE-6919
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-6919.patch
>
>
> I was working on trying to address the performance regression on LUCENE-6815 
> but this is hard to do without introducing specialization of 
> DisjunctionScorer which I'd like to avoid at all costs.
> I think the performance regression would be easy to address without 
> specialization if Scorers were changed to return an iterator instead of 
> extending DocIdSetIterator. So conceptually the API would move from
> {code}
> class Scorer extends DocIdSetIterator {
> }
> {code}
> to
> {code}
> class Scorer {
>   DocIdSetIterator iterator();
> }
> {code}
> This would help me because then if none of the sub clauses support two-phase 
> iteration, DisjunctionScorer could directly return the approximation as an 
> iterator instead of having to check if twoPhase == null at every iteration.
> Such an approach could also help remove some method calls. For instance 
> TermScorer.nextDoc calls PostingsEnum.nextDoc but with this change 
> TermScorer.iterator() could return the PostingsEnum and TermScorer would not 
> even appear in stack traces when scoring. I hacked a patch to see how much 
> that would help and luceneutil seems to like the change:
> {noformat}
> TaskQPS baseline  StdDev   QPS patch  StdDev  
>   Pct diff
>   Fuzzy1   88.54 (15.7%)   86.73 (16.6%)   
> -2.0% ( -29% -   35%)
>   AndHighLow  698.98  (4.1%)  691.11  (5.1%)   
> -1.1% (  -9% -8%)
>   Fuzzy2   26.47 (11.2%)   26.28 (10.3%)   
> -0.7% ( -19% -   23%)
>  MedSpanNear  141.03  (3.3%)  140.51  (3.2%)   
> -0.4% (  -6% -6%)
>   HighPhrase   60.66  (2.6%)   60.48  (3.3%)   
> -0.3% (  -5% -5%)
>  LowSpanNear   29.25  (2.4%)   29.21  (2.1%)   
> -0.1% (  -4% -4%)
>MedPhrase   28.32  (1.9%)   28.28  (2.0%)   
> -0.1% (  -3% -3%)
>LowPhrase   17.31  (2.1%)   17.29  (2.6%)   
> -0.1% (  -4% -4%)
> HighSloppyPhrase   10.93  (6.0%)   10.92  (6.0%)   
> -0.1% ( -11% -   12%)
>  MedSloppyPhrase   72.21  (2.2%)   72.27  (1.8%)
> 0.1% (  -3% -4%)
>  Respell   57.35  (3.2%)   57.41  (3.4%)
> 0.1% (  -6% -6%)
> HighSpanNear   26.71  (3.0%)   26.75  (2.5%)
> 0.1% (  -5% -5%)
> OrNotHighLow  803.46  (3.4%)  807.03  (4.2%)
> 0.4% (  -6% -8%)
>  LowSloppyPhrase   88.02  (3.4%)   88.77  (2.5%)
> 0.8% (  -4% -7%)
> OrNotHighMed  200.45  (2.7%)  203.83  (2.5%)
> 1.7% (  -3% -7%)
>   OrHighHigh   38.98  (7.9%)   40.30  (6.6%)
> 3.4% ( -10% -   19%)
> HighTerm   92.53  (5.3%)   95.94  (5.8%)
> 3.7% (  -7% -   15%)
>OrHighMed   53.80  (7.7%)   55.79  (6.6%)
> 3.7% (  -9% -   19%)
>   AndHighMed  266.69  (1.7%)  277.15  (2.5%)
> 3.9% (   0% -8%)
>  Prefix3   44.68  (5.4%)   46.60  (7.0%)
> 4.3% (  -7% -   17%)
>  MedTerm  261.52  (4.9%)  273.52  (5.4%)
> 4.6% (  -5% -   15%)
> Wildcard   42.39  (6.1%)   44.35  (7.8%)
> 4.6% (  -8% -   19%)
>   IntNRQ   10.46  (7.0%)   10.99  (9.5%)
> 5.0% ( -10% -   23%)
>OrNotHighHigh   67.15  (4.6%)   70.65  (4.5%)
> 5.2% (  -3% -   15%)
>OrHighNotHigh   43.07  (5.1%)   45.36  (5.4%)
> 5.3% (  -4% -   16%)
>OrHighLow   64.19  (6.4%)   67.72  (5.5%)
> 5.5% (  -6% -   18%)
>  AndHighHigh   64.17  (2.3%)   67.87  (2.1%)
> 5.8% (   1% -   10%)
>  LowTerm  642.94 (10.9%)  681.48  (8.5%)
> 6.0% ( -12% -   28%)
> OrHighNotMed   12.68  (6.9%)   13.51  (6.6%)
> 6.5% (  -6% -   21%)
> 
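The API move described in the issue can be sketched with toy stand-ins (DocIdSetIterator here is a minimal stub, not Lucene's class, and the doc IDs are made up):

```java
public class ScorerIteratorSketch {
    // Minimal stand-in for Lucene's DocIdSetIterator.
    interface DocIdSetIterator {
        int NO_MORE_DOCS = Integer.MAX_VALUE;
        int nextDoc();
    }

    // New shape: the Scorer *exposes* its iterator instead of *being* one,
    // so callers can pull the underlying enum out once and iterate it
    // directly -- the Scorer drops out of the per-doc call path entirely.
    static class Scorer {
        private final DocIdSetIterator delegate;
        Scorer(DocIdSetIterator delegate) { this.delegate = delegate; }
        DocIdSetIterator iterator() { return delegate; }   // e.g. the PostingsEnum itself
        float score() { return 1.0f; }
    }

    // Helper: an iterator over a fixed list of doc IDs.
    static DocIdSetIterator fromDocs(int... docs) {
        return new DocIdSetIterator() {
            int i = -1;
            public int nextDoc() {
                return ++i < docs.length ? docs[i] : NO_MORE_DOCS;
            }
        };
    }

    public static void main(String[] args) {
        Scorer scorer = new Scorer(fromDocs(3, 7, 42));
        DocIdSetIterator it = scorer.iterator();   // fetched once, outside the hot loop
        for (int doc = it.nextDoc(); doc != DocIdSetIterator.NO_MORE_DOCS; doc = it.nextDoc()) {
            System.out.println(doc);
        }
    }
}
```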

[jira] [Created] (SOLR-8374) Issue with _text_ type in schema file

2015-12-04 Thread Romit Singhai (JIRA)
Romit Singhai created SOLR-8374:
---

 Summary: Issue with _text_ type in schema file
 Key: SOLR-8374
 URL: https://issues.apache.org/jira/browse/SOLR-8374
 Project: Solr
  Issue Type: Bug
  Components: Hadoop Integration
Affects Versions: 5.2.1
Reporter: Romit Singhai
Priority: Critical


In data_driven_schema_configs, the warning says that the _text_ field can be 
removed if not needed. The Hadoop indexer fails to index data because the ping 
command could not find the collection required for indexing.

The ping command for the collection needs to be fixed (making _text_ optional), as 
_text_ adds significantly to the index size even if not needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (LUCENE-3341) Spellcheker is not checking word with less than 3 characters

2015-12-04 Thread James Dyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Dyer closed LUCENE-3341.
--
Resolution: Not A Problem

We can do what the user wants, using DirectSolrSpellChecker and setting the 
"minQueryLength" parameter.
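The resolution above would look something like this in solrconfig.xml (a sketch: the component wiring, field name, and the value 2 are assumptions, not taken from the issue; DirectSolrSpellChecker's default minQueryLength would otherwise skip the short words):

```xml
<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <str name="name">default</str>
    <str name="field">spell</str>
    <str name="classname">solr.DirectSolrSpellChecker</str>
    <!-- consider query terms as short as 2 characters -->
    <int name="minQueryLength">2</int>
  </lst>
</searchComponent>
```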

> Spellcheker is not checking word with less than 3 characters
> 
>
> Key: LUCENE-3341
> URL: https://issues.apache.org/jira/browse/LUCENE-3341
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spellchecker
>Affects Versions: 3.2
> Environment: Window XP, Java 6, JBoss 4.2.3GA
>Reporter: Devang Panchal
> Fix For: 3.2
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> *Problem:* SpellChecker is not checking the spelling of words shorter than 3 
> characters, e.g. "en", "am", "an", so these words remain misspelled 
> in the results.
> *Cause:*
> org.apache.lucene.search.spell.SpellChecker class is not adding in index 
> dictionary a word which has less than 3 characters. 
> The method indexDictionary() in the SpellChecker class is ignoring all 
> words shorter than 3 characters and not adding them to the index 
> dictionary.
> *Example code:*
> SpellChecker luceneSpellChecker = null;
> luceneSpellChecker = new SpellChecker(new RAMDirectory(), new 
> NGramDistance());
> luceneSpellChecker.indexDictionary(
>   new PlainTextDictionary( new 
> InputStreamReader(dictionaryFile, "UTF-8")),
>   10, 500, false);
> System.out.println("Word 'an' exist? "+luceneSpellChecker.exist("an");
> System.out.println("Word 'am' exist? "+luceneSpellChecker.exist("am");






Lucene/Solr git mirror will soon turn off

2015-12-04 Thread Michael McCandless
Hello devs,

The infra team has notified us (Lucene/Solr) that in 26 days our
git-svn mirror will be turned off, because running it consumes too
many system resources, affecting other projects, apparently because of
a memory leak in git-svn.

Does anyone know of a link to this git-svn issue?  Is it a known
issue?  If there's something simple we can do (remove old jars from
our svn history, remove old branches), maybe we can sidestep the issue
and infra will allow it to keep running?

Or maybe someone in the Lucene/Solr dev community with prior
experience with git-svn could volunteer to play with it to see if
there's a viable solution, maybe with command-line options e.g. to
only mirror specific branches (trunk, 5.x)?

Or maybe it's time for us to switch to git, but there are problems
there too, e.g. we are currently missing large parts of our svn
history from the mirror now and it's not clear whether that would be
fixed if we switched:
https://issues.apache.org/jira/browse/INFRA-10828  Also, because we
used to add JAR files to svn, the "git clone" would likely take
several GBs unless we remove those JARs from our history.

Or if anyone has any other ideas, we should explore them, because
otherwise in 26 days there will be no more updates to the git mirror
of Lucene and Solr sources...

Thanks,

Mike McCandless

http://blog.mikemccandless.com




[jira] [Commented] (LUCENE-6908) TestGeoUtils.testGeoRelations is buggy with irregular rectangles

2015-12-04 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15042248#comment-15042248
 ] 

Michael McCandless commented on LUCENE-6908:


Those benchmark results are nice [~nknize]!  Is the source for this bench 
checked in somewhere?  From this, it seems like we should switch to Sinnott's 
haversine?  It's the fastest and has the lowest error?

I beasted for a while and no failures!

+1 to commit!  Hopefully this means we can add {{DimensionalDistanceQuery}} and 
it just works!

> TestGeoUtils.testGeoRelations is buggy with irregular rectangles
> 
>
> Key: LUCENE-6908
> URL: https://issues.apache.org/jira/browse/LUCENE-6908
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Nicholas Knize
> Attachments: LUCENE-6908.patch, LUCENE-6908.patch, LUCENE-6908.patch, 
> LUCENE-6908.patch
>
>
> The {{.testGeoRelations}} method doesn't exactly test the behavior of 
> GeoPoint*Query, as it's using the BKD split technique (instead of quad-cell 
> division) to divide the space on each pass. For "large" distance queries this 
> can create a lot of irregular rectangles, producing large radial distortion 
> error when using the cartesian approximation methods provided by 
> {{GeoUtils}}. This issue improves the accuracy of the GeoUtils cartesian 
> approximation methods on irregular rectangles without having to cut over to 
> an expensive oblate geometry approach.






RE: Lucene/Solr git mirror will soon turn off

2015-12-04 Thread Dyer, James
I know Infra has tried a number of things to resolve this, to no avail.  But 
did we try "git-svn --revision=" to only mirror "post-LUCENE-3930" (ivy, 
r1307099)?  Or, if that's not lean enough for the git-svn mirror to work, then 
cut off when 4.x was branched, or whenever.  The hope would be to give git users 
enough of the past that it would be useful for new development, while we also 
retain the status quo with svn (which is the best path for a 26-day 
timeframe).

James Dyer
Ingram Content Group


-Original Message-
From: Michael McCandless [mailto:luc...@mikemccandless.com] 
Sent: Friday, December 04, 2015 2:58 PM
To: Lucene/Solr dev
Cc: infrastruct...@apache.org
Subject: Lucene/Solr git mirror will soon turn off

Hello devs,

The infra team has notified us (Lucene/Solr) that in 26 days our
git-svn mirror will be turned off, because running it consumes too
many system resources, affecting other projects, apparently because of
a memory leak in git-svn.

Does anyone know of a link to this git-svn issue?  Is it a known
issue?  If there's something simple we can do (remove old jars from
our svn history, remove old branches), maybe we can sidestep the issue
and infra will allow it to keep running?

Or maybe someone in the Lucene/Solr dev community with prior
experience with git-svn could volunteer to play with it to see if
there's a viable solution, maybe with command-line options e.g. to
only mirror specific branches (trunk, 5.x)?

Or maybe it's time for us to switch to git, but there are problems
there too, e.g. we are currently missing large parts of our svn
history from the mirror now and it's not clear whether that would be
fixed if we switched:
https://issues.apache.org/jira/browse/INFRA-10828  Also, because we
used to add JAR files to svn, the "git clone" would likely take
several GBs unless we remove those JARs from our history.

Or if anyone has any other ideas, we should explore them, because
otherwise in 26 days there will be no more updates to the git mirror
of Lucene and Solr sources...

Thanks,

Mike McCandless

http://blog.mikemccandless.com




[jira] [Created] (SOLR-8373) KerberosPlugin: Using multiple nodes on same machine leads clients to fetch TGT for every request

2015-12-04 Thread Ishan Chattopadhyaya (JIRA)
Ishan Chattopadhyaya created SOLR-8373:
--

 Summary: KerberosPlugin: Using multiple nodes on same machine 
leads clients to fetch TGT for every request
 Key: SOLR-8373
 URL: https://issues.apache.org/jira/browse/SOLR-8373
 Project: Solr
  Issue Type: Bug
Reporter: Ishan Chattopadhyaya
Priority: Critical


Kerberized Solr nodes accept negotiate/SPNEGO/Kerberos requests and process 
them. Each node also passes back to the client a cookie called "hadoop.auth" (which is 
currently unused, but will eventually be used for delegation tokens). 

If two or more nodes are on the same machine, they all send out cookies 
which have the same domain (hostname) and same path, but different cookie 
values.

Upon receipt at the client, if a cookie is rejected (which in this case it will 
be), the client compulsorily gets a *new* TGT from the KDC instead of 
reading the same ticket from the ticket cache. This causes heavy traffic 
at the KDC, plus intermittent "Request is a replay" errors (which indicate a race 
condition at the KDC while handing out the TGT for the same principal).
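The cookie collision described above can be reproduced with nothing but the JDK's own cookie store (the host name and token values below are made up; only the cookie name "hadoop.auth" comes from the report):

```java
import java.net.CookieManager;
import java.net.HttpCookie;
import java.net.URI;
import java.util.List;

public class CookieClash {
    public static void main(String[] args) throws Exception {
        CookieManager mgr = new CookieManager();
        URI host = new URI("http://solrhost.example.com/solr");

        // Two Solr nodes on one machine hand out the same "hadoop.auth"
        // cookie name with the same (implicit) domain and path but
        // different values.
        HttpCookie fromNode1 = new HttpCookie("hadoop.auth", "token-node1");
        HttpCookie fromNode2 = new HttpCookie("hadoop.auth", "token-node2");

        mgr.getCookieStore().add(host, fromNode1);
        mgr.getCookieStore().add(host, fromNode2);

        // HttpCookie equality is (name, domain, path), so the second add
        // replaces the first: only one of the two tokens survives, and a
        // node presented with the other node's token will reject it.
        List<HttpCookie> stored = mgr.getCookieStore().get(host);
        System.out.println(stored.size() + " cookie(s), value = " + stored.get(0).getValue());
    }
}
```

In a real client each rejected cookie then triggers a fresh trip to the KDC, which is the traffic spike described above.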







[jira] [Commented] (LUCENE-6837) Add N-best output capability to JapaneseTokenizer

2015-12-04 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15042258#comment-15042258
 ] 

Michael McCandless commented on LUCENE-6837:


[~cm] are you planning to backport this for 5.5?

> Add N-best output capability to JapaneseTokenizer
> -
>
> Key: LUCENE-6837
> URL: https://issues.apache.org/jira/browse/LUCENE-6837
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 5.3
>Reporter: KONNO, Hiroharu
>Assignee: Christian Moen
>Priority: Minor
> Attachments: LUCENE-6837.patch, LUCENE-6837.patch, LUCENE-6837.patch, 
> LUCENE-6837.patch, LUCENE-6837.patch
>
>
> Japanese morphological analyzers often generate mis-segmented tokens. N-best 
> output reduces the impact of mis-segmentation on search results. N-best output 
> is more meaningful than character N-grams, and it increases the hit count too.
> If you use N-best output, you can get decompounded tokens (ex: 
> "シニアソフトウェアエンジニア" => {"シニア", "シニアソフトウェアエンジニア", "ソフトウェア", "エンジニア"}) and 
> overlapping tokens (ex: "数学部長谷川" => {"数学", "部", "部長", "長谷川", "谷川"}), 
> depending on the dictionary and N-best parameter settings.






Re: Lucene/Solr git mirror will soon turn off

2015-12-04 Thread Dawid Weiss
Oh, nevermind -- I think I know why:

License
GNU Library or Lesser General Public License version 2.0 (LGPLv2)

D.

On Fri, Dec 4, 2015 at 10:33 PM, Dawid Weiss  wrote:
> It'd be cool to actually reintegrate ancient CVS history as well (I
> think not all of it was moved to SVN).
>
> https://sourceforge.net/projects/lucene/
>
> D.
>
> On Fri, Dec 4, 2015 at 10:30 PM, Upayavira  wrote:
>> Even if we moved to git and did an svn rm on
>> https://svn.apache.org/repos/asf/lucene/dev, the entire history of Lucene
>> would remain in the ASF Subversion repository. Nothing we can do to prevent
>> that!!
>>
>> Upayavira
>>
>> On Fri, Dec 4, 2015, at 09:26 PM, Gus Heck wrote:
>>
>> If we moved to git, would a read-only svn for older versions still exist? If
>> so, there's no reason to keep any jars at all in git.
>>
>> On Dec 4, 2015 4:22 PM, "Robert Muir"  wrote:
>>
>> On Fri, Dec 4, 2015 at 4:14 PM, Dawid Weiss  wrote:
 [...] several GBs unless we remove those JARs from our history.
>>>
>>> 1) History is important, don't dump it.
>>
>> I don't think jar files are 'history', and it was a mistake that we had so
>> many in source control before we cleaned that up. It is much better
>> without them.
>>
>> this bloats the repository, makes clone slow for someone new who just
>> wants to check it out to work on it, etc.
>>
>> I wouldn't be surprised if it contributes to the system resources
>> issue at hand: which impacts *real history*
>>




Re: Lucene/Solr git mirror will soon turn off

2015-12-04 Thread Mark Miller
Many old builds will also have problems even with a git checkout. If you
actually wanted to try and build them it would be much more sane to work
from the SVN history I'd hope we can retain.

Mark

On Fri, Dec 4, 2015 at 4:55 PM Robert Muir  wrote:

> On Fri, Dec 4, 2015 at 4:25 PM, Dawid Weiss  wrote:
> >> I don't think jar files are 'history' and it was a mistake we had so
> >> many in source control before we cleaned that up. it is much better
> >> without them.
> >
> > Depends how you look at it. If your goal is to be able to actually
> > build ancient versions then dropping those JARs is going to be a real
> > pain. I think they should stay. Like I said, git is smart enough to
> > omit objects that aren't referenced from the cloned branch. The
> > conversion from SVN would have to be smart, but it's all doable.
>
> I mentioned this same issue the last thread where we discussed that, I
> do recommend to try to actually compile these old versions.
>
> As an experiment, I checked out the release tag for 4.2
> (http://svn.apache.org/repos/asf/lucene/dev/tags/lucene_solr_4_2_0)
> and ran 'ant compile'
>
> BUILD FAILED
> /home/rmuir/lucene_solr_4_2_0/build.xml:107: The following error
> occurred while executing this line:
> /home/rmuir/lucene_solr_4_2_0/lucene/common-build.xml:656: The
> following error occurred while executing this line:
> /home/rmuir/lucene_solr_4_2_0/lucene/common-build.xml:479: The
> following error occurred while executing this line:
> /home/rmuir/lucene_solr_4_2_0/lucene/common-build.xml:1578: Class not
> found: javac1.8
>
> That release was only 2 years ago, and it's not the only problem you
> will hit. Besides build issues and stuff, I know at least Solr had a
> wildcard import, conflicting with the newly introduced
> java.util.Base64 that will prevent its compile. And I feel like there
> have been numerous sneaky generics issues that only Uwe seems to
> understand.
>
> Being able to build the old versions would require a good effort just
> to figure out what build tools / compiler versions you need to do it
> for the different timeframes, and git hashes aren't great if you want
> to document that or try to make some fancy bisection tool.
>
--
- Mark
about.me/markrmiller


Re: Lucene/Solr git mirror will soon turn off

2015-12-04 Thread Upayavira
In the original report, the Infrastructure team said that throwing
memory at it did not solve the problem. And I believe they threw *a lot*
of memory at it.

There may well be other options - just needs someone to dive in and
look!

Upayavira

On Fri, Dec 4, 2015, at 11:10 PM, Alexandre Rafalovitch wrote:
> Maybe a silly question, but has anybody actually looked into
> git-svn itself? E.g. talking to the git-svn team with our example to help
> them troubleshoot the link, or running a test sync under a profiler.
> Also, it is running into OOM, but how big is the system doing the sync?
> If the issue is upgrading the server from 8 GB of memory to 16 GB, this
> might be an easier/cheaper course than moving the whole infrastructure
> around. I am sure Lucidworks or Elastic could probably sponsor a
> couple hundred bucks for a memory upgrade if that turned out to be the
> real problem. :-)
> 
> Reading JIRA, I get a feeling that this problem with git-svn is mostly
> treated as a black box. It feels like there might be other options.
> 
> Regards,
>Alex.
> 
> On 4 December 2015 at 15:57, Michael McCandless
>  wrote:
> > The infra team has notified us (Lucene/Solr) that in 26 days our
> > git-svn mirror will be turned off, because running it consumes too
> > many system resources, affecting other projects, apparently because of
> > a memory leak in git-svn.
> 



[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_66) - Build # 5445 - Failure!

2015-12-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5445/
Java: 32bit/jdk1.8.0_66 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.handler.TestReplicationHandler.doTestStressReplication

Error Message:
[index.20151204232218770, index.20151204232219448, index.properties, 
replication.properties] expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: [index.20151204232218770, index.20151204232219448, 
index.properties, replication.properties] expected:<1> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([505AB844FDB108A0:8BF1B882F8996113]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:820)
at 
org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:787)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (SOLR-8374) Issue with _text_ field in schema file

2015-12-04 Thread Romit Singhai (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Romit Singhai updated SOLR-8374:

Summary: Issue with _text_ field in schema file  (was: Issue with _text_ 
type in schema file)

> Issue with _text_ field in schema file
> --
>
> Key: SOLR-8374
> URL: https://issues.apache.org/jira/browse/SOLR-8374
> Project: Solr
>  Issue Type: Bug
>  Components: Hadoop Integration
>Affects Versions: 5.2.1
>Reporter: Romit Singhai
>Priority: Critical
>  Labels: patch
>
> In the data_driven_schema_configs, the warning says that the _text_ field can be 
> removed if not needed. However, the Hadoop indexer fails to index data because the 
> ping command could not find the collection required for indexing.
> The ping command for the collection needs to be fixed (making _text_ optional), as 
> _text_ adds significantly to the index size even when it is not needed.






Re: Lucene/Solr git mirror will soon turn off

2015-12-04 Thread Upayavira
As I said earlier - our history is inside the ASF SVN repo. The only way
our history would be lost would be if the whole repo was deleted, which
I suspect won't happen for a while. So even if we imported a snapshot
over to Git, our full SVN history is immutably stored in SVN (even if we
did svn rm on the whole tree).

Upayavira


On Fri, Dec 4, 2015, at 10:16 PM, Mark Miller wrote:
> Many old builds will also have problems even with a git checkout. If
> you actually wanted to try and build them it would be much more sane
> to work from the SVN history I'd hope we can retain.
>
> Mark
>
> On Fri, Dec 4, 2015 at 4:55 PM Robert Muir  wrote:
>> On Fri, Dec 4, 2015 at 4:25 PM, Dawid Weiss  wrote:
>>>> I don't think jar files are 'history' and it was a mistake we had so
>>>> many in source control before we cleaned that up. it is much better
>>>> without them.
>>>
>>> Depends how you look at it. If your goal is to be able to actually
>>> build ancient versions then dropping those JARs is going to be a real
>>> pain. I think they should stay. Like I said, git is smart enough to
>>> omit objects that aren't referenced from the cloned branch. The
>>> conversion from SVN would have to be smart, but it's all doable.
>>
>> I mentioned this same issue the last thread where we discussed that, I
>> do recommend to try to actually compile these old versions.
>>
>> As an experiment, I checked out the release tag for 4.2
>> (http://svn.apache.org/repos/asf/lucene/dev/tags/lucene_solr_4_2_0)
>> and ran 'ant compile'
>>
>> BUILD FAILED
>> /home/rmuir/lucene_solr_4_2_0/build.xml:107: The following error
>> occurred while executing this line:
>> /home/rmuir/lucene_solr_4_2_0/lucene/common-build.xml:656: The
>> following error occurred while executing this line:
>> /home/rmuir/lucene_solr_4_2_0/lucene/common-build.xml:479: The
>> following error occurred while executing this line:
>> /home/rmuir/lucene_solr_4_2_0/lucene/common-build.xml:1578: Class not
>> found: javac1.8
>>
>> That release was only 2 years ago, and it's not the only problem you
>> will hit. Besides build issues and stuff, I know at least Solr had a
>> wildcard import, conflicting with the newly introduced
>> java.util.Base64 that will prevent its compile. And I feel like there
>> have been numerous sneaky generics issues that only Uwe seems to
>> understand.
>>
>> Being able to build the old versions would require a good effort just
>> to figure out what build tools / compiler versions you need to do it
>> for the different timeframes, and git hashes aren't great if you want
>> to document that or try to make some fancy bisection tool.
>
> --
> - Mark
> about.me/markrmiller


[jira] [Commented] (SOLR-7495) Unexpected docvalues type NUMERIC when grouping by a int facet

2015-12-04 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15042220#comment-15042220
 ] 

David Smiley commented on SOLR-7495:


Ok; you could use two fields then, one for sorting purposes, one for grouping 
purposes.  This is a typical pattern -- indexing a field different ways for 
different purposes.
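A minimal sketch of the two-field pattern in schema.xml (field and type names here are illustrative, not from the reporter's schema, which is garbled above):

```xml
<!-- Hypothetical schema.xml fragment: index the same value two ways.
     The numeric copy serves sorting; the string copy serves grouping,
     so each field's docvalues type matches what the operation expects. -->
<field name="year"       type="tint"   indexed="true" stored="true"  docValues="true"/>
<field name="year_group" type="string" indexed="true" stored="false" docValues="true"/>
<copyField source="year" dest="year_group"/>
```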

> Unexpected docvalues type NUMERIC when grouping by a int facet
> --
>
> Key: SOLR-7495
> URL: https://issues.apache.org/jira/browse/SOLR-7495
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.0, 5.1, 5.2, 5.3
>Reporter: Fabio Batista da Silva
> Attachments: SOLR-7495.patch
>
>
> Hey All,
> After upgrading from Solr 4.10 to 5.1 with SolrCloud,
> I'm getting an IllegalStateException when I try to facet on an int field.
> IllegalStateException: unexpected docvalues type NUMERIC for field 'year' 
> (expected=SORTED). Use UninvertingReader or index with docvalues.
> schema.xml
> {code}
> 
> 
> 
> 
> 
> 
>  multiValued="false" required="true"/>
>  multiValued="false" required="true"/>
> 
> 
>  stored="true"/>
> 
> 
> 
>  />
>  sortMissingLast="true"/>
>  positionIncrementGap="0"/>
>  positionIncrementGap="0"/>
>  positionIncrementGap="0"/>
>  precisionStep="0" positionIncrementGap="0"/>
>  positionIncrementGap="0"/>
>  positionIncrementGap="100">
> 
> 
>  words="stopwords.txt" />
> 
>  maxGramSize="15"/>
> 
> 
> 
>  words="stopwords.txt" />
>  synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
> 
> 
> 
>  positionIncrementGap="100">
> 
> 
>  words="stopwords.txt" />
> 
>  maxGramSize="15"/>
> 
> 
> 
>  words="stopwords.txt" />
>  synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
> 
> 
> 
>  class="solr.SpatialRecursivePrefixTreeFieldType" geo="true" 
> distErrPct="0.025" maxDistErr="0.09" units="degrees" />
> 
> id
> name
> 
> 
> {code}
> query :
> {code}
> http://solr.dev:8983/solr/my_collection/select?wt=json=id=index_type:foobar=true=year_make_model=true=true=year
> {code}
> Exception :
> {code}
> null:org.apache.solr.common.SolrException: Exception during facet.field: year
> at org.apache.solr.request.SimpleFacets$3.call(SimpleFacets.java:627)
> at org.apache.solr.request.SimpleFacets$3.call(SimpleFacets.java:612)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at org.apache.solr.request.SimpleFacets$2.execute(SimpleFacets.java:566)
> at 
> org.apache.solr.request.SimpleFacets.getFacetFieldCounts(SimpleFacets.java:637)
> at 
> org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:280)
> at 
> org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:106)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:222)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1984)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:829)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:446)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
> at 
> 

Re: A 5.4 release?

2015-12-04 Thread Upayavira


On Fri, Dec 4, 2015, at 08:59 PM, Michael McCandless wrote:
> On Fri, Dec 4, 2015 at 2:29 PM, Upayavira  wrote:
> >
> > On Fri, Dec 4, 2015, at 06:10 PM, Michael McCandless wrote:
> >>
> >> On Fri, Dec 4, 2015 at 11:34 AM, Upayavira  wrote:
> >>
> >> > As a first time Release Manager
> >>
> >> Thanks Upayavira!
> >>
> >> But please, please, please take advantage of your newness to this, to
> >> edit https://wiki.apache.org/lucene-java/ReleaseTodo when things are
> >> confusing/missing/etc.!
> >>
> >> Being new to something is unfortunately rare and we all quickly become
> >> "release blind" after doing a couple releases.
> >
> > Okay - will do.
> >
> > 1. Make sure your key is up on the pgp.mit.edu before running the build.
> > 2. Make sure you are using ant 1.8, not ant 1.9.
> > 3. To be found out shortly... :-)
> 
> Thanks Upayavira: your newness is paying off already!  Just be sure to
> edit the wiki accordingly :)

I certainly will. I'll add some context-setting at the top too, as it
does dive into the weeds pretty much from the get-go.




[jira] [Commented] (LUCENE-6920) Simplify callable function checks in Expression module

2015-12-04 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15042522#comment-15042522
 ] 

Uwe Schindler commented on LUCENE-6920:
---

Thanks for the hint. There is another getClassLoader and 
getClassLoader.getParent() call currently in the SPI classloader. I will check this 
out tomorrow and open a separate issue to protect it with doPrivileged(). But 
this one is uncritical, as it only tries to get the classloader for cases where the 
context class loaders differ.

I may also add a test using LTC#runWithRestrictedPermissions() to ensure that 
this really needs no privileges.

> Simplify callable function checks in Expression module
> --
>
> Key: LUCENE-6920
> URL: https://issues.apache.org/jira/browse/LUCENE-6920
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/expressions
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: Trunk, 5.5
>
> Attachments: LUCENE-6920.patch
>
>
> The expressions module allows specifying custom functions. It does some 
> checks to ensure that the compiled Expression works correctly and does not 
> produce linkage errors. It also checks that parameters and the return type are 
> doubles.
> There are two problems with the current approach:
> - the check gets the classloader of the method's declaring class. This fails if 
> a security manager forbids access to bootstrap classes (e.g., java.lang.Math)
> - the code only checks if the method or declaring class is public, but not if it 
> is really reachable. This may not be the case in Java 9 (different module 
> without exports, ...)
> This issue will use MethodHandles to do the accessibility checks (it uses 
> MethodHandles.publicLookup() to resolve the given reflected method). If that 
> fails, our compiled code cannot access it. If the module system prevents access, 
> this is also checked.
> To fix the issue with classloaders, it uses a trick: it calls Class.forName() 
> with the classloader we use to compile our expression. If that does not 
> return the same class as the declared method's, it also fails compilation. This 
> prevents NoClassDefFoundError on executing the expression.
> All tests pass.
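The check described in the issue can be sketched roughly as follows (class and method names here are illustrative, not the actual patch code):

```java
import java.lang.invoke.MethodHandles;
import java.lang.reflect.Method;

public class AccessCheckSketch {
    // Resolve the reflected method through the public lookup; if that
    // throws, compiled bytecode could not invoke it either. Then verify
    // that the compiling classloader sees the very same declaring class,
    // which rules out a later NoClassDefFoundError at execution time.
    static boolean isCallable(Method m, ClassLoader compileLoader) {
        try {
            MethodHandles.publicLookup().unreflect(m);
            Class<?> seen = Class.forName(m.getDeclaringClass().getName(), false, compileLoader);
            return seen == m.getDeclaringClass();
        } catch (ReflectiveOperationException e) {
            return false;
        }
    }

    public static void main(String[] args) throws Exception {
        // java.lang.Math.abs(double) is public and on the bootstrap
        // classpath, the case the issue mentions.
        Method abs = Math.class.getMethod("abs", double.class);
        System.out.println(isCallable(abs, AccessCheckSketch.class.getClassLoader()));
    }
}
```

The point of routing through `MethodHandles.publicLookup()` rather than `Class#getClassLoader()` is that it needs no classloader permission, which is what makes the check work under a security manager.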






[jira] [Commented] (SOLR-8374) Issue with _text_ field in schema file

2015-12-04 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15042452#comment-15042452
 ] 

Erick Erickson commented on SOLR-8374:
--

How is the ping request generated? This is probably the result of the "df" field 
in the configs, which is referenced _only_ when there is no field qualifier in 
the incoming query. So I believe that if whoever generates the ping query issues 
a fielded query, say id:*, rather than just a bare term, this isn't a problem.

Pulling _text_ out of the df would affect all sorts of other behavior so I'm 
not sure we want to do that. Perhaps define an explicit ping request handler 
instead?
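A hedged sketch of such a handler in solrconfig.xml — the query string here is illustrative, not from the configs in question; the point is that the ping query carries its own field qualifier, so "df" is never consulted:

```xml
<!-- Illustrative explicit ping handler: the invariant query is fielded,
     so it never falls back to the "df" default field. -->
<requestHandler name="/admin/ping" class="solr.PingRequestHandler">
  <lst name="invariants">
    <str name="q">id:*</str>
  </lst>
</requestHandler>
```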

> Issue with _text_ field in schema file
> --
>
> Key: SOLR-8374
> URL: https://issues.apache.org/jira/browse/SOLR-8374
> Project: Solr
>  Issue Type: Bug
>  Components: Hadoop Integration
>Affects Versions: 5.2.1
>Reporter: Romit Singhai
>Priority: Critical
>  Labels: patch
>
> In the data_driven_schema_configs, the warning says that the _text_ field can 
> be removed if not needed. However, the Hadoop indexer fails to index data, as 
> the ping command could not find the collection required for indexing.
> The ping command for the collection needs to be fixed (making _text_ optional), 
> as _text_ adds significantly to the index size even when not needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6920) Simplify callable function checks in Expression module

2015-12-04 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-6920:
--
Attachment: LUCENE-6920.patch

Patch. All tests pass.

> Simplify callable function checks in Expression module
> --
>
> Key: LUCENE-6920
> URL: https://issues.apache.org/jira/browse/LUCENE-6920
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/expressions
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: Trunk, 5.5
>
> Attachments: LUCENE-6920.patch
>
>
> The expressions module allows specifying custom functions. It does some 
> checks to ensure that the compiled Expression works correctly and does not 
> produce linkage errors. It also checks that parameters and return type are 
> doubles.
> There are two problems with the current approach:
> - the check gets the classloader of the method's declaring class. This fails 
> if a security manager forbids access to bootstrap classes (e.g., java.lang.Math)
> - the code only checks whether the method or declaring class is public, but 
> not whether it is really reachable. This may not be the case in Java 9 (a 
> different module without exports, ...)
> This issue will use MethodHandles to do the accessibility checks (it uses 
> MethodHandles.publicLookup() to resolve the given reflected method). If that 
> fails, our compiled code cannot access it. If the module system prevents 
> access, this is also checked.
> To fix the issue with classloaders, it uses a trick: it calls Class.forName() 
> with the classloader we use to compile our expression. If that does not 
> return the same class as the reflected method's declaring class, compilation 
> also fails. This prevents a NoClassDefFoundError when executing the expression.
> All tests pass.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6920) Simplify callable function checks in Expression module

2015-12-04 Thread Uwe Schindler (JIRA)
Uwe Schindler created LUCENE-6920:
-

 Summary: Simplify callable function checks in Expression module
 Key: LUCENE-6920
 URL: https://issues.apache.org/jira/browse/LUCENE-6920
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/expressions
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: Trunk, 5.5


The expressions module allows specifying custom functions. It does some checks 
to ensure that the compiled Expression works correctly and does not produce 
linkage errors. It also checks that parameters and return type are doubles.

There are two problems with the current approach:
- the check gets the classloader of the method's declaring class. This fails if a 
security manager forbids access to bootstrap classes (e.g., java.lang.Math)
- the code only checks whether the method or declaring class is public, but not 
whether it is really reachable. This may not be the case in Java 9 (a different 
module without exports, ...)

This issue will use MethodHandles to do the accessibility checks (it uses 
MethodHandles.publicLookup() to resolve the given reflected method). If that 
fails, our compiled code cannot access it. If the module system prevents access, 
this is also checked.
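A rough sketch of that accessibility check (illustrative only; the class and method names below are not from the patch): resolving a reflected method through MethodHandles.publicLookup() fails for anything the compiled expression bytecode could not reach either.

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.reflect.Method;

public class PublicLookupCheck {
    // Sketch of the check: publicLookup() only resolves methods that are
    // public and reachable, so unreflect() throwing IllegalAccessException
    // means generated bytecode could not call the method either.
    static MethodHandle requireAccessible(Method m) {
        try {
            return MethodHandles.publicLookup().unreflect(m);
        } catch (IllegalAccessException e) {
            throw new IllegalArgumentException("not callable from expressions: " + m, e);
        }
    }

    public static void main(String[] args) throws Throwable {
        Method sqrt = Math.class.getMethod("sqrt", double.class);
        MethodHandle h = requireAccessible(sqrt);
        System.out.println((double) h.invokeExact(9.0)); // prints 3.0
    }
}
```

Because the resolved MethodHandle is what the compiled code would invoke, a successful lookup doubles as proof that the call will link at runtime.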

To fix the issue with classloaders, it uses a trick: it calls Class.forName() 
with the classloader we use to compile our expression. If that does not return 
the same class as the reflected method's declaring class, compilation also 
fails. This prevents a NoClassDefFoundError when executing the expression.
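The classloader consistency trick could look roughly like this (a hedged sketch; the names are illustrative, not the patch's code):

```java
import java.lang.reflect.Method;

public class ClassLoaderCheck {
    // Sketch of the trick: resolve the declaring class's name through the
    // classloader that compiles the expression (without initializing it) and
    // require the result to be the identical Class object. A mismatch means
    // the compiled expression would later hit NoClassDefFoundError.
    static void requireSameClass(Method m, ClassLoader expressionLoader)
            throws ClassNotFoundException {
        Class<?> resolved =
                Class.forName(m.getDeclaringClass().getName(), false, expressionLoader);
        if (resolved != m.getDeclaringClass()) {
            throw new IllegalArgumentException(
                    "class not visible to the expression classloader: " + resolved);
        }
    }

    public static void main(String[] args) throws Exception {
        Method sqrt = Math.class.getMethod("sqrt", double.class);
        // java.lang.Math resolves to the same bootstrap class here, so this passes.
        requireSameClass(sqrt, ClassLoaderCheck.class.getClassLoader());
        System.out.println("ok");
    }
}
```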

All tests pass.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6920) Simplify callable function checks in Expression module

2015-12-04 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-6920:
--
Attachment: LUCENE-6920.patch

New patch with permission removed. Solr never had this permission.

> Simplify callable function checks in Expression module
> --
>
> Key: LUCENE-6920
> URL: https://issues.apache.org/jira/browse/LUCENE-6920
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/expressions
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: Trunk, 5.5
>
> Attachments: LUCENE-6920.patch, LUCENE-6920.patch
>
>
> The expressions module allows specifying custom functions. It does some 
> checks to ensure that the compiled Expression works correctly and does not 
> produce linkage errors. It also checks that parameters and return type are 
> doubles.
> There are two problems with the current approach:
> - the check gets the classloader of the method's declaring class. This fails 
> if a security manager forbids access to bootstrap classes (e.g., java.lang.Math)
> - the code only checks whether the method or declaring class is public, but 
> not whether it is really reachable. This may not be the case in Java 9 (a 
> different module without exports, ...)
> This issue will use MethodHandles to do the accessibility checks (it uses 
> MethodHandles.publicLookup() to resolve the given reflected method). If that 
> fails, our compiled code cannot access it. If the module system prevents 
> access, this is also checked.
> To fix the issue with classloaders, it uses a trick: it calls Class.forName() 
> with the classloader we use to compile our expression. If that does not 
> return the same class as the reflected method's declaring class, compilation 
> also fails. This prevents a NoClassDefFoundError when executing the expression.
> All tests pass.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-6920) Simplify callable function checks in Expression module

2015-12-04 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15042528#comment-15042528
 ] 

Uwe Schindler edited comment on LUCENE-6920 at 12/5/15 1:14 AM:


New patch with the permission removed. Solr never had this permission.
When backporting I will of course also check Java 7, but I don't think there 
will be problems.


was (Author: thetaphi):
New patch with permission removed. Solr never had this permission.

> Simplify callable function checks in Expression module
> --
>
> Key: LUCENE-6920
> URL: https://issues.apache.org/jira/browse/LUCENE-6920
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/expressions
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: Trunk, 5.5
>
> Attachments: LUCENE-6920.patch, LUCENE-6920.patch
>
>
> The expressions module allows specifying custom functions. It does some 
> checks to ensure that the compiled Expression works correctly and does not 
> produce linkage errors. It also checks that parameters and return type are 
> doubles.
> There are two problems with the current approach:
> - the check gets the classloader of the method's declaring class. This fails 
> if a security manager forbids access to bootstrap classes (e.g., java.lang.Math)
> - the code only checks whether the method or declaring class is public, but 
> not whether it is really reachable. This may not be the case in Java 9 (a 
> different module without exports, ...)
> This issue will use MethodHandles to do the accessibility checks (it uses 
> MethodHandles.publicLookup() to resolve the given reflected method). If that 
> fails, our compiled code cannot access it. If the module system prevents 
> access, this is also checked.
> To fix the issue with classloaders, it uses a trick: it calls Class.forName() 
> with the classloader we use to compile our expression. If that does not 
> return the same class as the reflected method's declaring class, compilation 
> also fails. This prevents a NoClassDefFoundError when executing the expression.
> All tests pass.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: Lucene/Solr git mirror will soon turn off

2015-12-04 Thread Uwe Schindler
Hi,

This looks like a good idea to me. Maybe we should keep just a limited amount of 
history and a few branches in Git/GitHub, so people can work and create pull 
requests. Nobody wants to create a pull request against a very old branch or a 
revision from years ago.

Maybe Infra can mirror only the last 2 years of trunk and branch_5x?

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de

> -Original Message-
> From: Dyer, James [mailto:james.d...@ingramcontent.com]
> Sent: Friday, December 04, 2015 10:48 PM
> To: dev@lucene.apache.org
> Cc: infrastruct...@apache.org
> Subject: RE: Lucene/Solr git mirror will soon turn off
> 
> I know Infra has tried a number of things to resolve this, to no avail.  But 
> did
> we try "git-svn --revision=" to only mirror "post-LUCENE-3930" (ivy,
> r1307099)?  Or if that's not lean enough for the git-svn mirror to work, then
> cut off when 4.x was branched or whenever.  The hope would be to give git
> users enough of the past that it would be useful for new development but
> then also we can retain the status quo with svn (which is the best path for a
> 26-day timeframe).
> 
> James Dyer
> Ingram Content Group
> 
> 
> -Original Message-
> From: Michael McCandless [mailto:luc...@mikemccandless.com]
> Sent: Friday, December 04, 2015 2:58 PM
> To: Lucene/Solr dev
> Cc: infrastruct...@apache.org
> Subject: Lucene/Solr git mirror will soon turn off
> 
> Hello devs,
> 
> The infra team has notified us (Lucene/Solr) that in 26 days our
> git-svn mirror will be turned off, because running it consumes too
> many system resources, affecting other projects, apparently because of
> a memory leak in git-svn.
> 
> Does anyone know of a link to this git-svn issue?  Is it a known
> issue?  If there's something simple we can do (remove old jars from
> our svn history, remove old branches), maybe we can sidestep the issue
> and infra will allow it to keep running?
> 
> Or maybe someone in the Lucene/Solr dev community with prior
> experience with git-svn could volunteer to play with it to see if
> there's a viable solution, maybe with command-line options e.g. to
> only mirror specific branches (trunk, 5.x)?
> 
> Or maybe it's time for us to switch to git, but there are problems
> there too, e.g. we are currently missing large parts of our svn
> history from the mirror now and it's not clear whether that would be
> fixed if we switched:
> https://issues.apache.org/jira/browse/INFRA-10828  Also, because we
> used to add JAR files to svn, the "git clone" would likely take
> several GBs unless we remove those JARs from our history.
> 
> Or if anyone has any other ideas, we should explore them, because
> otherwise in 26 days there will be no more updates to the git mirror
> of Lucene and Solr sources...
> 
> Thanks,
> 
> Mike McCandless
> 
> http://blog.mikemccandless.com
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr git mirror will soon turn off

2015-12-04 Thread Doug Turnbull
The only downside is that GitHub is a convenient way to run blame, etc. It's
very convenient for sleuthing through code. (If only their search weren't
abysmal in terms of relevancy, but I digress.)

Is the more systemic problem large binaries checked in in the past? Can we
do any surgery on svn or git to remove them? IIRC this is one reason we
avoided changing from svn to git to begin with. If removing some jars from
an old version of Lucene fixes it, perhaps this is a better long-term
solution. I suppose the issue is having someone with the right svn/git
skills and the time to pull this off.

Doug

On Friday, December 4, 2015, Uwe Schindler  wrote:

> Hi,
>
> This looks like a good idea to me. Maybe we just have a limited amount of
> history and branches in Git/Github, so people can work and create pull
> requests. Nobody wants to create pull request on a very old branch or
> against a revision years ago.
>
> Maybe Infra can mirror only the last 2 years of trunk and branch_5x?
>
> Uwe
>
> -
> Uwe Schindler
> H.-H.-Meier-Allee 63, D-28213 Bremen
> http://www.thetaphi.de
> eMail: u...@thetaphi.de 
>
> > -Original Message-
> > From: Dyer, James [mailto:james.d...@ingramcontent.com ]
> > Sent: Friday, December 04, 2015 10:48 PM
> > To: dev@lucene.apache.org 
> > Cc: infrastruct...@apache.org 
> > Subject: RE: Lucene/Solr git mirror will soon turn off
> >
> > I know Infra has tried a number of things to resolve this, to no avail.
> But did
> > we try "git-svn --revision=" to only mirror "post-LUCENE-3930" (ivy,
> > r1307099)?  Or if that's not lean enough for the git-svn mirror to work,
> then
> > cut off when 4.x was branched or whenever.  The hope would be to give git
> > users enough of the past that it would be useful for new development but
> > then also we can retain the status quo with svn (which is the best path
> for a
> > 26-day timeframe).
> >
> > James Dyer
> > Ingram Content Group
> >
> >
> > -Original Message-
> > From: Michael McCandless [mailto:luc...@mikemccandless.com
> ]
> > Sent: Friday, December 04, 2015 2:58 PM
> > To: Lucene/Solr dev
> > Cc: infrastruct...@apache.org 
> > Subject: Lucene/Solr git mirror will soon turn off
> >
> > Hello devs,
> >
> > The infra team has notified us (Lucene/Solr) that in 26 days our
> > git-svn mirror will be turned off, because running it consumes too
> > many system resources, affecting other projects, apparently because of
> > a memory leak in git-svn.
> >
> > Does anyone know of a link to this git-svn issue?  Is it a known
> > issue?  If there's something simple we can do (remove old jars from
> > our svn history, remove old branches), maybe we can sidestep the issue
> > and infra will allow it to keep running?
> >
> > Or maybe someone in the Lucene/Solr dev community with prior
> > experience with git-svn could volunteer to play with it to see if
> > there's a viable solution, maybe with command-line options e.g. to
> > only mirror specific branches (trunk, 5.x)?
> >
> > Or maybe it's time for us to switch to git, but there are problems
> > there too, e.g. we are currently missing large parts of our svn
> > history from the mirror now and it's not clear whether that would be
> > fixed if we switched:
> > https://issues.apache.org/jira/browse/INFRA-10828  Also, because we
> > used to add JAR files to svn, the "git clone" would likely take
> > several GBs unless we remove those JARs from our history.
> >
> > Or if anyone has any other ideas, we should explore them, because
> > otherwise in 26 days there will be no more updates to the git mirror
> > of Lucene and Solr sources...
> >
> > Thanks,
> >
> > Mike McCandless
> >
> > http://blog.mikemccandless.com
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org 
> > For additional commands, e-mail: dev-h...@lucene.apache.org
> 
> >
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org 
> > For additional commands, e-mail: dev-h...@lucene.apache.org
> 
>
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org 
> For additional commands, e-mail: dev-h...@lucene.apache.org 
>
>

-- 
*Doug Turnbull **| *Search Relevance Consultant | OpenSource Connections
, LLC | 240.476.9983
Author: Relevant Search 
This e-mail and all contents, including attachments, is considered to be
Company Confidential unless explicitly stated otherwise, regardless
of whether attachments are marked as such.


[jira] [Commented] (LUCENE-6920) Simplify callable function checks in Expression module

2015-12-04 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15042515#comment-15042515
 ] 

Robert Muir commented on LUCENE-6920:
-

+1, looks good. 

We may also try removing the hack for it in tests.policy...

{code}
  // expressions TestCustomFunctions (only on older java8?)
  permission java.lang.RuntimePermission "getClassLoader";
{code}

The reason for the comment there: the last time I looked at this, there were 
some differences between Java versions that confused me. So if we are worried 
about that, we could also just fix it for trunk only and avoid any Java 7 
problems.

> Simplify callable function checks in Expression module
> --
>
> Key: LUCENE-6920
> URL: https://issues.apache.org/jira/browse/LUCENE-6920
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/expressions
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: Trunk, 5.5
>
> Attachments: LUCENE-6920.patch
>
>
> The expressions module allows specifying custom functions. It does some 
> checks to ensure that the compiled Expression works correctly and does not 
> produce linkage errors. It also checks that parameters and return type are 
> doubles.
> There are two problems with the current approach:
> - the check gets the classloader of the method's declaring class. This fails 
> if a security manager forbids access to bootstrap classes (e.g., java.lang.Math)
> - the code only checks whether the method or declaring class is public, but 
> not whether it is really reachable. This may not be the case in Java 9 (a 
> different module without exports, ...)
> This issue will use MethodHandles to do the accessibility checks (it uses 
> MethodHandles.publicLookup() to resolve the given reflected method). If that 
> fails, our compiled code cannot access it. If the module system prevents 
> access, this is also checked.
> To fix the issue with classloaders, it uses a trick: it calls Class.forName() 
> with the classloader we use to compile our expression. If that does not 
> return the same class as the reflected method's declaring class, compilation 
> also fails. This prevents a NoClassDefFoundError when executing the expression.
> All tests pass.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8375) ReplicaAssigner rejects valid positions

2015-12-04 Thread Kelvin Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kelvin Tan updated SOLR-8375:
-
Attachment: patch.txt

Patch attached against trunk.

> ReplicaAssigner rejects valid positions
> ---
>
> Key: SOLR-8375
> URL: https://issues.apache.org/jira/browse/SOLR-8375
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Kelvin Tan
>Priority: Minor
> Attachments: patch.txt
>
>
> ReplicaAssigner rejects any position for which a rule does not return 
> NODE_CAN_BE_ASSIGNED.
> However, if the rule's shard does not apply to the position's shard, the rule 
> returns NOT_APPLICABLE. This is not taken into account, and thus valid 
> positions are being rejected at the moment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8375) ReplicaAssigner rejects valid positions

2015-12-04 Thread Kelvin Tan (JIRA)
Kelvin Tan created SOLR-8375:


 Summary: ReplicaAssigner rejects valid positions
 Key: SOLR-8375
 URL: https://issues.apache.org/jira/browse/SOLR-8375
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 5.3
Reporter: Kelvin Tan
Priority: Minor


ReplicaAssigner rejects any position for which a rule does not return 
NODE_CAN_BE_ASSIGNED.

However, if the rule's shard does not apply to the position's shard, the rule 
returns NOT_APPLICABLE. This is not taken into account, and thus valid 
positions are being rejected at the moment.
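A minimal sketch of the intended behavior (the enum values below mirror the description above, not the exact ReplicaAssigner/Rule code): a position should only be rejected by an applicable rule that fails it, never by a NOT_APPLICABLE result.

```java
import java.util.Arrays;
import java.util.List;

public class RuleMatchDemo {
    // Illustrative stand-ins for the rule outcomes the report describes.
    enum MatchStatus { NODE_CAN_BE_ASSIGNED, CANNOT_ASSIGN, NOT_APPLICABLE }

    // Buggy behavior: anything other than NODE_CAN_BE_ASSIGNED rejects the position.
    static boolean buggyAllows(List<MatchStatus> results) {
        return results.stream().allMatch(s -> s == MatchStatus.NODE_CAN_BE_ASSIGNED);
    }

    // Fixed behavior: only an applicable failing rule rejects; NOT_APPLICABLE is skipped.
    static boolean fixedAllows(List<MatchStatus> results) {
        return results.stream().noneMatch(s -> s == MatchStatus.CANNOT_ASSIGN);
    }

    public static void main(String[] args) {
        List<MatchStatus> results =
                Arrays.asList(MatchStatus.NODE_CAN_BE_ASSIGNED, MatchStatus.NOT_APPLICABLE);
        // The buggy check rejects this valid position; the fixed check accepts it.
        System.out.println(buggyAllows(results) + " " + fixedAllows(results));
    }
}
```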






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_66) - Build # 15115 - Failure!

2015-12-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15115/
Java: 64bit/jdk1.8.0_66 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication

Error Message:
[/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_1DF73E1857448FE6-001/solr-instance-011/./collection1/data,
 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_1DF73E1857448FE6-001/solr-instance-011/./collection1/data/index.20151205052102699,
 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_1DF73E1857448FE6-001/solr-instance-011/./collection1/data/index.20151205052102827]
 expected:<2> but was:<3>

Stack Trace:
java.lang.AssertionError: 
[/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_1DF73E1857448FE6-001/solr-instance-011/./collection1/data,
 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_1DF73E1857448FE6-001/solr-instance-011/./collection1/data/index.20151205052102699,
 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_1DF73E1857448FE6-001/solr-instance-011/./collection1/data/index.20151205052102827]
 expected:<2> but was:<3>
at 
__randomizedtesting.SeedInfo.seed([1DF73E1857448FE6:EA84D04091AC2000]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:815)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication(TestReplicationHandler.java:1245)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)

Re: A 5.4 release?

2015-12-04 Thread Anshum Gupta
This is good! :)

Just so you know, make sure that your key is up on id.apache.org too!
I struggled with that for almost a day during my first release (I believe I
added that to the wiki too).

On Sat, Dec 5, 2015 at 12:59 AM, Upayavira  wrote:

>
>
> On Fri, Dec 4, 2015, at 06:10 PM, Michael McCandless wrote:
> > On Fri, Dec 4, 2015 at 11:34 AM, Upayavira  wrote:
> >
> > > As a first time Release Manager
> >
> > Thanks Upayavira!
> >
> > But please, please, please take advantage of your newness to this, to
> > edit https://wiki.apache.org/lucene-java/ReleaseTodo when things are
> > confusing/missing/etc.!
> >
> > Being new to something is unfortunately rare and we all quickly become
> > "release blind" after doing a couple releases.
>
> Okay - will do.
>
> 1. Make sure your key is up on pgp.mit.edu before running the build.
> 2. Make sure you are using ant 1.8, not ant 1.9.
> 3. To be found out shortly... :-)
>
>
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


-- 
Anshum Gupta


[jira] [Created] (SOLR-8376) Jetty 9.2.14.v20151106 upgrade from 9.2.11

2015-12-04 Thread Bill Bell (JIRA)
Bill Bell created SOLR-8376:
---

 Summary: Jetty 9.2.14.v20151106 upgrade from 9.2.11
 Key: SOLR-8376
 URL: https://issues.apache.org/jira/browse/SOLR-8376
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.3.1
Reporter: Bill Bell
 Fix For: 5.4


Let's upgrade Solr 5.3.1 to Jetty 9.2.14.v20151106, which was released recently.

Release notes:

jetty-9.2.14.v20151106 - 06 November 2015
 + 428474 Expose batch mode in the Jetty WebSocket API
 + 471055 Restore legacy/experimental WebSocket extensions (deflate-frame)
 + 472082 isOpen returns true on CLOSING Connection
 + 474068 Update WebSocket Extension for permessage-deflate draft-22
 + 474319 Reintroduce blocking connect().
 + 474321 Allow synchronous address resolution.
 + 474453 Tiny buffers (under 7 bytes) fail to compress in permessage-deflate
 + 474454 Backport permessage-deflate from Jetty 9.3.x to 9.2.x
 + 474936 WebSocketSessions are not always cleaned out from openSessions
 + 476023 Incorrect trimming of WebSocket close reason
 + 476049 When using WebSocket Session.close() there should be no status code
   or reason sent
 + 477385 Problem in MANIFEST.MF with version 9.2.10 / 9.2.13.
 + 477817 Fixed memory leak in QueuedThreadPool
 + 481006 SSL requests intermittently fail with EOFException when SSL
   renegotiation is disallowed.
 + 481236 Make ShutdownMonitor java security manager friendly
 + 481437 Port ConnectHandler connect and context functionality from Jetty 8.

jetty-9.2.13.v20150730 - 30 July 2015
 + 472859 ConcatServlet may expose protected resources.
 + 473006 Encode addPath in URLResource
 + 473243 Delay resource close for async default content
 + 473266 Better handling of MultiException
 + 473322 GatherWrite limit handling
 + 473624 ProxyServlet.Transparent / TransparentDelegate add trailing slash
   before query when using prefix.
 + 473832 SslConnection flips back buffers on handshake exception

jetty-9.2.12.v20150709 - 09 July 2015
 + 469414 Proxied redirects expose upstream server name.
 + 469936 Remove usages of SpinLock.
 + 470184 Send the proxy-to-server request more lazily.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8359) Restrict child classes from using parent logger's state

2015-12-04 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15042647#comment-15042647
 ] 

Mike Drob commented on SOLR-8359:
-

+1 LGTM

> Restrict child classes from using parent logger's state
> ---
>
> Key: SOLR-8359
> URL: https://issues.apache.org/jira/browse/SOLR-8359
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Mike Drob
> Fix For: Trunk
>
> Attachments: SOLR-8359-nonfinal-values.patch, SOLR-8359.patch, 
> SOLR-8359.patch
>
>
> In SOLR-8330 we split up a lot of loggers. However, there are a few classes 
> that still use their parent's logging state and configuration indirectly.
> {{HdfsUpdateLog}} and {{HdfsTransactionLog}} both use their parent class's 
> cached read of {{boolean debug = log.isDebugEnabled()}}, when they should 
> check their own loggers instead.
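The anti-pattern and its fix can be sketched as follows (illustrative only: this uses java.util.logging for a self-contained example, whereas Solr uses SLF4J, and the class names are simplified):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

class UpdateLog {
    // Anti-pattern: the parent caches the answer from ITS OWN logger once,
    // and subclasses silently inherit that cached value.
    static final Logger log = Logger.getLogger(UpdateLog.class.getName());
    static final boolean debug = log.isLoggable(Level.FINE);
}

class HdfsUpdateLog extends UpdateLog {
    // Fix: each class declares its own logger and queries it directly, so
    // enabling debug logging for HdfsUpdateLog alone actually takes effect.
    private static final Logger log = Logger.getLogger(HdfsUpdateLog.class.getName());

    boolean debugEnabled() {
        return log.isLoggable(Level.FINE);
    }
}

public class LoggerScopeDemo {
    public static void main(String[] args) {
        // Enable FINE only for the subclass's logger: the cached parent flag
        // stays stale (false) while the direct check sees the new level (true).
        Logger.getLogger(HdfsUpdateLog.class.getName()).setLevel(Level.FINE);
        System.out.println(UpdateLog.debug + " " + new HdfsUpdateLog().debugEnabled());
    }
}
```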



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8260) Use NIO2 APIs in core discovery

2015-12-04 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15042651#comment-15042651
 ] 

David Smiley commented on SOLR-8260:


bq. will update.

Looking forward to that still ;-)

> Use NIO2 APIs in core discovery
> ---
>
> Key: SOLR-8260
> URL: https://issues.apache.org/jira/browse/SOLR-8260
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Fix For: 5.4
>
> Attachments: SOLR-8260.patch
>
>
> CorePropertiesLocator currently does all its file system interaction using 
> java.io.File and friends, which have all sorts of drawbacks with regard to 
> error handling and reporting.  We've been on java 7 for a while now, so we 
> should use the nio2 Path APIs instead.
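The error-reporting difference the issue alludes to can be shown with a small sketch. This is illustrative only (not the actual CorePropertiesLocator code, and the path name is made up): java.io.File's listFiles() returns null with no explanation, while the NIO2 equivalent throws a descriptive IOException such as NoSuchFileException or AccessDeniedException.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class Nio2DiscoveryDemo {
    public static void main(String[] args) {
        Path missing = Paths.get("no-such-core-root");

        // Old style: a null return hides whether the path is missing,
        // unreadable, or not a directory.
        java.io.File[] files = missing.toFile().listFiles();
        System.out.println("listFiles() -> " + files);  // prints "null"

        // NIO2 style: the failure carries a concrete exception and the path.
        try (DirectoryStream<Path> dir = Files.newDirectoryStream(missing)) {
            for (Path p : dir) {
                System.out.println(p);
            }
        } catch (IOException e) {
            System.out.println("NIO2 reports: " + e.getClass().getSimpleName());
        }
    }
}
```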






[jira] [Created] (SOLR-8377) Unnecessary for loop in ReplicaAssigner.tryAllPermutations()

2015-12-04 Thread Kelvin Tan (JIRA)
Kelvin Tan created SOLR-8377:


 Summary: Unnecessary for loop in 
ReplicaAssigner.tryAllPermutations()
 Key: SOLR-8377
 URL: https://issues.apache.org/jira/browse/SOLR-8377
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 5.3
Reporter: Kelvin Tan
Priority: Minor


Unused for loop in ReplicaAssigner.tryAllPermutations().






[jira] [Updated] (SOLR-8377) Unnecessary for loop in ReplicaAssigner.tryAllPermutations()

2015-12-04 Thread Kelvin Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kelvin Tan updated SOLR-8377:
-
Attachment: 1.patch

Patch removing the loop in question.

> Unnecessary for loop in ReplicaAssigner.tryAllPermutations()
> 
>
> Key: SOLR-8377
> URL: https://issues.apache.org/jira/browse/SOLR-8377
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Kelvin Tan
>Priority: Minor
> Attachments: 1.patch
>
>
> Unused for loop in ReplicaAssigner.tryAllPermutations().






[jira] [Commented] (SOLR-8359) Restrict child classes from using parent logger's state

2015-12-04 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15042671#comment-15042671
 ] 

Anshum Gupta commented on SOLR-8359:


Thanks Jason and Mike. I'll take a look at this.

> Restrict child classes from using parent logger's state
> ---
>
> Key: SOLR-8359
> URL: https://issues.apache.org/jira/browse/SOLR-8359
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Mike Drob
> Fix For: Trunk
>
> Attachments: SOLR-8359-nonfinal-values.patch, SOLR-8359.patch, 
> SOLR-8359.patch
>
>
> In SOLR-8330 we split up a lot of loggers. However, there are a few classes 
> that still use their parent's logging state and configuration indirectly.
> {{HdfsUpdateLog}} and {{HdfsTransactionLog}} both use their parent class's 
> cached read of {{boolean debug = log.isDebugEnabled()}} when they should 
> check their own loggers instead.






[jira] [Commented] (LUCENE-6919) Change the Scorer API to expose an iterator instead of extending DocIdSetIterator

2015-12-04 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15042672#comment-15042672
 ] 

David Smiley commented on LUCENE-6919:
--

+1 because it seems to perform better.  And I could see it simplifying some 
scorer implementations that no longer need to delegate various methods to a 
DISI, like ValueSourceScorer.

I think it'll be interesting to re-assess after you post the real patch -- to 
see whether it made any other code more painful.  I suppose both the proposed 
Scorer.iterator() and TwoPhaseIterator.approximation() return a live stateful 
reference; i.e. it's positioned and not from the beginning.  The docs for both 
methods should state that to make it clear.  And I suspect it may be useful to 
define a convenience method Scorer.docID() as iterator().docID(), since I 
*think* it's called a lot; but I may be wrong on that.  Your call.

> Change the Scorer API to expose an iterator instead of extending 
> DocIdSetIterator
> -
>
> Key: LUCENE-6919
> URL: https://issues.apache.org/jira/browse/LUCENE-6919
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-6919.patch
>
>
> I was working on trying to address the performance regression on LUCENE-6815 
> but this is hard to do without introducing specialization of 
> DisjunctionScorer which I'd like to avoid at all costs.
> I think the performance regression would be easy to address without 
> specialization if Scorers were changed to return an iterator instead of 
> extending DocIdSetIterator. So conceptually the API would move from
> {code}
> class Scorer extends DocIdSetIterator {
> }
> {code}
> to
> {code}
> class Scorer {
>   DocIdSetIterator iterator();
> }
> {code}
> This would help me because then if none of the sub clauses support two-phase 
> iteration, DisjunctionScorer could directly return the approximation as an 
> iterator instead of having to check if twoPhase == null at every iteration.
> Such an approach could also help remove some method calls. For instance 
> TermScorer.nextDoc calls PostingsEnum.nextDoc but with this change 
> TermScorer.iterator() could return the PostingsEnum and TermScorer would not 
> even appear in stack traces when scoring. I hacked a patch to see how much 
> that would help and luceneutil seems to like the change:
> {noformat}
> TaskQPS baseline  StdDev   QPS patch  StdDev  
>   Pct diff
>   Fuzzy1   88.54 (15.7%)   86.73 (16.6%)   
> -2.0% ( -29% -   35%)
>   AndHighLow  698.98  (4.1%)  691.11  (5.1%)   
> -1.1% (  -9% -8%)
>   Fuzzy2   26.47 (11.2%)   26.28 (10.3%)   
> -0.7% ( -19% -   23%)
>  MedSpanNear  141.03  (3.3%)  140.51  (3.2%)   
> -0.4% (  -6% -6%)
>   HighPhrase   60.66  (2.6%)   60.48  (3.3%)   
> -0.3% (  -5% -5%)
>  LowSpanNear   29.25  (2.4%)   29.21  (2.1%)   
> -0.1% (  -4% -4%)
>MedPhrase   28.32  (1.9%)   28.28  (2.0%)   
> -0.1% (  -3% -3%)
>LowPhrase   17.31  (2.1%)   17.29  (2.6%)   
> -0.1% (  -4% -4%)
> HighSloppyPhrase   10.93  (6.0%)   10.92  (6.0%)   
> -0.1% ( -11% -   12%)
>  MedSloppyPhrase   72.21  (2.2%)   72.27  (1.8%)
> 0.1% (  -3% -4%)
>  Respell   57.35  (3.2%)   57.41  (3.4%)
> 0.1% (  -6% -6%)
> HighSpanNear   26.71  (3.0%)   26.75  (2.5%)
> 0.1% (  -5% -5%)
> OrNotHighLow  803.46  (3.4%)  807.03  (4.2%)
> 0.4% (  -6% -8%)
>  LowSloppyPhrase   88.02  (3.4%)   88.77  (2.5%)
> 0.8% (  -4% -7%)
> OrNotHighMed  200.45  (2.7%)  203.83  (2.5%)
> 1.7% (  -3% -7%)
>   OrHighHigh   38.98  (7.9%)   40.30  (6.6%)
> 3.4% ( -10% -   19%)
> HighTerm   92.53  (5.3%)   95.94  (5.8%)
> 3.7% (  -7% -   15%)
>OrHighMed   53.80  (7.7%)   55.79  (6.6%)
> 3.7% (  -9% -   19%)
>   AndHighMed  266.69  (1.7%)  277.15  (2.5%)
> 3.9% (   0% -8%)
>  Prefix3   44.68  (5.4%)   46.60  (7.0%)
> 4.3% (  -7% -   17%)
>  MedTerm  261.52  (4.9%)  273.52  (5.4%)
> 4.6% (  -5% -   15%)
> Wildcard   42.39  (6.1%)   44.35  (7.8%)
> 4.6% (  -8% -   19%)
> 

Re: Lucene/Solr git mirror will soon turn off

2015-12-04 Thread david.w.smi...@gmail.com
I agree with Rob on this — delete the ‘jar’s from git history, for all the
reasons Rob said.  If someone wants to attempt to actually *build* an old
release, and thus needs the jars, then they are welcome to use ASF SVN
archives for that purpose instead, and even then apparently it will be a
challenge based on what I’ve read today.

Anyway, maybe this will or maybe this won't even solve the git-svn OOM
problem by itself?  It’s worth a shot to find out as a trial run; no?
Maybe we could ask infra to try as an experiment.  If it doesn’t solve the
problem then we needn’t belabor this decision at this time — it can be
resumed at a future git transitional discussion, which is not the subject
matter of the current crisis.

bq. I know you won't accept rational arguments. :)

Dawid, please, let's not provoke each other with that kind of talk.  The
smiley face doesn’t make it okay.

~ David

On Fri, Dec 4, 2015 at 4:26 PM Dawid Weiss  wrote:

> > I don't think jar files are 'history' and it was a mistake we had so
> > many in source control before we cleaned that up. it is much better
> > without them.
>
> Depends how you look at it. If your goal is to be able to actually
> build ancient versions then dropping those JARs is going to be a real
> pain. I think they should stay. Like I said, git is smart enough to
> omit objects that aren't referenced from the cloned branch. The
> conversion from SVN would have to be smart, but it's all doable.
>
> > this bloats the repository, makes clone slow for someone new who just
> > wants to check it out to work on it, etc.
>
> No, not really. There is a dozen ways to do it without cloning the
> full repo (provide a patch with --depth 1, clone a selective branch,
> etc.). We've had that discussion before. I know you won't accept
> rational arguments. :)
>
> D.
>
> --
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[jira] [Resolved] (SOLR-8374) Issue with _text_ field in schema file

2015-12-04 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker resolved SOLR-8374.
-
Resolution: Duplicate

Hi Romit,

This looks like a duplicate of SOLR-7108, which was fixed in Solr 5.3.

> Issue with _text_ field in schema file
> --
>
> Key: SOLR-8374
> URL: https://issues.apache.org/jira/browse/SOLR-8374
> Project: Solr
>  Issue Type: Bug
>  Components: Hadoop Integration
>Affects Versions: 5.2.1
>Reporter: Romit Singhai
>Priority: Critical
>  Labels: patch
>
> In the data_driven_schema_configs, the warning says that the _text_ field can 
> be removed if not needed. The Hadoop indexer fails to index data, as the ping 
> command could not find the collection required for indexing.
> The ping command for the collection needs to be fixed (making _text_ optional), 
> as _text_ adds significantly to index size even if not needed.






[GitHub] lucene-solr pull request: SOLR-6271: ConjunctionSolrSpellChecker w...

2015-12-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/lucene-solr/pull/135


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




Re: A 5.4 release?

2015-12-04 Thread Michael McCandless
On Fri, Dec 4, 2015 at 11:34 AM, Upayavira  wrote:

> As a first time Release Manager

Thanks Upayavira!

But please, please, please take advantage of your newness to this, to
edit https://wiki.apache.org/lucene-java/ReleaseTodo when things are
confusing/missing/etc.!

Being new to something is unfortunately rare and we all quickly become
"release blind" after doing a couple releases.

Mike McCandless

http://blog.mikemccandless.com




[jira] [Updated] (SOLR-8371) Try and prevent too many recovery requests from stacking up and clean up some faulty logic.

2015-12-04 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8371:
--
Attachment: SOLR-8371.patch

> Try and prevent too many recovery requests from stacking up and clean up some 
> faulty logic.
> ---
>
> Key: SOLR-8371
> URL: https://issues.apache.org/jira/browse/SOLR-8371
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8371.patch, SOLR-8371.patch
>
>







[jira] [Commented] (SOLR-6271) ConjunctionSolrSpellChecker wrong check for same string distance

2015-12-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15041891#comment-15041891
 ] 

ASF GitHub Bot commented on SOLR-6271:
--

Github user asfgit closed the pull request at:

https://github.com/apache/lucene-solr/pull/135


> ConjunctionSolrSpellChecker wrong check for same string distance
> 
>
> Key: SOLR-6271
> URL: https://issues.apache.org/jira/browse/SOLR-6271
> Project: Solr
>  Issue Type: Bug
>  Components: spellchecker
>Affects Versions: 4.9
>Reporter: Igor Kostromin
>Assignee: James Dyer
> Attachments: SOLR-6271.patch, SOLR-6271.patch
>
>
> See ConjunctionSolrSpellChecker.java
> try {
>   if (stringDistance == null) {
> stringDistance = checker.getStringDistance();
>   } else if (stringDistance != checker.getStringDistance()) {
> throw new IllegalArgumentException(
> "All checkers need to use the same StringDistance.");
>   }
> } catch (UnsupportedOperationException uoe) {
>   // ignore
> }
> The line stringDistance != checker.getStringDistance() compares by 
> reference, so if you are using 2 or more spellcheckers with the same distance 
> algorithm, the exception will be thrown anyway.
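The reference-vs-value comparison bug can be reduced to a few lines. This is a hedged sketch with an invented stand-in class (not Lucene's actual StringDistance implementations): two checkers each construct their own distance object, so the instances differ by reference even though they represent the same algorithm.

```java
public class DistanceCheckDemo {
    // Stand-in for a StringDistance implementation; two instances of the
    // same class represent the same algorithm, so they compare equal.
    static class EditDistance {
        @Override public boolean equals(Object o) {
            return o instanceof EditDistance;
        }
        @Override public int hashCode() { return EditDistance.class.hashCode(); }
    }

    public static void main(String[] args) {
        EditDistance a = new EditDistance();
        EditDistance b = new EditDistance();

        // The buggy check: reference comparison rejects equivalent distances.
        System.out.println("a != b      -> " + (a != b));      // true: would throw
        // A safer check: value equality (or comparing getClass()).
        System.out.println("a.equals(b) -> " + a.equals(b));   // true: accepted
    }
}
```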






[jira] [Updated] (LUCENE-6919) Change the Scorer API to expose an iterator instead of extending DocIdSetIterator

2015-12-04 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-6919:
-
Attachment: LUCENE-6919.patch

Here is the (hacky) patch that I used for the benchmark.

This would be a fairly large change, so I'd like to get feedback before trying 
to actually do it. If you don't like this new API, please let me know.

> Change the Scorer API to expose an iterator instead of extending 
> DocIdSetIterator
> -
>
> Key: LUCENE-6919
> URL: https://issues.apache.org/jira/browse/LUCENE-6919
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-6919.patch
>
>
> I was working on trying to address the performance regression on LUCENE-6815 
> but this is hard to do without introducing specialization of 
> DisjunctionScorer which I'd like to avoid at all costs.
> I think the performance regression would be easy to address without 
> specialization if Scorers were changed to return an iterator instead of 
> extending DocIdSetIterator. So conceptually the API would move from
> {code}
> class Scorer extends DocIdSetIterator {
> }
> {code}
> to
> {code}
> class Scorer {
>   DocIdSetIterator iterator();
> }
> {code}
> This would help me because then if none of the sub clauses support two-phase 
> iteration, DisjunctionScorer could directly return the approximation as an 
> iterator instead of having to check if twoPhase == null at every iteration.
> Such an approach could also help remove some method calls. For instance 
> TermScorer.nextDoc calls PostingsEnum.nextDoc but with this change 
> TermScorer.iterator() could return the PostingsEnum and TermScorer would not 
> even appear in stack traces when scoring. I hacked a patch to see how much 
> that would help and luceneutil seems to like the change:
> {noformat}
> TaskQPS baseline  StdDev   QPS patch  StdDev  
>   Pct diff
>   Fuzzy1   88.54 (15.7%)   86.73 (16.6%)   
> -2.0% ( -29% -   35%)
>   AndHighLow  698.98  (4.1%)  691.11  (5.1%)   
> -1.1% (  -9% -8%)
>   Fuzzy2   26.47 (11.2%)   26.28 (10.3%)   
> -0.7% ( -19% -   23%)
>  MedSpanNear  141.03  (3.3%)  140.51  (3.2%)   
> -0.4% (  -6% -6%)
>   HighPhrase   60.66  (2.6%)   60.48  (3.3%)   
> -0.3% (  -5% -5%)
>  LowSpanNear   29.25  (2.4%)   29.21  (2.1%)   
> -0.1% (  -4% -4%)
>MedPhrase   28.32  (1.9%)   28.28  (2.0%)   
> -0.1% (  -3% -3%)
>LowPhrase   17.31  (2.1%)   17.29  (2.6%)   
> -0.1% (  -4% -4%)
> HighSloppyPhrase   10.93  (6.0%)   10.92  (6.0%)   
> -0.1% ( -11% -   12%)
>  MedSloppyPhrase   72.21  (2.2%)   72.27  (1.8%)
> 0.1% (  -3% -4%)
>  Respell   57.35  (3.2%)   57.41  (3.4%)
> 0.1% (  -6% -6%)
> HighSpanNear   26.71  (3.0%)   26.75  (2.5%)
> 0.1% (  -5% -5%)
> OrNotHighLow  803.46  (3.4%)  807.03  (4.2%)
> 0.4% (  -6% -8%)
>  LowSloppyPhrase   88.02  (3.4%)   88.77  (2.5%)
> 0.8% (  -4% -7%)
> OrNotHighMed  200.45  (2.7%)  203.83  (2.5%)
> 1.7% (  -3% -7%)
>   OrHighHigh   38.98  (7.9%)   40.30  (6.6%)
> 3.4% ( -10% -   19%)
> HighTerm   92.53  (5.3%)   95.94  (5.8%)
> 3.7% (  -7% -   15%)
>OrHighMed   53.80  (7.7%)   55.79  (6.6%)
> 3.7% (  -9% -   19%)
>   AndHighMed  266.69  (1.7%)  277.15  (2.5%)
> 3.9% (   0% -8%)
>  Prefix3   44.68  (5.4%)   46.60  (7.0%)
> 4.3% (  -7% -   17%)
>  MedTerm  261.52  (4.9%)  273.52  (5.4%)
> 4.6% (  -5% -   15%)
> Wildcard   42.39  (6.1%)   44.35  (7.8%)
> 4.6% (  -8% -   19%)
>   IntNRQ   10.46  (7.0%)   10.99  (9.5%)
> 5.0% ( -10% -   23%)
>OrNotHighHigh   67.15  (4.6%)   70.65  (4.5%)
> 5.2% (  -3% -   15%)
>OrHighNotHigh   43.07  (5.1%)   45.36  (5.4%)
> 5.3% (  -4% -   16%)
>OrHighLow   64.19  (6.4%)   67.72  (5.5%)
> 5.5% (  -6% -   18%)
>  AndHighHigh   64.17  (2.3%)   67.87  (2.1%)
> 5.8% (   1% -   10%)
>  LowTerm  

[jira] [Resolved] (LUCENE-6910) fix 2 interesting and 2 trivial issues found by "Coverity scan results of Lucene"

2015-12-04 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke resolved LUCENE-6910.
-
   Resolution: Fixed
Fix Version/s: 5.5
   Trunk

> fix 2 interesting and 2 trivial issues found by "Coverity scan results of 
> Lucene"
> -
>
> Key: LUCENE-6910
> URL: https://issues.apache.org/jira/browse/LUCENE-6910
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Fix For: Trunk, 5.5
>
> Attachments: LUCENE-6910.patch, LUCENE-6910.patch
>
>
> https://scan.coverity.com/projects/5620 mentioned on the dev mailing list 
> (http://mail-archives.apache.org/mod_mbox/lucene-dev/201507.mbox/%3ccaftwexg51-jm_6mdeoz1reagn3xgkbetoz5ou_f+melboo1...@mail.gmail.com%3e)
>  in July 2015:
> * coverity CID 119973
> * coverity CID 120040
> * coverity CID 120081
> * coverity CID 120628






[jira] [Commented] (LUCENE-6910) fix 2 interesting and 2 trivial issues found by "Coverity scan results of Lucene"

2015-12-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15041965#comment-15041965
 ] 

ASF subversion and git services commented on LUCENE-6910:
-

Commit 1718007 from [~cpoerschke] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1718007 ]

LUCENE-6910: fix 'if ... > Integer.MAX_VALUE' check in 
(Binary|Numeric)DocValuesFieldUpdates.merge 
(https://scan.coverity.com/projects/5620 CID 119973 and CID 120081) (merge in 
revision 1717993 from trunk)
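The class of bug behind this commit can be sketched as follows. This is an illustrative example only (the method names are invented, not the actual (Binary|Numeric)DocValuesFieldUpdates code): comparing an int expression against Integer.MAX_VALUE is useless, because the arithmetic overflows as int before the comparison ever runs.

```java
public class OverflowCheckDemo {
    // Broken: "a + b" wraps around as int BEFORE the comparison, so the
    // guard never fires -- an int value can never exceed Integer.MAX_VALUE.
    static boolean brokenGuard(int a, int b) {
        return a + b > Integer.MAX_VALUE;   // always false
    }

    // Fixed: do the arithmetic in long, then compare.
    static boolean fixedGuard(int a, int b) {
        return (long) a + b > Integer.MAX_VALUE;
    }

    public static void main(String[] args) {
        int big = Integer.MAX_VALUE;
        System.out.println("broken: " + brokenGuard(big, big));  // false (overflow missed)
        System.out.println("fixed:  " + fixedGuard(big, big));   // true
    }
}
```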

> fix 2 interesting and 2 trivial issues found by "Coverity scan results of 
> Lucene"
> -
>
> Key: LUCENE-6910
> URL: https://issues.apache.org/jira/browse/LUCENE-6910
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Attachments: LUCENE-6910.patch, LUCENE-6910.patch
>
>
> https://scan.coverity.com/projects/5620 mentioned on the dev mailing list 
> (http://mail-archives.apache.org/mod_mbox/lucene-dev/201507.mbox/%3ccaftwexg51-jm_6mdeoz1reagn3xgkbetoz5ou_f+melboo1...@mail.gmail.com%3e)
>  in July 2015:
> * coverity CID 119973
> * coverity CID 120040
> * coverity CID 120081
> * coverity CID 120628






[jira] [Created] (LUCENE-6919) Change the Scorer API to expose an iterator instead of extending DocIdSetIterator

2015-12-04 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-6919:


 Summary: Change the Scorer API to expose an iterator instead of 
extending DocIdSetIterator
 Key: LUCENE-6919
 URL: https://issues.apache.org/jira/browse/LUCENE-6919
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor


I was working on trying to address the performance regression on LUCENE-6815 
but this is hard to do without introducing specialization of DisjunctionScorer 
which I'd like to avoid at all costs.

I think the performance regression would be easy to address without 
specialization if Scorers were changed to return an iterator instead of 
extending DocIdSetIterator. So conceptually the API would move from

{code}
class Scorer extends DocIdSetIterator {
}
{code}

to

{code}
class Scorer {
  DocIdSetIterator iterator();
}
{code}

This would help me because then if none of the sub clauses support two-phase 
iteration, DisjunctionScorer could directly return the approximation as an 
iterator instead of having to check if twoPhase == null at every iteration.

Such an approach could also help remove some method calls. For instance 
TermScorer.nextDoc calls PostingsEnum.nextDoc but with this change 
TermScorer.iterator() could return the PostingsEnum and TermScorer would not 
even appear in stack traces when scoring. I hacked a patch to see how much that 
would help and luceneutil seems to like the change:

{noformat}
TaskQPS baseline  StdDev   QPS patch  StdDev
Pct diff
  Fuzzy1   88.54 (15.7%)   86.73 (16.6%)   
-2.0% ( -29% -   35%)
  AndHighLow  698.98  (4.1%)  691.11  (5.1%)   
-1.1% (  -9% -8%)
  Fuzzy2   26.47 (11.2%)   26.28 (10.3%)   
-0.7% ( -19% -   23%)
 MedSpanNear  141.03  (3.3%)  140.51  (3.2%)   
-0.4% (  -6% -6%)
  HighPhrase   60.66  (2.6%)   60.48  (3.3%)   
-0.3% (  -5% -5%)
 LowSpanNear   29.25  (2.4%)   29.21  (2.1%)   
-0.1% (  -4% -4%)
   MedPhrase   28.32  (1.9%)   28.28  (2.0%)   
-0.1% (  -3% -3%)
   LowPhrase   17.31  (2.1%)   17.29  (2.6%)   
-0.1% (  -4% -4%)
HighSloppyPhrase   10.93  (6.0%)   10.92  (6.0%)   
-0.1% ( -11% -   12%)
 MedSloppyPhrase   72.21  (2.2%)   72.27  (1.8%)
0.1% (  -3% -4%)
 Respell   57.35  (3.2%)   57.41  (3.4%)
0.1% (  -6% -6%)
HighSpanNear   26.71  (3.0%)   26.75  (2.5%)
0.1% (  -5% -5%)
OrNotHighLow  803.46  (3.4%)  807.03  (4.2%)
0.4% (  -6% -8%)
 LowSloppyPhrase   88.02  (3.4%)   88.77  (2.5%)
0.8% (  -4% -7%)
OrNotHighMed  200.45  (2.7%)  203.83  (2.5%)
1.7% (  -3% -7%)
  OrHighHigh   38.98  (7.9%)   40.30  (6.6%)
3.4% ( -10% -   19%)
HighTerm   92.53  (5.3%)   95.94  (5.8%)
3.7% (  -7% -   15%)
   OrHighMed   53.80  (7.7%)   55.79  (6.6%)
3.7% (  -9% -   19%)
  AndHighMed  266.69  (1.7%)  277.15  (2.5%)
3.9% (   0% -8%)
 Prefix3   44.68  (5.4%)   46.60  (7.0%)
4.3% (  -7% -   17%)
 MedTerm  261.52  (4.9%)  273.52  (5.4%)
4.6% (  -5% -   15%)
Wildcard   42.39  (6.1%)   44.35  (7.8%)
4.6% (  -8% -   19%)
  IntNRQ   10.46  (7.0%)   10.99  (9.5%)
5.0% ( -10% -   23%)
   OrNotHighHigh   67.15  (4.6%)   70.65  (4.5%)
5.2% (  -3% -   15%)
   OrHighNotHigh   43.07  (5.1%)   45.36  (5.4%)
5.3% (  -4% -   16%)
   OrHighLow   64.19  (6.4%)   67.72  (5.5%)
5.5% (  -6% -   18%)
 AndHighHigh   64.17  (2.3%)   67.87  (2.1%)
5.8% (   1% -   10%)
 LowTerm  642.94 (10.9%)  681.48  (8.5%)
6.0% ( -12% -   28%)
OrHighNotMed   12.68  (6.9%)   13.51  (6.6%)
6.5% (  -6% -   21%)
OrHighNotLow   54.69  (6.8%)   58.25  (7.0%)
6.5% (  -6% -   21%)
{noformat}
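The proposed API move can be sketched in miniature. All names below are hypothetical stand-ins, not Lucene's real classes: the point is only the shape change, where a Scorer exposes an iterator instead of being one, so a wrapping scorer can hand its underlying iterator straight to the caller and drop out of the iteration path entirely.

```java
public class ScorerIteratorSketch {
    interface DocIdSetIterator {
        int NO_MORE_DOCS = Integer.MAX_VALUE;  // sentinel stand-in
        int nextDoc();
    }

    // Proposed shape: the scorer owns an iterator rather than extending one.
    static abstract class Scorer {
        abstract DocIdSetIterator iterator();
        abstract float score();
    }

    // Toy scorer over a fixed doc list; iterator() returns the underlying
    // iterator itself, so the scorer never appears on the iteration path.
    static class ListScorer extends Scorer {
        private final int[] docs;
        private int idx = -1;
        private final DocIdSetIterator it = () -> {
            idx++;
            return idx < docs.length ? docs[idx] : DocIdSetIterator.NO_MORE_DOCS;
        };
        ListScorer(int... docs) { this.docs = docs; }
        @Override DocIdSetIterator iterator() { return it; }
        @Override float score() { return 1f; }
    }

    public static void main(String[] args) {
        Scorer s = new ListScorer(3, 7, 12);
        DocIdSetIterator it = s.iterator();
        int doc;
        while ((doc = it.nextDoc()) != DocIdSetIterator.NO_MORE_DOCS) {
            System.out.println("doc=" + doc + " score=" + s.score());
        }
    }
}
```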






[jira] [Resolved] (SOLR-6271) ConjunctionSolrSpellChecker wrong check for same string distance

2015-12-04 Thread James Dyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Dyer resolved SOLR-6271.
--
   Resolution: Fixed
Fix Version/s: 5.5

Thanks Igor & Fabiano for reporting this one.

> ConjunctionSolrSpellChecker wrong check for same string distance
> 
>
> Key: SOLR-6271
> URL: https://issues.apache.org/jira/browse/SOLR-6271
> Project: Solr
>  Issue Type: Bug
>  Components: spellchecker
>Affects Versions: 4.9
>Reporter: Igor Kostromin
>Assignee: James Dyer
> Fix For: 5.5
>
> Attachments: SOLR-6271.patch, SOLR-6271.patch
>
>
> See ConjunctionSolrSpellChecker.java
> try {
>   if (stringDistance == null) {
> stringDistance = checker.getStringDistance();
>   } else if (stringDistance != checker.getStringDistance()) {
> throw new IllegalArgumentException(
> "All checkers need to use the same StringDistance.");
>   }
> } catch (UnsupportedOperationException uoe) {
>   // ignore
> }
> The line stringDistance != checker.getStringDistance() compares by 
> reference, so if you are using 2 or more spellcheckers with the same distance 
> algorithm, the exception will be thrown anyway.





