[JENKINS] Lucene-Solr-6.x-Windows (64bit/jdk1.8.0_92) - Build # 249 - Still Failing!

2016-06-14 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/249/
Java: 64bit/jdk1.8.0_92 -XX:+UseCompressedOops -XX:+UseG1GC

1 test failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.schema.TestManagedSchemaAPI

Error Message:
ObjectTracker found 2 object(s) that were not released!!! 
[MockDirectoryWrapper, MockDirectoryWrapper]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 2 object(s) that were not 
released!!! [MockDirectoryWrapper, MockDirectoryWrapper]
at __randomizedtesting.SeedInfo.seed([85DB1BA594E5B03A]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:257)
at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 12331 lines...]
   [junit4] Suite: org.apache.solr.schema.TestManagedSchemaAPI
   [junit4]   2> Creating dataDir: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.schema.TestManagedSchemaAPI_85DB1BA594E5B03A-001\init-core-data-001
   [junit4]   2> 2925746 INFO  
(SUITE-TestManagedSchemaAPI-seed#[85DB1BA594E5B03A]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: 
@org.apache.solr.util.RandomizeSSL(reason=, value=NaN, ssl=NaN, clientAuth=NaN)
   [junit4]   2> 2925748 INFO  
(SUITE-TestManagedSchemaAPI-seed#[85DB1BA594E5B03A]-worker) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 2925748 INFO  (Thread-8763) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 2925748 INFO  (Thread-8763) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 2925848 INFO  
(SUITE-TestManagedSchemaAPI-seed#[85DB1BA594E5B03A]-worker) [] 
o.a.s.c.ZkTestServer start zk server on port:65232
   [junit4]   2> 2925848 INFO  
(SUITE-TestManagedSchemaAPI-seed#[85DB1BA594E5B03A]-worker) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 2925850 INFO  
(SUITE-TestManagedSchemaAPI-seed#[85DB1BA594E5B03A]-worker) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 2925855 INFO  (zkCallback-10165-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@74112a43 
name:ZooKeeperConnection Watcher:127.0.0.1:65232 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 2925855 INFO  
(SUITE-TestManagedSchemaAPI-seed#[85DB1BA594E5B03A]-worker) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 2925855 INFO  
(SUITE-TestManagedSchemaAPI-seed#[85DB1BA594E5B03A]-worker) [

[JENKINS] Lucene-Solr-6.1-Windows (64bit/jdk1.8.0_92) - Build # 12 - Failure!

2016-06-14 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.1-Windows/12/
Java: 64bit/jdk1.8.0_92 -XX:-UseCompressedOops -XX:+UseParallelGC

1 test failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=5331, 
name=SocketProxy-Response-53108:53748, state=RUNNABLE, 
group=TGRP-HttpPartitionTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=5331, name=SocketProxy-Response-53108:53748, 
state=RUNNABLE, group=TGRP-HttpPartitionTest]
at 
__randomizedtesting.SeedInfo.seed([C2B2D82FEC20C39:847F1258503E61C1]:0)
Caused by: java.lang.RuntimeException: java.net.SocketException: Socket is 
closed
at __randomizedtesting.SeedInfo.seed([C2B2D82FEC20C39]:0)
at 
org.apache.solr.cloud.SocketProxy$Bridge$Pump.run(SocketProxy.java:347)
Caused by: java.net.SocketException: Socket is closed
at java.net.Socket.setSoTimeout(Socket.java:1137)
at 
org.apache.solr.cloud.SocketProxy$Bridge$Pump.run(SocketProxy.java:344)




Build Log:
[...truncated 10992 lines...]
   [junit4] Suite: org.apache.solr.cloud.HttpPartitionTest
   [junit4]   2> Creating dataDir: 
C:\Users\jenkins\workspace\Lucene-Solr-6.1-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.HttpPartitionTest_C2B2D82FEC20C39-001\init-core-data-001
   [junit4]   2> 607215 INFO  
(SUITE-HttpPartitionTest-seed#[C2B2D82FEC20C39]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: 
@org.apache.solr.SolrTestCaseJ4$SuppressSSL(bugUrl=https://issues.apache.org/jira/browse/SOLR-5776)
   [junit4]   2> 607215 INFO  
(SUITE-HttpPartitionTest-seed#[C2B2D82FEC20C39]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 607219 INFO  
(TEST-HttpPartitionTest.test-seed#[C2B2D82FEC20C39]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 607220 INFO  (Thread-1678) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 607220 INFO  (Thread-1678) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 607320 INFO  
(TEST-HttpPartitionTest.test-seed#[C2B2D82FEC20C39]) [] 
o.a.s.c.ZkTestServer start zk server on port:53019
   [junit4]   2> 607320 INFO  
(TEST-HttpPartitionTest.test-seed#[C2B2D82FEC20C39]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 607321 INFO  
(TEST-HttpPartitionTest.test-seed#[C2B2D82FEC20C39]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 607326 INFO  (zkCallback-873-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@5e8a6fc9 
name:ZooKeeperConnection Watcher:127.0.0.1:53019 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 607326 INFO  
(TEST-HttpPartitionTest.test-seed#[C2B2D82FEC20C39]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 607326 INFO  
(TEST-HttpPartitionTest.test-seed#[C2B2D82FEC20C39]) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2> 607326 INFO  
(TEST-HttpPartitionTest.test-seed#[C2B2D82FEC20C39]) [] 
o.a.s.c.c.SolrZkClient makePath: /solr
   [junit4]   2> 607334 WARN  (NIOServerCxn.Factory:0.0.0.0/0.0.0.0:0) [] 
o.a.z.s.NIOServerCnxn caught end of stream exception
   [junit4]   2> EndOfStreamException: Unable to read additional data from 
client sessionid 0x1555232d3ed, likely client has closed socket
   [junit4]   2>at 
org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
   [junit4]   2>at 
org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
   [junit4]   2>at java.lang.Thread.run(Thread.java:745)
   [junit4]   2> 607337 INFO  
(TEST-HttpPartitionTest.test-seed#[C2B2D82FEC20C39]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 607338 INFO  
(TEST-HttpPartitionTest.test-seed#[C2B2D82FEC20C39]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 607340 INFO  (zkCallback-874-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@1627d7f7 
name:ZooKeeperConnection Watcher:127.0.0.1:53019/solr got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 607340 INFO  
(TEST-HttpPartitionTest.test-seed#[C2B2D82FEC20C39]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 607340 INFO  
(TEST-HttpPartitionTest.test-seed#[C2B2D82FEC20C39]) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2> 607340 INFO  
(TEST-HttpPartitionTest.test-seed#[C2B2D82FEC20C39]) [] 
o.a.s.c.c.SolrZkClient makePath: /collections/collection1
   [junit4]   2> 607347 INFO  

[jira] [Commented] (SOLR-8096) Major faceting performance regressions

2016-06-14 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15331106#comment-15331106
 ] 

David Smiley commented on SOLR-8096:


bq. A work-around could be to force UIF if you have selected FC/FCS without 
docValues.

+1.  Then it's just as before (in 4x); no?

Separately it'd be nice if debug output showed which method was chosen.
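
For context, forcing the method from the client side would look roughly like 
this with SolrJ (the collection and field names are invented, and this assumes 
the build in question accepts {{facet.method=uif}}):

{code}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.response.QueryResponse;

// illustrative request only -- not part of any patch on this issue
public class ForceUifFacetExample {
  static QueryResponse facetWithUif(SolrClient solrClient) throws Exception {
    SolrQuery q = new SolrQuery("*:*");
    q.setRows(0);                      // facet counts only
    q.setFacet(true);
    q.addFacetField("cat");            // example multi-valued field without docValues
    q.set("facet.method", "uif");      // the work-around: explicitly request UnInvertedField
    q.set("debug", "true");            // where per-method info would ideally be reported
    return solrClient.query("techproducts", q);  // "techproducts" is just an example collection
  }
}
{code}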

> Major faceting performance regressions
> --
>
> Key: SOLR-8096
> URL: https://issues.apache.org/jira/browse/SOLR-8096
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.0, 5.1, 5.2, 5.3, 6.0
>Reporter: Yonik Seeley
>Priority: Critical
> Attachments: simple_facets.diff
>
>
> Use of the highly optimized faceting that Solr had for multi-valued fields 
> over relatively static indexes was removed as part of LUCENE-5666, causing 
> severe performance regressions.
> Here are some quick benchmarks to gauge the damage, on a 5M document index, 
> with each field having between 0 and 5 values per document.  *Higher numbers 
> represent worse 5x performance*.
> Solr 5.4_dev faceting time as a percent of Solr 4.10.3 faceting time  
> ||...|| Percent of index being faceted
> ||num_unique_values|| 10% || 50% || 90% ||
> |10      | 351.17%   | 1587.08%  | 3057.28% |
> |100     | 158.10%   | 203.61%   | 1421.93% |
> |1000    | 143.78%   | 168.01%   | 1325.87% |
> |10000   | 137.98%   | 175.31%   | 1233.97% |
> |100000  | 142.98%   | 159.42%   | 1252.45% |
> |1000000 | 255.15%   | 165.17%   | 1236.75% |
> For example, a field with 1000 unique values in the whole index, faceting 
> with 5x took 143% of the 4x time, when ~10% of the docs in the index were 
> faceted.
> One user who brought the performance problem to our attention: 
> http://markmail.org/message/ekmqh4ocbkwxv3we
> "faceting is unusable slow since upgrade to 5.3.0" (from 4.10.3)
> The disabling of the UnInvertedField algorithm was previously discovered in 
> SOLR-7190, but we didn't know just how bad the problem was at that time.
> edit: removed "secret" adverb by request



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1042 - Still Failing

2016-06-14 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1042/

5 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
ObjectTracker found 4 object(s) that were not released!!! [SolrCore, 
MockDirectoryWrapper, MockDirectoryWrapper, MDCAwareThreadPoolExecutor]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 4 object(s) that were not 
released!!! [SolrCore, MockDirectoryWrapper, MockDirectoryWrapper, 
MDCAwareThreadPoolExecutor]
at __randomizedtesting.SeedInfo.seed([82EE474ACA850672]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:256)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 1) 
Thread[id=1177, name=searcherExecutor-387-thread-1, state=WAITING, 
group=TGRP-TestLazyCores] at sun.misc.Unsafe.park(Native Method)
 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.core.TestLazyCores: 
   1) Thread[id=1177, name=searcherExecutor-387-thread-1, state=WAITING, 
group=TGRP-TestLazyCores]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at 

Re: [VOTE] Release Lucene/Solr 6.1.0 RC1

2016-06-14 Thread Tommaso Teofili
+1

SUCCESS! [2:01:39.992586]
Regards,
Tommaso


On Tue, 14 Jun 2016 at 21:14, Michael McCandless <
luc...@mikemccandless.com> wrote:

> +1
>
> SUCCESS! [0:43:53.129429]
>
> I also edited the Lucene release notes a bit ...
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
> On Tue, Jun 14, 2016 at 9:48 AM, David Smiley 
> wrote:
>
>> +1 SUCCESS!  SUCCESS! [0:50:54.900220]
>>
>> On Tue, Jun 14, 2016 at 7:32 AM Martijn v Groningen <
>> martijn.v.gronin...@gmail.com> wrote:
>>
>>> +1 SUCCESS! [0:40:22.702419]
>>>
>>> On 14 June 2016 at 02:39, Steve Rowe  wrote:
>>>
 I’ve committed fixes for all three problems.

 --
 Steve
 www.lucidworks.com

 > On Jun 13, 2016, at 2:46 PM, Steve Rowe  wrote:
 >
 > Smoke tester was happy: SUCCESS! [0:23:40.900240]
 >
 > Except for the below-described minor issues: changes, docs and
 javadocs look good:
 >
 > * Broken description section links from documentation to javadocs <
 https://issues.apache.org/jira/browse/LUCENE-7338>
 > * Solr’s CHANGES.txt is missing a “Versions of Major Components”
 section.
 > * Solr’s Changes.html has a section "Upgrading from Solr any prior
 release” that is not formatted properly (the hyphens are put into a bullet
 item below)
 >
 > +0 to release.  I’ll work on the above and backport to the 6.1
 branch, in case there is another RC.
 >
 > --
 > Steve
 > www.lucidworks.com
 >
 >> On Jun 13, 2016, at 5:15 AM, Adrien Grand  wrote:
 >>
 >> Please vote for release candidate 1 for Lucene/Solr 6.1.0
 >>
 >>
 >> The artifacts can be downloaded from:
 >>
 >>
 https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.1.0-RC1-rev4726c5b2d2efa9ba160b608d46a977d0a6b83f94/
 >>
 >> You can run the smoke tester directly with this command:
 >>
 >>
 >> python3 -u dev-tools/scripts/smokeTestRelease.py \
 >>
 >>
 https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.1.0-RC1-rev4726c5b2d2efa9ba160b608d46a977d0a6b83f94/
 >> Here is my +1.
 >> SUCCESS! [0:36:57.750669]
 >


 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org


>>>
>>>
>>> --
>>> Kind regards,
>>>
>>> Martijn van Groningen
>>>
>> --
>> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
>> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
>> http://www.solrenterprisesearchserver.com
>>
>
>


[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+122) - Build # 16988 - Still Failing!

2016-06-14 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16988/
Java: 32bit/jdk-9-ea+122 -client -XX:+UseG1GC

1 test failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestSolrConfigHandlerCloud

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.handler.TestSolrConfigHandlerCloud: 1) Thread[id=15190, 
name=Thread-4413, state=TIMED_WAITING, group=TGRP-TestSolrConfigHandlerCloud]   
  at java.lang.Thread.sleep(java.base@9-ea/Native Method) at 
org.apache.solr.cloud.ZkSolrResourceLoader.openResource(ZkSolrResourceLoader.java:101)
 at 
org.apache.solr.core.SolrResourceLoader.openSchema(SolrResourceLoader.java:333) 
at 
org.apache.solr.schema.IndexSchemaFactory.create(IndexSchemaFactory.java:48)
 at 
org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:75)
 at 
org.apache.solr.core.ConfigSetService.createIndexSchema(ConfigSetService.java:107)
 at 
org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:78)   
  at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:935) 
at org.apache.solr.core.SolrCore.lambda$getConfListener$6(SolrCore.java:2509)   
  at org.apache.solr.core.SolrCore$$Lambda$212/3554339.run(Unknown Source)  
   at org.apache.solr.cloud.ZkController$4.run(ZkController.java:2405)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.handler.TestSolrConfigHandlerCloud: 
   1) Thread[id=15190, name=Thread-4413, state=TIMED_WAITING, 
group=TGRP-TestSolrConfigHandlerCloud]
at java.lang.Thread.sleep(java.base@9-ea/Native Method)
at 
org.apache.solr.cloud.ZkSolrResourceLoader.openResource(ZkSolrResourceLoader.java:101)
at 
org.apache.solr.core.SolrResourceLoader.openSchema(SolrResourceLoader.java:333)
at 
org.apache.solr.schema.IndexSchemaFactory.create(IndexSchemaFactory.java:48)
at 
org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:75)
at 
org.apache.solr.core.ConfigSetService.createIndexSchema(ConfigSetService.java:107)
at 
org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:78)
at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:935)
at 
org.apache.solr.core.SolrCore.lambda$getConfListener$6(SolrCore.java:2509)
at org.apache.solr.core.SolrCore$$Lambda$212/3554339.run(Unknown Source)
at org.apache.solr.cloud.ZkController$4.run(ZkController.java:2405)
at __randomizedtesting.SeedInfo.seed([EAFE0AECDDACE38D]:0)




Build Log:
[...truncated 12218 lines...]
   [junit4] Suite: org.apache.solr.handler.TestSolrConfigHandlerCloud
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.handler.TestSolrConfigHandlerCloud_EAFE0AECDDACE38D-001/init-core-data-001
   [junit4]   2> 2003310 INFO  
(SUITE-TestSolrConfigHandlerCloud-seed#[EAFE0AECDDACE38D]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (true) via: 
@org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN)
   [junit4]   2> 2003311 INFO  
(SUITE-TestSolrConfigHandlerCloud-seed#[EAFE0AECDDACE38D]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /n/h
   [junit4]   2> 2003312 INFO  
(TEST-TestSolrConfigHandlerCloud.test-seed#[EAFE0AECDDACE38D]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 2003313 INFO  (Thread-4207) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 2003313 INFO  (Thread-4207) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 2003413 INFO  
(TEST-TestSolrConfigHandlerCloud.test-seed#[EAFE0AECDDACE38D]) [] 
o.a.s.c.ZkTestServer start zk server on port:35582
   [junit4]   2> 2003413 INFO  
(TEST-TestSolrConfigHandlerCloud.test-seed#[EAFE0AECDDACE38D]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 2003414 INFO  
(TEST-TestSolrConfigHandlerCloud.test-seed#[EAFE0AECDDACE38D]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 2003415 INFO  (zkCallback-2422-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@89e634 name:ZooKeeperConnection 
Watcher:127.0.0.1:35582 got event WatchedEvent state:SyncConnected type:None 
path:null path:null type:None
   [junit4]   2> 2003415 INFO  
(TEST-TestSolrConfigHandlerCloud.test-seed#[EAFE0AECDDACE38D]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 2003416 INFO  
(TEST-TestSolrConfigHandlerCloud.test-seed#[EAFE0AECDDACE38D]) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2> 2003416 INFO  
(TEST-TestSolrConfigHandlerCloud.test-seed#[EAFE0AECDDACE38D]) [] 
o.a.s.c.c.SolrZkClient makePath: /solr
   

[jira] [Commented] (SOLR-5944) Support updates of numeric DocValues

2016-06-14 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15330923#comment-15330923
 ] 

Hoss Man commented on SOLR-5944:





I know Ishan has been working on improving the tests based on my last batch of 
feedback -- since then I've been reviewing the _non test_ changes in the 
latest patch.

Here are my notes about specific classes/methods as I reviewed them 
individually:


{panel:title=JettySolrRunner}
* javadocs, javadocs, javadocs
{panel}

{panel:title=XMLLoader + JavabinLoader}
* why is this param-check logic duplicated in these classes?
* why not put this in DUP (which already has access to the request params) when 
it's doing its "FROMLEADER" logic?
{panel}

{panel:title=AddUpdateCommand}
* these variables (like all variables) should have javadocs explaining what 
they are and what they mean
** people skimming a class shouldn't have to grep the code for a variable name 
to understand its purpose
* having 2 variables here seems like it might be error-prone?  what does it 
mean if {{prevVersion < 0 && isInPlaceUpdate == true}} ? or {{0 < prevVersion 
&& isInPlaceUpdate == false}} ?
** would it make more sense to use a single {{long prevVersion}} variable and 
have a {{public boolean isInPlaceUpdate()}} that simply does {{return (0 < 
prevVersion); }} ? (see the sketch below)
{panel}
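
To make that suggestion concrete, here is a rough, untested sketch (names and 
the sentinel value are only suggestions, not the actual patch):

{code}
// AddUpdateCommand -- sketch of the single-variable approach suggested above
/**
 * Version of the document that this in-place update was computed against;
 * any value <= 0 means this command is not an in-place update.
 */
public long prevVersion = -1;

/** True if this command represents an in-place (partial document) update. */
public boolean isInPlaceUpdate() {
  return (0 < prevVersion);
}
{code}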

{panel:title=TransactionLog}
* javadocs for both the new {{write}} method and the existing {{write}} method
** explain what "prevPointer" means and note in the 2 arg method what the 
effective default "prevPointer" is.
* we should really have some "int" constants for referring to the List indexes 
involved in these records, so instead of code like {{entry.get(3)}} sprinkled 
in various classes like UpdateLog and PeerSync it can be something more readable 
like {{entry.get(PREV_VERSION_IDX)}} (see the sketch below)
{panel}
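
i.e. something along these lines (the constant name and index value are only 
illustrative; they should match the actual record layout):

{code}
// TransactionLog -- illustrative constant for the update-record list layout
/** Position of the previous-version entry within a logged update record. */
public static final int PREV_VERSION_IDX = 3;

// callers in UpdateLog / PeerSync then become self-documenting, e.g.:
//   long prevVersion = (Long) entry.get(PREV_VERSION_IDX);
// instead of the magic number:
//   long prevVersion = (Long) entry.get(3);
{code}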


{panel:title=UpdateLog}
* javadocs for both the new {{LogPtr}} constructor and the existing constructor
** explain what "prevPointer" means and note in the 2 arg constructor what the 
effective default "prevPointer" is.
* {{add(AddUpdateCommand, boolean)}}
** this new code for doing lookups in {{map}}, {{prevMap}} and {{prevMap2}} 
seems weird to me (but admittedly i'm not really an expert on UpdateLog in 
general and how these maps are used)
** what primarily concerns me is what the expected behavior is if the "id" 
isn't found in any of these maps -- it looks like prevPointer defaults to "-1" 
regardless of whether this is an in-place update ... is that intentional? ... is 
it possible there are older records we will miss and need to flag that?
** ie: do we need to worry about distinguishing here between "not an in place 
update, therefore prevPointer=-1" vs "is an in place update, but we can't find 
the prevPointer" ??
** assuming this code is correct, it might be a little easier to read if it 
were refactored into something like:{code}
// nocommit: jdocs
private synchronized long getPrevPointerForUpdate(AddUpdateCommand cmd) {
  // note: sync required to ensure maps aren't changed out from under us
  if (cmd.isInPlaceUpdate) {
    BytesRef indexedId = cmd.getIndexedId();
    for (Map<BytesRef, LogPtr> currentMap : Arrays.asList(map, prevMap, prevMap2)) {
      LogPtr prevEntry = currentMap.get(indexedId);
      if (null != prevEntry) {
        return prevEntry.pointer;
      }
    }
  }
  return -1; // default when not inplace, or if we can't find a previous entry
}
{code}
* {{applyPartialUpdates}}
** it seems like this method would be a really good candidate for some direct 
unit testing?
*** ie: construct a synthetic UpdateLog, and confirm applyPartialUpdates does 
the right thing
** the sync block in this method, and how the resulting {{lookupLogs}} list is 
used subsequently, doesn't seem safe to me -- particularly the way 
{{getEntryFromTLog}} calls incref/decref on each TransactionLog as it loops 
over that list...
*** what prevents some other thread from decref'ing one of these TransactionLog 
objects (and possibly auto-closing it) in between the sync block and the incref 
in getEntryFromTLog?
 (most existing usages of TransactionLog.incref() seem to be in blocks that 
sync on the UpdateLog -- and the ones that aren't in sync blocks look sketchy 
to me as well)
*** in general i'm wondering if {{lookupLogs}} should be created outside of the 
while loop, so that there is a consistent set of "logs" for the duration of the 
method call ... what happens right now if some other thread changes 
tlog/prevMapLog/prevMapLog2 in between iterations of the while loop?
** shouldn't we make some sanity check assertions about the results of 
getEntryFromTLog? -- there's an INVALID_STATE if it's not an ADD or a list of 5 
elements, but what about actually asserting that it's either an ADD or an 
UPDATE_INPLACE? ... what about asserting the doc's uniqueKey matches?
*** (because unless i'm missing something, it's possible for 2 docs to have the 

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 648 - Failure!

2016-06-14 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/648/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

1 test failed.
FAILED:  org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testNoSsl

Error Message:
Could not load collection from ZK: first_collection

Stack Trace:
org.apache.solr.common.SolrException: Could not load collection from ZK: 
first_collection
at 
__randomizedtesting.SeedInfo.seed([368F7DBE9108C640:53759303208FA86]:0)
at 
org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:1047)
at 
org.apache.solr.common.cloud.ZkStateReader$LazyCollectionRef.get(ZkStateReader.java:610)
at 
org.apache.solr.common.cloud.ClusterState.getCollectionOrNull(ClusterState.java:211)
at 
org.apache.solr.common.cloud.ClusterState.getSlicesMap(ClusterState.java:151)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:153)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkCreateCollection(TestMiniSolrCloudClusterSSL.java:212)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterWithCollectionCreations(TestMiniSolrCloudClusterSSL.java:181)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterWithNodeReplacement(TestMiniSolrCloudClusterSSL.java:145)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testNoSsl(TestMiniSolrCloudClusterSSL.java:100)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_92) - Build # 893 - Failure!

2016-06-14 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/893/
Java: 32bit/jdk1.8.0_92 -server -XX:+UseG1GC

1 test failed.
FAILED:  org.apache.solr.schema.TestManagedSchemaAPI.test

Error Message:
Error from server at http://127.0.0.1:40083/solr/testschemaapi_shard1_replica1: 
ERROR: [doc=2] unknown field 'myNewField1'

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:40083/solr/testschemaapi_shard1_replica1: ERROR: 
[doc=2] unknown field 'myNewField1'
at 
__randomizedtesting.SeedInfo.seed([717C2E0A7AE9BC67:F92811D0D415D19F]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:697)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1109)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:998)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:934)
at 
org.apache.solr.schema.TestManagedSchemaAPI.testAddFieldAndDocument(TestManagedSchemaAPI.java:86)
at 
org.apache.solr.schema.TestManagedSchemaAPI.test(TestManagedSchemaAPI.java:55)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Resolved] (LUCENE-7318) Graduate StandardAnalyzer out of analyzers module into core

2016-06-14 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-7318.

Resolution: Fixed

> Graduate StandardAnalyzer out of analyzers module into core
> ---
>
> Key: LUCENE-7318
> URL: https://issues.apache.org/jira/browse/LUCENE-7318
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-7318.patch
>
>
> Spinoff from LUCENE-7314:
> {{StandardAnalyzer}} has progressed substantially since we broke out the 
> analyzers module ... it now follows a real Unicode standard (UAX #29 Unicode 
> Text Segmentation).  It's also much faster than it used to be, since it 
> switched to JFlex a while back.  Many bug fixes, etc.
> I think it would make a good default for most Lucene users, and we should 
> graduate it from the analyzers module into core, and make it the default for 
> {{IndexWriter}}.
> It's really quite crazy that users must go digging in the analyzers module to 
> get started with Lucene ... we don't make them dig through the codecs module 
> to find a good default codec ...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-8940) group.sort broken, can throw AIOOBE if clause length differs from sort param, or cast exception if datatypes are incompatible with sort clause types

2016-06-14 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved SOLR-8940.
--
   Resolution: Fixed
Fix Version/s: 6.0.2
   5.5.2
   5.6

> group.sort broken, can throw AIOOBE if clause length differs from sort 
> param, or cast exception if datatypes are incompatible with sort clause types
> --
>
> Key: SOLR-8940
> URL: https://issues.apache.org/jira/browse/SOLR-8940
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 5.5, 5.5.1, 6.0, 6.0.1
>Reporter: Henrik
>Assignee: Hoss Man
>Priority: Blocker
>  Labels: 5.5, arrayindexoutofbounds, exception, query, 
> regression, search
> Fix For: 5.6, 6.1, 5.5.2, master (7.0), 6.0.2
>
> Attachments: 
> 0001-SOLR-8940-Avoid-ArrayIndexOutOfBoundsException-in-To.patch, 
> SOLR-8940.patch, schema-types.xml, schema.xml, solr-query-exception.txt, 
> solrconfig.xml
>
>
> We get an ArrayIndexOutOfBoundsException when searching after upgrading to 
> solr 5.5.
> Here's the query:
> {code}
> "params":{
>   "q":"*:*",
>   "group.sort":"priceAmount asc,rnd desc",
>   "indent":"on",
>   "fl":"priceAmount,flightTripId,brand,slob,cabinType,tripDuration",
>   "group.limit":"100",
>   "fq":["searchId:e31a0c58-9056-4297-8d70-049017ba4906",
> "doctype:offer",
> "flightTripId:(DY6020421-SK2360519 OR DY6020421-SK2600519 OR 
> DY6020421-SK2620519 OR DY6020421-SK2740519 OR DY6020421-SK2900519 OR 
> DY6020421-SK2860519 OR DY6040421-SK2380519 OR DY6040421-SK2440519 OR 
> DY6040421-SK2480519 OR DY6040421-SK2520519 OR DY6040421-SK2600519 OR 
> DY6040421-SK2620519 OR DY6040421-SK2720519 OR DY6040421-SK2740519 OR 
> DY6040421-SK2800519 OR DY6040421-SK2840519 OR DY6040421-SK2820519 OR 
> DY6060421-SK2480519 OR DY6060421-SK2740519 OR DY6060421-SK2800519 OR 
> DY6060421-SK2840519 OR DY6060421-SK2900519 OR DY6060421-SK2860519 OR 
> DY6060421-SK2820519 OR DY6080421-SK2440519)",
> "maximumLegDuration:[* TO 180]",
> "departureAirportLeg1:(OSL)",
> "(arrivalAirportLeg2:(OSL) OR (* NOT arrivalAirportLeg2:*))",
> "arrivalAirportLeg1:(BGO)",
> "(departureAirportLeg2:(BGO) OR (* NOT departureAirportLeg2:*))"],
>   "group.ngroups":"true",
>   "wt":"json",
>   "group.field":"flightTripId",
>   "group":"true"}}
> {code}
> And here's the exception:
> {code}
> ERROR [20160404T104846,333] qtp315138752-3037 
> org.apache.solr.servlet.HttpSolrCall - 
> null:java.lang.ArrayIndexOutOfBoundsException: 1
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.TopGroupsResultTransformer.transformToNativeShardDoc(TopGroupsResultTransformer.java:175)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.TopGroupsResultTransformer.transformToNative(TopGroupsResultTransformer.java:137)
> at 
> org.apache.solr.search.grouping.distributed.responseprocessor.TopGroupsShardResponseProcessor.process(TopGroupsShardResponseProcessor.java:129)
> at 
> org.apache.solr.handler.component.QueryComponent.handleGroupedResponses(QueryComponent.java:750)
> at 
> org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:733)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:405)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2082)
> {code}
> The exception is thrown at the last line here 
> (TopGroupsResultTransformer.java line 175):
> {code}
>   protected ScoreDoc[] transformToNativeShardDoc(List<NamedList<Object>> 
> documents, Sort groupSort, String shard,
>  IndexSchema schema) {
> [...]
> for (NamedList<Object> document : documents) {
>   [...]
>   Object sortValuesVal = document.get("sortValues");
>   if (sortValuesVal != null) {
> sortValues = ((List) sortValuesVal).toArray();
> for (int k = 0; k < sortValues.length; k++) {
>   SchemaField field = groupSort.getSort()[k].getField() != null
>   ? schema.getFieldOrNull(groupSort.getSort()[k].getField()) : 
> null;
> {code}
> It's not obvious to me that {{sortValues.length == 
> groupSort.getSort().length}}, but I guess there's some logic behind it :)
> I have attached the schema and json result.
> The problem disappears when rolling back to 5.4.0.
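
For readers stuck on an affected version, an illustrative defensive check in 
{{transformToNativeShardDoc}} (not necessarily what the committed patch does; 
see the SOLR-8940 commits for the real fix) would be to bound the loop by both 
lengths instead of assuming they match:

{code}
// illustrative guard only -- the committed SOLR-8940 patch is authoritative
Object sortValuesVal = document.get("sortValues");
if (sortValuesVal != null) {
  Object[] sortValues = ((List) sortValuesVal).toArray();
  // never index past either array; the mismatch is what triggered the AIOOBE
  int n = Math.min(sortValues.length, groupSort.getSort().length);
  for (int k = 0; k < n; k++) {
    SchemaField field = groupSort.getSort()[k].getField() != null
        ? schema.getFieldOrNull(groupSort.getSort()[k].getField()) : null;
    // ... convert sortValues[k] using field, as in the existing code ...
  }
}
{code}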



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To 

[jira] [Commented] (LUCENE-7318) Graduate StandardAnalyzer out of analyzers module into core

2016-06-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15330809#comment-15330809
 ] 

ASF subversion and git services commented on LUCENE-7318:
-

Commit ba922148307248893bf70d02b28efdec9882f348 in lucene-solr's branch 
refs/heads/branch_6x from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ba92214 ]

LUCENE-7318: graduate StandardAnalyzer and make it the default for 
IndexWriterConfig
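
In practical terms (illustrative snippet only; the path and field name are made 
up), the change means a core-only user no longer has to pull in the analyzers 
module just to open a writer:

{code}
import java.nio.file.Paths;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class DefaultAnalyzerExample {
  public static void main(String[] args) throws Exception {
    Directory dir = FSDirectory.open(Paths.get("/tmp/example-index"));
    // after this commit the no-argument config is expected to default to
    // StandardAnalyzer, so no explicit Analyzer (and no analyzers-common jar)
    // is needed for basic indexing
    IndexWriterConfig iwc = new IndexWriterConfig();
    try (IndexWriter writer = new IndexWriter(dir, iwc)) {
      Document doc = new Document();
      doc.add(new TextField("body", "hello standard analyzer", Field.Store.NO));
      writer.addDocument(doc);
    }
  }
}
{code}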


> Graduate StandardAnalyzer out of analyzers module into core
> ---
>
> Key: LUCENE-7318
> URL: https://issues.apache.org/jira/browse/LUCENE-7318
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-7318.patch
>
>
> Spinoff from LUCENE-7314:
> {{StandardAnalyzer}} has progressed substantially since we broke out the 
> analyzers module ... it now follows a real Unicode standard (UAX #29 Unicode 
> Text Segmentation).  It's also much faster than it used to be, since it 
> switched to JFlex a while back.  Many bug fixes, etc.
> I think it would make a good default for most Lucene users, and we should 
> graduate it from the analyzers module into core, and make it the default for 
> {{IndexWriter}}.
> It's really quite crazy that users must go digging in the analyzers module to 
> get started with Lucene ... we don't make them dig through the codecs module 
> to find a good default codec ...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8940) group.sort broken, can throw AIOOBE if clause length differs from sort param, or cast exception if datatypes are incompatible with sort clause types

2016-06-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15330806#comment-15330806
 ] 

ASF subversion and git services commented on SOLR-8940:
---

Commit bdab648a4063bbdeda7877353fa25eb49871dbe9 in lucene-solr's branch 
refs/heads/branch_5_5 from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bdab648 ]

SOLR-8940: Fix group.sort option

(cherry picked from commit 18256fc2873f198e8e577c6eb0f337df1d1cda24)


> group.sort broken, can throw AIOOBE if clause length differs from sort 
> param, or cast exception if datatypes are incompatible with sort clause types
> --
>
> Key: SOLR-8940
> URL: https://issues.apache.org/jira/browse/SOLR-8940
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 5.5, 5.5.1, 6.0, 6.0.1
>Reporter: Henrik
>Assignee: Hoss Man
>Priority: Blocker
>  Labels: 5.5, arrayindexoutofbounds, exception, query, 
> regression, search
> Fix For: 6.1, master (7.0)
>
> Attachments: 
> 0001-SOLR-8940-Avoid-ArrayIndexOutOfBoundsException-in-To.patch, 
> SOLR-8940.patch, schema-types.xml, schema.xml, solr-query-exception.txt, 
> solrconfig.xml
>
>
> We get an ArrayIndexOutOfBoundsException when searching after upgrading to 
> solr 5.5.
> Here's the query:
> {code}
> "params":{
>   "q":"*:*",
>   "group.sort":"priceAmount asc,rnd desc",
>   "indent":"on",
>   "fl":"priceAmount,flightTripId,brand,slob,cabinType,tripDuration",
>   "group.limit":"100",
>   "fq":["searchId:e31a0c58-9056-4297-8d70-049017ba4906",
> "doctype:offer",
> "flightTripId:(DY6020421-SK2360519 OR DY6020421-SK2600519 OR 
> DY6020421-SK2620519 OR DY6020421-SK2740519 OR DY6020421-SK2900519 OR 
> DY6020421-SK2860519 OR DY6040421-SK2380519 OR DY6040421-SK2440519 OR 
> DY6040421-SK2480519 OR DY6040421-SK2520519 OR DY6040421-SK2600519 OR 
> DY6040421-SK2620519 OR DY6040421-SK2720519 OR DY6040421-SK2740519 OR 
> DY6040421-SK2800519 OR DY6040421-SK2840519 OR DY6040421-SK2820519 OR 
> DY6060421-SK2480519 OR DY6060421-SK2740519 OR DY6060421-SK2800519 OR 
> DY6060421-SK2840519 OR DY6060421-SK2900519 OR DY6060421-SK2860519 OR 
> DY6060421-SK2820519 OR DY6080421-SK2440519)",
> "maximumLegDuration:[* TO 180]",
> "departureAirportLeg1:(OSL)",
> "(arrivalAirportLeg2:(OSL) OR (* NOT arrivalAirportLeg2:*))",
> "arrivalAirportLeg1:(BGO)",
> "(departureAirportLeg2:(BGO) OR (* NOT departureAirportLeg2:*))"],
>   "group.ngroups":"true",
>   "wt":"json",
>   "group.field":"flightTripId",
>   "group":"true"}}
> {code}
> And here's the exception:
> {code}
> ERROR [20160404T104846,333] qtp315138752-3037 
> org.apache.solr.servlet.HttpSolrCall - 
> null:java.lang.ArrayIndexOutOfBoundsException: 1
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.TopGroupsResultTransformer.transformToNativeShardDoc(TopGroupsResultTransformer.java:175)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.TopGroupsResultTransformer.transformToNative(TopGroupsResultTransformer.java:137)
> at 
> org.apache.solr.search.grouping.distributed.responseprocessor.TopGroupsShardResponseProcessor.process(TopGroupsShardResponseProcessor.java:129)
> at 
> org.apache.solr.handler.component.QueryComponent.handleGroupedResponses(QueryComponent.java:750)
> at 
> org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:733)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:405)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2082)
> {code}
> The exception is thrown at the last line here 
> (TopGroupsResultTransformer.java line 175):
> {code}
>   protected ScoreDoc[] transformToNativeShardDoc(List<NamedList<Object>> 
> documents, Sort groupSort, String shard,
>  IndexSchema schema) {
> [...]
> for (NamedList<Object> document : documents) {
>   [...]
>   Object sortValuesVal = document.get("sortValues");
>   if (sortValuesVal != null) {
> sortValues = ((List) sortValuesVal).toArray();
> for (int k = 0; k < sortValues.length; k++) {
>   SchemaField field = groupSort.getSort()[k].getField() != null
>   ? schema.getFieldOrNull(groupSort.getSort()[k].getField()) : 
> null;
> {code}
> It's not obvious to me that {{sortValues.length == 
> groupSort.getSort().length}}, but I guess there's some logic 

[jira] [Commented] (SOLR-8940) group.sort broken, can throw AIOOBE if clause length differs from sort param, or cast exception if datatypes are incompatible with sort clause types

2016-06-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15330808#comment-15330808
 ] 

ASF subversion and git services commented on SOLR-8940:
---

Commit 240140da0fc833a80eab2130ea117ae4f21e77aa in lucene-solr's branch 
refs/heads/branch_6_0 from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=240140d ]

SOLR-8940: Fix group.sort option

(cherry picked from commit 18256fc2873f198e8e577c6eb0f337df1d1cda24)


> group.sort broken, can throw AIOOBE if clause length differs from sort 
> param, or cast exception if datatypes are incompatible with sort clause types
> --
>
> Key: SOLR-8940
> URL: https://issues.apache.org/jira/browse/SOLR-8940
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 5.5, 5.5.1, 6.0, 6.0.1
>Reporter: Henrik
>Assignee: Hoss Man
>Priority: Blocker
>  Labels: 5.5, arrayindexoutofbounds, exception, query, 
> regression, search
> Fix For: 6.1, master (7.0)
>
> Attachments: 
> 0001-SOLR-8940-Avoid-ArrayIndexOutOfBoundsException-in-To.patch, 
> SOLR-8940.patch, schema-types.xml, schema.xml, solr-query-exception.txt, 
> solrconfig.xml
>
>
> We get an ArrayIndexOutOfBoundsException when searching after upgrading to 
> solr 5.5.
> Here's the query:
> {code}
> "params":{
>   "q":"*:*",
>   "group.sort":"priceAmount asc,rnd desc",
>   "indent":"on",
>   "fl":"priceAmount,flightTripId,brand,slob,cabinType,tripDuration",
>   "group.limit":"100",
>   "fq":["searchId:e31a0c58-9056-4297-8d70-049017ba4906",
> "doctype:offer",
> "flightTripId:(DY6020421-SK2360519 OR DY6020421-SK2600519 OR 
> DY6020421-SK2620519 OR DY6020421-SK2740519 OR DY6020421-SK2900519 OR 
> DY6020421-SK2860519 OR DY6040421-SK2380519 OR DY6040421-SK2440519 OR 
> DY6040421-SK2480519 OR DY6040421-SK2520519 OR DY6040421-SK2600519 OR 
> DY6040421-SK2620519 OR DY6040421-SK2720519 OR DY6040421-SK2740519 OR 
> DY6040421-SK2800519 OR DY6040421-SK2840519 OR DY6040421-SK2820519 OR 
> DY6060421-SK2480519 OR DY6060421-SK2740519 OR DY6060421-SK2800519 OR 
> DY6060421-SK2840519 OR DY6060421-SK2900519 OR DY6060421-SK2860519 OR 
> DY6060421-SK2820519 OR DY6080421-SK2440519)",
> "maximumLegDuration:[* TO 180]",
> "departureAirportLeg1:(OSL)",
> "(arrivalAirportLeg2:(OSL) OR (* NOT arrivalAirportLeg2:*))",
> "arrivalAirportLeg1:(BGO)",
> "(departureAirportLeg2:(BGO) OR (* NOT departureAirportLeg2:*))"],
>   "group.ngroups":"true",
>   "wt":"json",
>   "group.field":"flightTripId",
>   "group":"true"}}
> {code}
> And here's the exception:
> {code}
> ERROR [20160404T104846,333] qtp315138752-3037 
> org.apache.solr.servlet.HttpSolrCall - 
> null:java.lang.ArrayIndexOutOfBoundsException: 1
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.TopGroupsResultTransformer.transformToNativeShardDoc(TopGroupsResultTransformer.java:175)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.TopGroupsResultTransformer.transformToNative(TopGroupsResultTransformer.java:137)
> at 
> org.apache.solr.search.grouping.distributed.responseprocessor.TopGroupsShardResponseProcessor.process(TopGroupsShardResponseProcessor.java:129)
> at 
> org.apache.solr.handler.component.QueryComponent.handleGroupedResponses(QueryComponent.java:750)
> at 
> org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:733)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:405)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2082)
> {code}
> The exception is thrown at the last line here 
> (TopGroupsResultTransformer.java line 175):
> {code}
>   protected ScoreDoc[] transformToNativeShardDoc(List<NamedList<Object>> 
> documents, Sort groupSort, String shard,
>  IndexSchema schema) {
> [...]
> for (NamedList<Object> document : documents) {
>   [...]
>   Object sortValuesVal = document.get("sortValues");
>   if (sortValuesVal != null) {
> sortValues = ((List) sortValuesVal).toArray();
> for (int k = 0; k < sortValues.length; k++) {
>   SchemaField field = groupSort.getSort()[k].getField() != null
>   ? schema.getFieldOrNull(groupSort.getSort()[k].getField()) : 
> null;
> {code}
> It's not obvious to me that {{sortValues.length == 
> groupSort.getSort().length}}, but I guess there's some logic 

[jira] [Commented] (SOLR-8940) group.sort broken, can throw AIOOBE if clause length differs from sort param, or cast exception if datatypes are incompatible with sort clause types

2016-06-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15330807#comment-15330807
 ] 

ASF subversion and git services commented on SOLR-8940:
---

Commit d3a9d03c261907e27c5559affbc4a6d2138add65 in lucene-solr's branch 
refs/heads/branch_5x from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d3a9d03 ]

SOLR-8940: Fix group.sort option

(cherry picked from commit 18256fc2873f198e8e577c6eb0f337df1d1cda24)


> group.sort broken, can throw AIOOBE if clause length differs from sort 
> param, or cast exception if datatypes are incompatible with sort clause types
> --
>
> Key: SOLR-8940
> URL: https://issues.apache.org/jira/browse/SOLR-8940
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 5.5, 5.5.1, 6.0, 6.0.1
>Reporter: Henrik
>Assignee: Hoss Man
>Priority: Blocker
>  Labels: 5.5, arrayindexoutofbounds, exception, query, 
> regression, search
> Fix For: 6.1, master (7.0)
>
> Attachments: 
> 0001-SOLR-8940-Avoid-ArrayIndexOutOfBoundsException-in-To.patch, 
> SOLR-8940.patch, schema-types.xml, schema.xml, solr-query-exception.txt, 
> solrconfig.xml
>
>
> We get an ArrayIndexOutOfBoundsException when searching after upgrading to 
> solr 5.5.
> Here's the query:
> {code}
> "params":{
>   "q":"*:*",
>   "group.sort":"priceAmount asc,rnd desc",
>   "indent":"on",
>   "fl":"priceAmount,flightTripId,brand,slob,cabinType,tripDuration",
>   "group.limit":"100",
>   "fq":["searchId:e31a0c58-9056-4297-8d70-049017ba4906",
> "doctype:offer",
> "flightTripId:(DY6020421-SK2360519 OR DY6020421-SK2600519 OR 
> DY6020421-SK2620519 OR DY6020421-SK2740519 OR DY6020421-SK2900519 OR 
> DY6020421-SK2860519 OR DY6040421-SK2380519 OR DY6040421-SK2440519 OR 
> DY6040421-SK2480519 OR DY6040421-SK2520519 OR DY6040421-SK2600519 OR 
> DY6040421-SK2620519 OR DY6040421-SK2720519 OR DY6040421-SK2740519 OR 
> DY6040421-SK2800519 OR DY6040421-SK2840519 OR DY6040421-SK2820519 OR 
> DY6060421-SK2480519 OR DY6060421-SK2740519 OR DY6060421-SK2800519 OR 
> DY6060421-SK2840519 OR DY6060421-SK2900519 OR DY6060421-SK2860519 OR 
> DY6060421-SK2820519 OR DY6080421-SK2440519)",
> "maximumLegDuration:[* TO 180]",
> "departureAirportLeg1:(OSL)",
> "(arrivalAirportLeg2:(OSL) OR (* NOT arrivalAirportLeg2:*))",
> "arrivalAirportLeg1:(BGO)",
> "(departureAirportLeg2:(BGO) OR (* NOT departureAirportLeg2:*))"],
>   "group.ngroups":"true",
>   "wt":"json",
>   "group.field":"flightTripId",
>   "group":"true"}}
> {code}
> And here's the exception:
> {code}
> ERROR [20160404T104846,333] qtp315138752-3037 
> org.apache.solr.servlet.HttpSolrCall - 
> null:java.lang.ArrayIndexOutOfBoundsException: 1
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.TopGroupsResultTransformer.transformToNativeShardDoc(TopGroupsResultTransformer.java:175)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.TopGroupsResultTransformer.transformToNative(TopGroupsResultTransformer.java:137)
> at 
> org.apache.solr.search.grouping.distributed.responseprocessor.TopGroupsShardResponseProcessor.process(TopGroupsShardResponseProcessor.java:129)
> at 
> org.apache.solr.handler.component.QueryComponent.handleGroupedResponses(QueryComponent.java:750)
> at 
> org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:733)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:405)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2082)
> {code}
> The exception is thrown at the last line here 
> (TopGroupsResultTransformer.java line 175):
> {code}
>   protected ScoreDoc[] transformToNativeShardDoc(List<NamedList<Object>> documents, Sort groupSort, String shard,
>                                                  IndexSchema schema) {
>     [...]
>     for (NamedList<Object> document : documents) {
>       [...]
>       Object sortValuesVal = document.get("sortValues");
>       if (sortValuesVal != null) {
>         sortValues = ((List) sortValuesVal).toArray();
>         for (int k = 0; k < sortValues.length; k++) {
>           SchemaField field = groupSort.getSort()[k].getField() != null
>               ? schema.getFieldOrNull(groupSort.getSort()[k].getField()) : null;
> {code}
> It's not obvious to me that {{sortValues.length == 
> groupSort.getSort().length}}, but I guess there's some logic 

[jira] [Reopened] (LUCENE-7318) Graduate StandardAnalyzer out of analyzers module into core

2016-06-14 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless reopened LUCENE-7318:


Woops, thanks for catching [~steve_rowe]!

> Graduate StandardAnalyzer out of analyzers module into core
> ---
>
> Key: LUCENE-7318
> URL: https://issues.apache.org/jira/browse/LUCENE-7318
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-7318.patch
>
>
> Spinoff from LUCENE-7314:
> {{StandardAnalyzer}} has progressed substantially since we broke out the 
> analyzers module ... it now follows a real Unicode standard (UAX #29 Unicode 
> Text Segmentation).  It's also much faster than it used to be, since it 
> switched to JFlex a while back.  Many bug fixes, etc.
> I think it would make a good default for most Lucene users, and we should 
> graduate it from the analyzers module into core, and make it the default for 
> {{IndexWriter}}.
> It's really quite crazy that users must go digging in the analyzers module to 
> get started with Lucene ... we don't make them dig through the codecs module 
> to find a good default codec ...
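For context, this is what getting started looks like on 6.x, where StandardAnalyzer still lives in the separate lucene-analyzers-common module. A minimal sketch only, with a made-up index path:
{code}
import java.nio.file.Paths;

import org.apache.lucene.analysis.standard.StandardAnalyzer; // requires lucene-analyzers-common
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field.Store;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.FSDirectory;

public class GettingStartedSketch {
  public static void main(String[] args) throws Exception {
    // Even the smallest indexing program needs two jars today: lucene-core for
    // IndexWriter and lucene-analyzers-common for StandardAnalyzer.
    try (IndexWriter writer = new IndexWriter(FSDirectory.open(Paths.get("/tmp/demo-index")),
                                              new IndexWriterConfig(new StandardAnalyzer()))) {
      Document doc = new Document();
      doc.add(new TextField("body", "hello lucene", Store.YES));
      writer.addDocument(doc);
    }
  }
}
{code}
Once StandardAnalyzer graduates into core, the analyzer can come from the same jar as IndexWriter, which is the simplification the issue is after.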



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7318) Graduate StandardAnalyzer out of analyzers module into core

2016-06-14 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15330754#comment-15330754
 ] 

Steve Rowe commented on LUCENE-7318:


Mike, you marked this as fixed in 6.2, but AFAICT you didn't commit to 
branch_6x?

> Graduate StandardAnalyzer out of analyzers module into core
> ---
>
> Key: LUCENE-7318
> URL: https://issues.apache.org/jira/browse/LUCENE-7318
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-7318.patch
>
>
> Spinoff from LUCENE-7314:
> {{StandardAnalyzer}} has progressed substantially since we broke out the 
> analyzers module ... it now follows a real Unicode standard (UAX #29 Unicode 
> Text Segmentation).  It's also much faster than it used to be, since it 
> switched to JFlex a while back.  Many bug fixes, etc.
> I think it would make a good default for most Lucene users, and we should 
> graduate it from the analyzers module into core, and make it the default for 
> {{IndexWriter}}.
> It's really quite crazy that users must go digging in the analyzers module to 
> get started with Lucene ... we don't make them dig through the codecs module 
> to find a good default codec ...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7318) Graduate StandardAnalyzer out of analyzers module into core

2016-06-14 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-7318.

Resolution: Fixed

> Graduate StandardAnalyzer out of analyzers module into core
> ---
>
> Key: LUCENE-7318
> URL: https://issues.apache.org/jira/browse/LUCENE-7318
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-7318.patch
>
>
> Spinoff from LUCENE-7314:
> {{StandardAnalyzer}} has progressed substantially since we broke out the 
> analyzers module ... it now follows a real Unicode standard (UAX #29 Unicode 
> Text Segmentation).  It's also much faster than it used to be, since it 
> switched to JFlex a while back.  Many bug fixes, etc.
> I think it would make a good default for most Lucene users, and we should 
> graduate it from the analyzers module into core, and make it the default for 
> {{IndexWriter}}.
> It's really quite crazy that users must go digging in the analyzers module to 
> get started with Lucene ... we don't make them dig through the codecs module 
> to find a good default codec ...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-5.5 - Build # 14 - Still Failing

2016-06-14 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.5/14/

No tests ran.

Build Log:
[...truncated 39773 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.5/lucene/build/smokeTestRelease/dist
 [copy] Copying 461 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.5/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.5/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.7 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.7
   [smoker] Java 1.8 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.5/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (18.1 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-5.5.2-src.tgz...
   [smoker] 28.7 MB in 0.03 sec (1112.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.5.2.tgz...
   [smoker] 63.4 MB in 0.06 sec (1095.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.5.2.zip...
   [smoker] 73.9 MB in 0.07 sec (1059.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-5.5.2.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 6190 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] test demo with 1.8...
   [smoker]   got 6190 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.5.2.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 6190 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] test demo with 1.8...
   [smoker]   got 6190 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.5.2-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 7 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.7...
   [smoker]   got 220 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] generate javadocs w/ Java 7...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 220 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker]   Backcompat testing not required for release 6.0.1 because 
it's not less than 5.5.2
   [smoker]   Backcompat testing not required for release 6.0.0 because 
it's not less than 5.5.2
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.00 sec (35.9 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-5.5.2-src.tgz...
   [smoker] 37.6 MB in 0.76 sec (49.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-5.5.2.tgz...
   [smoker] 130.4 MB in 1.74 sec (75.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-5.5.2.zip...
   [smoker] 138.3 MB in 2.19 sec (63.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-5.5.2.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-5.5.2.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.5/lucene/build/smokeTestRelease/tmp/unpack/solr-5.5.2/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.5/lucene/build/smokeTestRelease/tmp/unpack/solr-5.5.2/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 7 ...
   [smoker] test solr example w/ Java 7...
   [smoker]   start Solr instance 

[jira] [Reopened] (SOLR-8940) group.sort broken, can throw AIOOBE if clause length differs from sort param, or cast exception if datatypes are incompatible with sort clause types

2016-06-14 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe reopened SOLR-8940:
--

Reopening to backport for 6.0.2/5.6/5.5.2

> group.sort broken, can throw AIOOBE if clause length differs from sort 
> param, or cast exception if datatypes are incompatible with sort clause types
> --
>
> Key: SOLR-8940
> URL: https://issues.apache.org/jira/browse/SOLR-8940
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 5.5, 5.5.1, 6.0, 6.0.1
>Reporter: Henrik
>Assignee: Hoss Man
>Priority: Blocker
>  Labels: 5.5, arrayindexoutofbounds, exception, query, 
> regression, search
> Fix For: 6.1, master (7.0)
>
> Attachments: 
> 0001-SOLR-8940-Avoid-ArrayIndexOutOfBoundsException-in-To.patch, 
> SOLR-8940.patch, schema-types.xml, schema.xml, solr-query-exception.txt, 
> solrconfig.xml
>
>
> We get an ArrayIndexOutOfBoundsException when searching after upgrading to 
> solr 5.5.
> Here's the query:
> {code}
> "params":{
>   "q":"*:*",
>   "group.sort":"priceAmount asc,rnd desc",
>   "indent":"on",
>   "fl":"priceAmount,flightTripId,brand,slob,cabinType,tripDuration",
>   "group.limit":"100",
>   "fq":["searchId:e31a0c58-9056-4297-8d70-049017ba4906",
> "doctype:offer",
> "flightTripId:(DY6020421-SK2360519 OR DY6020421-SK2600519 OR 
> DY6020421-SK2620519 OR DY6020421-SK2740519 OR DY6020421-SK2900519 OR 
> DY6020421-SK2860519 OR DY6040421-SK2380519 OR DY6040421-SK2440519 OR 
> DY6040421-SK2480519 OR DY6040421-SK2520519 OR DY6040421-SK2600519 OR 
> DY6040421-SK2620519 OR DY6040421-SK2720519 OR DY6040421-SK2740519 OR 
> DY6040421-SK2800519 OR DY6040421-SK2840519 OR DY6040421-SK2820519 OR 
> DY6060421-SK2480519 OR DY6060421-SK2740519 OR DY6060421-SK2800519 OR 
> DY6060421-SK2840519 OR DY6060421-SK2900519 OR DY6060421-SK2860519 OR 
> DY6060421-SK2820519 OR DY6080421-SK2440519)",
> "maximumLegDuration:[* TO 180]",
> "departureAirportLeg1:(OSL)",
> "(arrivalAirportLeg2:(OSL) OR (* NOT arrivalAirportLeg2:*))",
> "arrivalAirportLeg1:(BGO)",
> "(departureAirportLeg2:(BGO) OR (* NOT departureAirportLeg2:*))"],
>   "group.ngroups":"true",
>   "wt":"json",
>   "group.field":"flightTripId",
>   "group":"true"}}
> {code}
> And here's the exception:
> {code}
> ERROR [20160404T104846,333] qtp315138752-3037 
> org.apache.solr.servlet.HttpSolrCall - 
> null:java.lang.ArrayIndexOutOfBoundsException: 1
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.TopGroupsResultTransformer.transformToNativeShardDoc(TopGroupsResultTransformer.java:175)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.TopGroupsResultTransformer.transformToNative(TopGroupsResultTransformer.java:137)
> at 
> org.apache.solr.search.grouping.distributed.responseprocessor.TopGroupsShardResponseProcessor.process(TopGroupsShardResponseProcessor.java:129)
> at 
> org.apache.solr.handler.component.QueryComponent.handleGroupedResponses(QueryComponent.java:750)
> at 
> org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:733)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:405)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2082)
> {code}
> The exception is thrown at the last line here 
> (TopGroupsResultTransformer.java line 175):
> {code}
>   protected ScoreDoc[] transformToNativeShardDoc(List<NamedList<Object>> documents, Sort groupSort, String shard,
>                                                  IndexSchema schema) {
>     [...]
>     for (NamedList<Object> document : documents) {
>       [...]
>       Object sortValuesVal = document.get("sortValues");
>       if (sortValuesVal != null) {
>         sortValues = ((List) sortValuesVal).toArray();
>         for (int k = 0; k < sortValues.length; k++) {
>           SchemaField field = groupSort.getSort()[k].getField() != null
>               ? schema.getFieldOrNull(groupSort.getSort()[k].getField()) : null;
> {code}
> It's not obvious to me that {{sortValues.length == 
> groupSort.getSort().length}}, but I guess there's some logic behind it :)
> I have attached the schema and json result.
> The problem disappears when rolling back to 5.4.0.
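To make the reporter's observation concrete, here is a tiny standalone sketch of the clause-count mismatch (an illustration, not the committed fix): the request above has no top-level {{sort}}, so it defaults to a single "score desc" clause, while {{group.sort}} carries two clauses; indexing the one by the length of the other overruns the array at k == 1, matching the "ArrayIndexOutOfBoundsException: 1" in the log.
{code}
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortField;

public class SortClauseMismatchSketch {
  public static void main(String[] args) {
    // Top-level sort of the request: default "score desc", a single clause.
    Sort topLevelSort = new Sort(SortField.FIELD_SCORE);
    // One serialized sort value per group.sort clause ("priceAmount asc, rnd desc").
    Object[] sortValuesFromShard = new Object[] { 199.0f, 0.42f };

    SortField[] clauses = topLevelSort.getSort();          // length 1
    for (int k = 0; k < sortValuesFromShard.length; k++) { // iterates k = 0 and k = 1
      System.out.println(clauses[k]);                      // AIOOBE: 1 on the second pass
    }
  }
}
{code}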



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, 

[jira] [Assigned] (SOLR-9209) DIH JdbcDataSource - improve extensibility part 2

2016-06-14 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev reassigned SOLR-9209:
--

Assignee: Mikhail Khludnev

> DIH JdbcDataSource - improve extensibility part 2
> -
>
> Key: SOLR-9209
> URL: https://issues.apache.org/jira/browse/SOLR-9209
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - DataImportHandler
>Reporter: Kristine Jetzke
>Assignee: Mikhail Khludnev
> Attachments: SOLR-9209.patch
>
>
> This is a follow-up to SOLR-8616. Due to changes in SOLR-8612, it is no longer 
> possible to use a different {{ResultSetIterator}} class without additional 
> modifications. The attached patch solves this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-2605) queryparser parses on whitespace

2016-06-14 Thread Otis Gospodnetic (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15330012#comment-15330012
 ] 

Otis Gospodnetic edited comment on LUCENE-2605 at 6/14/16 8:31 PM:
---

[~steve_rowe] you are about to become everyone's hero and a household name! :)
Is this going to be in the upcoming 6.1?



was (Author: otis):
[~steve_rowe] you are about to become everyone's here and a household name! :)
Is this going to be in the upcoming 6.1?


> queryparser parses on whitespace
> 
>
> Key: LUCENE-2605
> URL: https://issues.apache.org/jira/browse/LUCENE-2605
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Reporter: Robert Muir
>Assignee: Steve Rowe
> Attachments: LUCENE-2605.patch, LUCENE-2605.patch, LUCENE-2605.patch
>
>
> The queryparser parses input on whitespace, and sends each whitespace 
> separated term to its own independent token stream.
> This breaks the following at query-time, because they can't see across 
> whitespace boundaries:
> * n-gram analysis
> * shingles 
> * synonyms (especially multi-word for whitespace-separated languages)
> * languages where a 'word' can contain whitespace (e.g. vietnamese)
> It's also rather unexpected, as users think their 
> charfilters/tokenizers/tokenfilters will do the same thing at index and 
> querytime, but
> in many cases they can't. Instead, preferably the queryparser would parse 
> around only real 'operators'.
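A minimal sketch of the behavior being described, using the classic query parser and a stock StandardAnalyzer (field name and text are arbitrary): the input is split on whitespace before analysis, so each chunk reaches the analyzer on its own.
{code}
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.Query;

public class WhitespaceSplitSketch {
  public static void main(String[] args) throws Exception {
    QueryParser qp = new QueryParser("body", new StandardAnalyzer());
    Query q = qp.parse("new york");
    // Prints "body:new body:york": the parser analyzed "new" and "york" separately,
    // so a multi-word synonym, shingle, or n-gram filter never saw "new york" whole.
    System.out.println(q);
  }
}
{code}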



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8648) Support selective clearing up of stored async collection API responses

2016-06-14 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15330498#comment-15330498
 ] 

Anshum Gupta commented on SOLR-8648:


[~varunthacker] yes. I've removed this from both the places.

> Support selective clearing up of stored async collection API responses
> --
>
> Key: SOLR-8648
> URL: https://issues.apache.org/jira/browse/SOLR-8648
> Project: Solr
>  Issue Type: New Feature
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Fix For: 5.5, 6.0
>
> Attachments: SOLR-8648.patch, SOLR-8648.patch, SOLR-8648.patch, 
> SOLR-8648.patch
>
>
> The only way to clear up stored collection API responses right now is by 
> sending in '-1' as the request id in the REQUESTSTATUS call. It makes a lot 
> of sense to support selective deletion of stored responses so the ids could 
> be reused.
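For reference, the existing flush-everything workaround mentioned above can be exercised with a plain HTTP call to the Collections API; the host, port, and response format below are assumptions for illustration.
{code}
import java.io.InputStream;
import java.net.URL;
import java.util.Scanner;

public class FlushAsyncResponsesSketch {
  public static void main(String[] args) throws Exception {
    // REQUESTSTATUS with requestid=-1 clears *all* stored async responses at once,
    // which is exactly the all-or-nothing behavior this issue sets out to improve.
    URL url = new URL("http://localhost:8983/solr/admin/collections"
        + "?action=REQUESTSTATUS&requestid=-1&wt=json");
    try (InputStream in = url.openStream(); Scanner s = new Scanner(in, "UTF-8")) {
      while (s.hasNextLine()) {
        System.out.println(s.nextLine());
      }
    }
  }
}
{code}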



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Solr-Artifacts-6.x - Build # 86 - Failure

2016-06-14 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Solr-Artifacts-6.x/86/

No tests ran.

Build Log:
[...truncated 43 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/solr/build.xml:467: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/lucene/common-build.xml:2496:
 Can't get https://issues.apache.org/jira/rest/api/2/project/SOLR to 
/x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/solr/build/solr/src-export/solr/docs/changes/jiraVersionList.json

Total time: 10 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Publishing Javadoc
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-Tests-5.5-Java8 - Build # 28 - Failure

2016-06-14 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.5-Java8/28/

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.schema.TestManagedSchemaAPI

Error Message:
ObjectTracker found 2 object(s) that were not released!!! 
[MockDirectoryWrapper, MockDirectoryWrapper]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 2 object(s) that were not 
released!!! [MockDirectoryWrapper, MockDirectoryWrapper]
at __randomizedtesting.SeedInfo.seed([2C2DDE6F9F949669]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:228)
at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.security.BasicAuthIntegrationTest.testBasics

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([2C2DDE6F9F949669]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.security.BasicAuthIntegrationTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([2C2DDE6F9F949669]:0)




Build Log:
[...truncated 12191 lines...]
   [junit4] Suite: org.apache.solr.schema.TestManagedSchemaAPI
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.5-Java8/solr/build/solr-core/test/J2/temp/solr.schema.TestManagedSchemaAPI_2C2DDE6F9F949669-001/init-core-data-001
   [junit4]   2> 2317766 INFO  
(SUITE-TestManagedSchemaAPI-seed#[2C2DDE6F9F949669]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false)
   [junit4]   2> 2317767 INFO  
(TEST-TestManagedSchemaAPI.test-seed#[2C2DDE6F9F949669]) [] 
o.a.s.SolrTestCaseJ4 ###Starting test
   [junit4]   2> 2317767 INFO  
(TEST-TestManagedSchemaAPI.test-seed#[2C2DDE6F9F949669]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 2317768 INFO  (Thread-6185) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 2317768 INFO  (Thread-6185) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 2317868 INFO  
(TEST-TestManagedSchemaAPI.test-seed#[2C2DDE6F9F949669]) [] 
o.a.s.c.ZkTestServer start zk server on port:44466
   [junit4]   2> 2317868 INFO  
(TEST-TestManagedSchemaAPI.test-seed#[2C2DDE6F9F949669]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 2317868 INFO  
(TEST-TestManagedSchemaAPI.test-seed#[2C2DDE6F9F949669]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect 

[jira] [Resolved] (SOLR-9034) Atomic updates do not work with CopyField

2016-06-14 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved SOLR-9034.
--
   Resolution: Fixed
Fix Version/s: 5.5.2
   5.6

Here are the 5.6 and 5.5.2 commits, since the ASF bot doesn't seem to be 
working:
-
branch_5x: SOLR-9034: fix atomic updates for copyField w/ docValues
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/5f91aa95
-
branch_5_5: SOLR-9034: fix atomic updates for copyField w/ docValues
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/599d
-
branch_5_5: SOLR-9034: Add 5.5.2 CHANGES entry
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/6f06e56c
-


> Atomic updates do not work with CopyField
> --
>
> Key: SOLR-9034
> URL: https://issues.apache.org/jira/browse/SOLR-9034
> Project: Solr
>  Issue Type: Bug
>  Components: Server
>Affects Versions: 5.5
>Reporter: Karthik Ramachandran
>Assignee: Yonik Seeley
>  Labels: atomicupdate
> Fix For: 5.6, 6.1, 5.5.2, 6.0.1
>
> Attachments: SOLR-9034.patch, SOLR-9034.patch, SOLR-9034.patch
>
>
> Atomic updates do not work when CopyField has docValues enabled.  Below is 
> the sample schema
> {code:xml|title:schema.xml}
> indexed="true" stored="true" />
> indexed="true" stored="true" />
> indexed="true" stored="true" />
> docValues="true" indexed="true" stored="false" useDocValuesAsStored="false" />
> docValues="true" indexed="true" stored="false" useDocValuesAsStored="false" />
> docValues="true" indexed="true" stored="false" useDocValuesAsStored="false" />
> {code}
> Below is the exception
> {noformat}
> Caused by: java.lang.IllegalArgumentException: DocValuesField
>  "copy_single_i_dvn" appears more than once in this document 
> (only one value is allowed per field)
> {noformat}
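A hypothetical SolrJ sketch of the kind of update that trips this; the collection, URL, and value are made up, and the schema is assumed to copyField {{single_i_dvn}} into the docValues-only {{copy_single_i_dvn}} named in the stack trace.
{code}
import java.util.Collections;

import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class AtomicUpdateSketch {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "1");
      // Atomic "set" on a source field whose copyField destination is docValues-only;
      // on 5.5 this path produces the "appears more than once" error shown above.
      doc.addField("single_i_dvn", Collections.singletonMap("set", 42));
      client.add("collection1", doc);
      client.commit("collection1");
    }
  }
}
{code}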



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_92) - Build # 16987 - Failure!

2016-06-14 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16987/
Java: 64bit/jdk1.8.0_92 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.schema.TestManagedSchemaAPI

Error Message:
ObjectTracker found 4 object(s) that were not released!!! [TransactionLog, 
MockDirectoryWrapper, MDCAwareThreadPoolExecutor, MockDirectoryWrapper]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 4 object(s) that were not 
released!!! [TransactionLog, MockDirectoryWrapper, MDCAwareThreadPoolExecutor, 
MockDirectoryWrapper]
at __randomizedtesting.SeedInfo.seed([B91A81A8798D8A12]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:256)
at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 10998 lines...]
   [junit4] Suite: org.apache.solr.schema.TestManagedSchemaAPI
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.schema.TestManagedSchemaAPI_B91A81A8798D8A12-001/init-core-data-001
   [junit4]   2> 379391 INFO  
(SUITE-TestManagedSchemaAPI-seed#[B91A81A8798D8A12]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false) via: 
@org.apache.solr.util.RandomizeSSL(reason=, value=NaN, ssl=NaN, clientAuth=NaN)
   [junit4]   2> 379392 INFO  
(SUITE-TestManagedSchemaAPI-seed#[B91A81A8798D8A12]-worker) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 379392 INFO  (Thread-805) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 379392 INFO  (Thread-805) [] o.a.s.c.ZkTestServer Starting 
server
   [junit4]   2> 379492 INFO  
(SUITE-TestManagedSchemaAPI-seed#[B91A81A8798D8A12]-worker) [] 
o.a.s.c.ZkTestServer start zk server on port:39973
   [junit4]   2> 379492 INFO  
(SUITE-TestManagedSchemaAPI-seed#[B91A81A8798D8A12]-worker) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 379493 INFO  
(SUITE-TestManagedSchemaAPI-seed#[B91A81A8798D8A12]-worker) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 379494 INFO  (zkCallback-20440-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@64f72d72 
name:ZooKeeperConnection Watcher:127.0.0.1:39973 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 379494 INFO  
(SUITE-TestManagedSchemaAPI-seed#[B91A81A8798D8A12]-worker) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 

[jira] [Resolved] (LUCENE-6171) Make lucene completely write-once

2016-06-14 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-6171.

   Resolution: Fixed
Fix Version/s: 6.2
   master (7.0)

> Make lucene completely write-once
> -
>
> Key: LUCENE-6171
> URL: https://issues.apache.org/jira/browse/LUCENE-6171
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-6171.patch
>
>
> Today, lucene is mostly write-once, but not always, and these are just very 
> exceptional cases. 
> This is an invitation for exceptional bugs: (and we have occasional test 
> failures when doing "no-wait close" because of this). 
> I would prefer it if we didn't try to delete files before we open them for 
> write, and if we opened them with the CREATE_NEW option by default to throw 
> an exception, if the file already exists.
> The trickier parts of the change are going to be IndexFileDeleter and 
> exceptions on merge / CFS construction logic.
> Overall for IndexFileDeleter I think the least invasive option might be to 
> only delete files older than the current commit point? This will ensure that 
> inflateGens() always avoids trying to overwrite any files that were from an 
> aborted segment. 
> For CFS construction/exceptions on merge, we really need to remove the custom 
> "sniping" of index files there and let only IndexFileDeleter delete files. My 
> previous failed approach involved always consistently using 
> TrackingDirectoryWrapper, but it failed, and only in backwards compatibility 
> tests, because of LUCENE-6146 (but i could never figure that out). I am 
> hoping this time I will be successful :)
> Longer term we should think about more simplifications, progress has been 
> made on LUCENE-5987, but I think overall we still try to be a superhero for 
> exceptions on merge?
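The CREATE_NEW idea from the description, sketched with plain NIO rather than Lucene's Directory abstraction (the file name is a placeholder): opening for write fails fast if the file already exists instead of silently overwriting it.
{code}
import java.nio.channels.FileChannel;
import java.nio.file.Paths;

import static java.nio.file.StandardOpenOption.CREATE_NEW;
import static java.nio.file.StandardOpenOption.WRITE;

public class WriteOnceSketch {
  public static void main(String[] args) throws Exception {
    // Throws java.nio.file.FileAlreadyExistsException if "_0.cfs" is already present,
    // which is the write-once guarantee the issue wants by default.
    try (FileChannel out = FileChannel.open(Paths.get("_0.cfs"), WRITE, CREATE_NEW)) {
      // ... write segment data here ...
    }
  }
}
{code}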



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release Lucene/Solr 6.1.0 RC1

2016-06-14 Thread Michael McCandless
+1

SUCCESS! [0:43:53.129429]

I also edited the Lucene release notes a bit ...

Mike McCandless

http://blog.mikemccandless.com

On Tue, Jun 14, 2016 at 9:48 AM, David Smiley 
wrote:

> +1 SUCCESS!  SUCCESS! [0:50:54.900220]
>
> On Tue, Jun 14, 2016 at 7:32 AM Martijn v Groningen <
> martijn.v.gronin...@gmail.com> wrote:
>
>> +1 SUCCESS! [0:40:22.702419]
>>
>> On 14 June 2016 at 02:39, Steve Rowe  wrote:
>>
>>> I’ve committed fixes for all three problems.
>>>
>>> --
>>> Steve
>>> www.lucidworks.com
>>>
>>> > On Jun 13, 2016, at 2:46 PM, Steve Rowe  wrote:
>>> >
>>> > Smoke tester was happy: SUCCESS! [0:23:40.900240]
>>> >
>>> > Except for the below-described minor issues: changes, docs and
>>> javadocs look good:
>>> >
>>> > * Broken description section links from documentation to javadocs <
>>> https://issues.apache.org/jira/browse/LUCENE-7338>
>>> > * Solr’s CHANGES.txt is missing a “Versions of Major Components”
>>> section.
>>> > * Solr’s Changes.html has a section "Upgrading from Solr any prior
>>> release” that is not formatted properly (the hyphens are put into a bullet
>>> item below)
>>> >
>>> > +0 to release.  I’ll work on the above and backport to the 6.1 branch,
>>> in case there is another RC.
>>> >
>>> > --
>>> > Steve
>>> > www.lucidworks.com
>>> >
>>> >> On Jun 13, 2016, at 5:15 AM, Adrien Grand  wrote:
>>> >>
>>> >> Please vote for release candidate 1 for Lucene/Solr 6.1.0
>>> >>
>>> >>
>>> >> The artifacts can be downloaded from:
>>> >>
>>> >>
>>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.1.0-RC1-rev4726c5b2d2efa9ba160b608d46a977d0a6b83f94/
>>> >>
>>> >> You can run the smoke tester directly with this command:
>>> >>
>>> >>
>>> >> python3 -u dev-tools/scripts/smokeTestRelease.py \
>>> >>
>>> >>
>>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.1.0-RC1-rev4726c5b2d2efa9ba160b608d46a977d0a6b83f94/
>>> >> Here is my +1.
>>> >> SUCCESS! [0:36:57.750669]
>>> >
>>>
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>>>
>>
>>
>> --
>> Met vriendelijke groet,
>>
>> Martijn van Groningen
>>
> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com
>


[jira] [Updated] (LUCENE-2605) queryparser parses on whitespace

2016-06-14 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-2605:
---
Fix Version/s: (was: 6.0)
   (was: 4.9)

> queryparser parses on whitespace
> 
>
> Key: LUCENE-2605
> URL: https://issues.apache.org/jira/browse/LUCENE-2605
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Reporter: Robert Muir
>Assignee: Steve Rowe
> Attachments: LUCENE-2605.patch, LUCENE-2605.patch, LUCENE-2605.patch
>
>
> The queryparser parses input on whitespace, and sends each whitespace 
> separated term to its own independent token stream.
> This breaks the following at query-time, because they can't see across 
> whitespace boundaries:
> * n-gram analysis
> * shingles 
> * synonyms (especially multi-word for whitespace-separated languages)
> * languages where a 'word' can contain whitespace (e.g. vietnamese)
> It's also rather unexpected, as users think their 
> charfilters/tokenizers/tokenfilters will do the same thing at index and 
> querytime, but
> in many cases they can't. Instead, preferably the queryparser would parse 
> around only real 'operators'.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-2605) queryparser parses on whitespace

2016-06-14 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15330114#comment-15330114
 ] 

Steve Rowe commented on LUCENE-2605:


No, it will not be in 6.1, but I expect it will make 6.2.

> queryparser parses on whitespace
> 
>
> Key: LUCENE-2605
> URL: https://issues.apache.org/jira/browse/LUCENE-2605
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Reporter: Robert Muir
>Assignee: Steve Rowe
> Fix For: 4.9, 6.0
>
> Attachments: LUCENE-2605.patch, LUCENE-2605.patch, LUCENE-2605.patch
>
>
> The queryparser parses input on whitespace, and sends each whitespace 
> separated term to its own independent token stream.
> This breaks the following at query-time, because they can't see across 
> whitespace boundaries:
> * n-gram analysis
> * shingles 
> * synonyms (especially multi-word for whitespace-separated languages)
> * languages where a 'word' can contain whitespace (e.g. vietnamese)
> It's also rather unexpected, as users think their 
> charfilters/tokenizers/tokenfilters will do the same thing at index and 
> querytime, but
> in many cases they can't. Instead, preferably the queryparser would parse 
> around only real 'operators'.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Windows (64bit/jdk1.8.0_92) - Build # 248 - Still Failing!

2016-06-14 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/248/
Java: 64bit/jdk1.8.0_92 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 70226 lines...]
BUILD FAILED
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\build.xml:740: The following 
error occurred while executing this line:
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\build.xml:101: The following 
error occurred while executing this line:
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build.xml:632: The 
following error occurred while executing this line:
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build.xml:607: The 
following error occurred while executing this line:
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\lucene\common-build.xml:2496:
 Can't get https://issues.apache.org/jira/rest/api/2/project/SOLR to 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\docs\changes\jiraVersionList.json

Total time: 112 minutes 10 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (LUCENE-6968) LSH Filter

2016-06-14 Thread Andy Hind (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15330032#comment-15330032
 ] 

Andy Hind commented on LUCENE-6968:
---

Hi Tommaso - are you planning to merge this to 6.x?

> LSH Filter
> --
>
> Key: LUCENE-6968
> URL: https://issues.apache.org/jira/browse/LUCENE-6968
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Cao Manh Dat
>Assignee: Tommaso Teofili
> Fix For: master (7.0)
>
> Attachments: LUCENE-6968.4.patch, LUCENE-6968.5.patch, 
> LUCENE-6968.6.patch, LUCENE-6968.patch, LUCENE-6968.patch, LUCENE-6968.patch
>
>
> I'm planning to implement LSH, which supports queries like this:
> {quote}
> Find similar documents that have a similarity score of 0.8 or higher with a given 
> document. The similarity measure can be cosine, Jaccard, Euclidean, etc.
> {quote}
> For example. Given following corpus
> {quote}
> 1. Solr is an open source search engine based on Lucene
> 2. Solr is an open source enterprise search engine based on Lucene
> 3. Solr is an popular open source enterprise search engine based on Lucene
> 4. Apache Lucene is a high-performance, full-featured text search engine 
> library written entirely in Java
> {quote}
> We want to find documents that have a Jaccard similarity of 0.6 or higher with this 
> doc
> {quote}
> Solr is an open source search engine
> {quote}
> It will return only docs 1,2 and 3 (MoreLikeThis will also return doc 4)
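To make the numbers concrete, here is a toy exact Jaccard computation over whitespace token sets; the LSH/MinHash filter in the attached patches approximates this measure rather than computing it exactly, so its scores can differ slightly.
{code}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Locale;
import java.util.Set;

public class JaccardSketch {
  static Set<String> tokens(String text) {
    return new HashSet<>(Arrays.asList(text.toLowerCase(Locale.ROOT).split("\\s+")));
  }

  static double jaccard(String a, String b) {
    Set<String> intersection = tokens(a);
    intersection.retainAll(tokens(b));
    Set<String> union = tokens(a);
    union.addAll(tokens(b));
    return (double) intersection.size() / union.size();
  }

  public static void main(String[] args) {
    String query = "Solr is an open source search engine";
    // 0.7: all 7 query tokens appear in the 10-token document.
    System.out.println(jaccard(query, "Solr is an open source search engine based on Lucene"));
    // Far below any reasonable threshold, which is why a Jaccard cutoff excludes
    // doc 4 even though MoreLikeThis would still return it.
    System.out.println(jaccard(query,
        "Apache Lucene is a high-performance, full-featured text search engine library written entirely in Java"));
  }
}
{code}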



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-2605) queryparser parses on whitespace

2016-06-14 Thread Otis Gospodnetic (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15330012#comment-15330012
 ] 

Otis Gospodnetic commented on LUCENE-2605:
--

[~steve_rowe] you are about to become everyone's here and a household name! :)
Is this going to be in the upcoming 6.1?


> queryparser parses on whitespace
> 
>
> Key: LUCENE-2605
> URL: https://issues.apache.org/jira/browse/LUCENE-2605
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Reporter: Robert Muir
>Assignee: Steve Rowe
> Fix For: 4.9, 6.0
>
> Attachments: LUCENE-2605.patch, LUCENE-2605.patch, LUCENE-2605.patch
>
>
> The queryparser parses input on whitespace, and sends each whitespace 
> separated term to its own independent token stream.
> This breaks the following at query-time, because they can't see across 
> whitespace boundaries:
> * n-gram analysis
> * shingles 
> * synonyms (especially multi-word for whitespace-separated languages)
> * languages where a 'word' can contain whitespace (e.g. vietnamese)
> It's also rather unexpected, as users think their 
> charfilters/tokenizers/tokenfilters will do the same thing at index and 
> querytime, but
> in many cases they can't. Instead, preferably the queryparser would parse 
> around only real 'operators'.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (SOLR-9034) Atomic updates do not work with CopyField

2016-06-14 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe reopened SOLR-9034:
--

Reopening to backport to 5.6 and 5.5.2.

> Atomic updates do not work with CopyField
> --
>
> Key: SOLR-9034
> URL: https://issues.apache.org/jira/browse/SOLR-9034
> Project: Solr
>  Issue Type: Bug
>  Components: Server
>Affects Versions: 5.5
>Reporter: Karthik Ramachandran
>Assignee: Yonik Seeley
>  Labels: atomicupdate
> Fix For: 6.0.1, 6.1
>
> Attachments: SOLR-9034.patch, SOLR-9034.patch, SOLR-9034.patch
>
>
> Atomic updates do not work when CopyField has docValues enabled.  Below is 
> the sample schema
> {code:xml|title:schema.xml}
> indexed="true" stored="true" />
> indexed="true" stored="true" />
> indexed="true" stored="true" />
> docValues="true" indexed="true" stored="false" useDocValuesAsStored="false" />
> docValues="true" indexed="true" stored="false" useDocValuesAsStored="false" />
> docValues="true" indexed="true" stored="false" useDocValuesAsStored="false" />
> {code}
> Below is the exception
> {noformat}
> Caused by: java.lang.IllegalArgumentException: DocValuesField
>  "copy_single_i_dvn" appears more than once in this document 
> (only one value is allowed per field)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.1-Linux (64bit/jdk1.8.0_92) - Build # 35 - Failure!

2016-06-14 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.1-Linux/35/
Java: 64bit/jdk1.8.0_92 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.handler.TestReplicationHandler.doTestStressReplication

Error Message:
timed out waiting for collection1 startAt time to exceed: Tue Jun 14 18:32:12 
CEST 2016

Stack Trace:
java.lang.AssertionError: timed out waiting for collection1 startAt time to 
exceed: Tue Jun 14 18:32:12 CEST 2016
at 
__randomizedtesting.SeedInfo.seed([EA550648A2D72F4C:31FE068EA7FF46FF]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.handler.TestReplicationHandler.watchCoreStartAt(TestReplicationHandler.java:1508)
at 
org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:858)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11035 lines...]
   [junit4] Suite: 

[jira] [Commented] (SOLR-9035) New cwiki page: IndexUpgrader

2016-06-14 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15329832#comment-15329832
 ] 

Cassandra Targett commented on SOLR-9035:
-

Still thinking about where to put this. The Upgrading Solr page is really a 
release notes kind of page, but the only page we have about upgrading is 
https://cwiki.apache.org/confluence/display/solr/Upgrading+a+Solr+Cluster. 

My current thinking is that I will move it to be a child of the Upgrading a 
Solr Cluster page, and hopefully soon we will be able to make a dedicated 
Upgrade section that fills some of the additional gaps around this topic.

> New cwiki page: IndexUpgrader
> -
>
> Key: SOLR-9035
> URL: https://issues.apache.org/jira/browse/SOLR-9035
> Project: Solr
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 6.0
>Reporter: Bram Van Dam
>Assignee: Cassandra Targett
>  Labels: documentation
> Attachments: indexupgrader.html
>
>
> The cwiki does not contain any IndexUpgrader documentation, but it is 
> mentioned in passing in the "Major Changes"-pages.
> I'm attaching a file containing some basic usage instructions and adminitions 
> found in the IndexUpgrader javadoc. 
> Once the page is created, it would ideally be linked to from the Major 
> Changes page as well as the Upgrading Solr page.
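Since the attached page documents IndexUpgrader usage, here is a minimal sketch of the programmatic form (the command-line tool is a thin wrapper around the same class); the index path is a placeholder, and as the javadoc warns the rewrite is one-way, so back the index up first.
{code}
import java.nio.file.Paths;

import org.apache.lucene.index.IndexUpgrader;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class UpgradeIndexSketch {
  public static void main(String[] args) throws Exception {
    // Rewrites every segment of the index in place into the current Lucene format.
    try (Directory dir = FSDirectory.open(Paths.get("/var/solr/data/collection1/index"))) {
      new IndexUpgrader(dir).upgrade();
    }
  }
}
{code}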



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.5-Java7 - Build # 27 - Failure

2016-06-14 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.5-Java7/27/

3 tests failed.
FAILED:  
org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdateWithTimeDelay

Error Message:
Could not find collection : c1

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : c1
at 
__randomizedtesting.SeedInfo.seed([24268D0E72E1B317:5BB83A8B1B839E9D]:0)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:170)
at 
org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdate(ZkStateReaderTest.java:129)
at 
org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdateWithTimeDelay(ZkStateReaderTest.java:52)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.security.BasicAuthIntegrationTest.testBasics

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite 

[JENKINS] Lucene-Solr-5.5-Linux (64bit/jdk1.7.0_80) - Build # 281 - Failure!

2016-06-14 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Linux/281/
Java: 64bit/jdk1.7.0_80 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.schema.TestManagedSchemaAPI.test

Error Message:
Error from server at http://127.0.0.1:45734/solr/testschemaapi_shard1_replica2: 
ERROR: [doc=2] unknown field 'myNewField1'

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:45734/solr/testschemaapi_shard1_replica2: ERROR: 
[doc=2] unknown field 'myNewField1'
at 
__randomizedtesting.SeedInfo.seed([A6FA5F2CCE9B44E0:2EAE60F660672918]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:632)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:981)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at 
org.apache.solr.schema.TestManagedSchemaAPI.testAddFieldAndDocument(TestManagedSchemaAPI.java:101)
at 
org.apache.solr.schema.TestManagedSchemaAPI.test(TestManagedSchemaAPI.java:69)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Commented] (SOLR-7374) Backup/Restore should provide a param for specifying the directory implementation it should use

2016-06-14 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15329707#comment-15329707
 ] 

Hrishikesh Gadre commented on SOLR-7374:


Looking into this...

> Backup/Restore should provide a param for specifying the directory 
> implementation it should use
> ---
>
> Key: SOLR-7374
> URL: https://issues.apache.org/jira/browse/SOLR-7374
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Assignee: Mark Miller
> Fix For: 5.2, 6.0
>
> Attachments: SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, 
> SOLR-7374.patch
>
>
> Currently when we create a backup we use SimpleFSDirectory to write the 
> backup indexes. Similarly during a restore we open the index using 
> FSDirectory.open . 
> We should provide a param called {{directoryImpl}} or {{type}} which will be 
> used to specify the Directory implementation to backup the index. 
> Likewise during a restore you would need to specify the directory impl which 
> was used during backup so that the index can be opened correctly.
> This param will address the problem that currently if a user is running Solr 
> on HDFS there is no way to use the backup/restore functionality as the 
> directory is hardcoded.
> With this one could be running Solr on a local FS but backup the index on 
> HDFS etc.
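
To make the proposal concrete, here is a rough sketch of the kind of selection a 
{{directoryImpl}}/{{type}} param implies; the parameter handling and the HDFS 
branch below are illustrative only, not the committed API:
{code}
import java.io.IOException;
import java.nio.file.Path;

import org.apache.lucene.store.Directory;
import org.apache.lucene.store.SimpleFSDirectory;

// Illustrative only: the parameter name and the HDFS branch are hypothetical.
class BackupDirectoryFactory {
  static Directory open(String directoryImpl, Path location) throws IOException {
    if ("hdfs".equals(directoryImpl)) {
      // Would delegate to an HDFS-backed Directory implementation instead of the local FS.
      throw new UnsupportedOperationException("HDFS-backed Directory not wired in this sketch");
    }
    // Today's hard-coded behavior for backups: a local SimpleFSDirectory.
    return new SimpleFSDirectory(location);
  }
}
{code}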



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8096) Major faceting performance regressions

2016-06-14 Thread Alessandro Benedetti (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15329683#comment-15329683
 ] 

Alessandro Benedetti commented on SOLR-8096:


Hmm, actually it is still a regression.
If you were using fc/fcs without docValues, you will still see the regression.
A work-around could be to force UIF when FC/FCS is selected without docValues.
But I really don't like this approach of "hiding" legacy facet bugs by forcing 
other methods :(

What do you think?

Cheers
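
(For reference, forcing the UIF method per request from SolrJ would look roughly 
like the sketch below; the collection, field name and URL are made up.)
{code}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class UifFacetExample {
  public static void main(String[] args) throws Exception {
    try (SolrClient client = new HttpSolrClient("http://localhost:8983/solr/mycollection")) {
      SolrQuery q = new SolrQuery("*:*");
      q.setFacet(true);
      q.addFacetField("category");     // hypothetical field name
      q.set("facet.method", "uif");    // force the UnInvertedField-based method
      QueryResponse rsp = client.query(q);
      System.out.println(rsp.getFacetField("category").getValues());
    }
  }
}
{code}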

> Major faceting performance regressions
> --
>
> Key: SOLR-8096
> URL: https://issues.apache.org/jira/browse/SOLR-8096
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.0, 5.1, 5.2, 5.3, 6.0
>Reporter: Yonik Seeley
>Priority: Critical
> Attachments: simple_facets.diff
>
>
> Use of the highly optimized faceting that Solr had for multi-valued fields 
> over relatively static indexes was removed as part of LUCENE-5666, causing 
> severe performance regressions.
> Here are some quick benchmarks to gauge the damage, on a 5M document index, 
> with each field having between 0 and 5 values per document.  *Higher numbers 
> represent worse 5x performance*.
> Solr 5.4_dev faceting time as a percent of Solr 4.10.3 faceting time  
> || || Percent of index being faceted ||
> ||num_unique_values|| 10% || 50% || 90% ||
> |10      | 351.17% | 1587.08% | 3057.28% |
> |100     | 158.10% | 203.61%  | 1421.93% |
> |1000    | 143.78% | 168.01%  | 1325.87% |
> |10000   | 137.98% | 175.31%  | 1233.97% |
> |100000  | 142.98% | 159.42%  | 1252.45% |
> |1000000 | 255.15% | 165.17%  | 1236.75% |
> For example, a field with 1000 unique values in the whole index, faceting 
> with 5x took 143% of the 4x time, when ~10% of the docs in the index were 
> faceted.
> One user who brought the performance problem to our attention: 
> http://markmail.org/message/ekmqh4ocbkwxv3we
> "faceting is unusable slow since upgrade to 5.3.0" (from 4.10.3)
> The disabling of the UnInvertedField algorithm was previously discovered in 
> SOLR-7190, but we didn't know just how bad the problem was at that time.
> edit: removed "secret" adverb by request



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7374) Backup/Restore should provide a param for specifying the directory implementation it should use

2016-06-14 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15329679#comment-15329679
 ] 

Mark Miller commented on SOLR-7374:
---

Looks like for the test failure we may just have to check the backup status and 
wait at a spot where we currently are not.

> Backup/Restore should provide a param for specifying the directory 
> implementation it should use
> ---
>
> Key: SOLR-7374
> URL: https://issues.apache.org/jira/browse/SOLR-7374
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Assignee: Mark Miller
> Fix For: 5.2, 6.0
>
> Attachments: SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, 
> SOLR-7374.patch
>
>
> Currently when we create a backup we use SimpleFSDirectory to write the 
> backup indexes. Similarly during a restore we open the index using 
> FSDirectory.open . 
> We should provide a param called {{directoryImpl}} or {{type}} which will be 
> used to specify the Directory implementation to backup the index. 
> Likewise during a restore you would need to specify the directory impl which 
> was used during backup so that the index can be opened correctly.
> This param will address the problem that currently if a user is running Solr 
> on HDFS there is no way to use the backup/restore functionality as the 
> directory is hardcoded.
> With this one could be running Solr on a local FS but backup the index on 
> HDFS etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9209) DIH JdbcDataSource - improve extensibility part 2

2016-06-14 Thread Kristine Jetzke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kristine Jetzke updated SOLR-9209:
--
Attachment: SOLR-9209.patch

> DIH JdbcDataSource - improve extensibility part 2
> -
>
> Key: SOLR-9209
> URL: https://issues.apache.org/jira/browse/SOLR-9209
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - DataImportHandler
>Reporter: Kristine Jetzke
> Attachments: SOLR-9209.patch
>
>
> This is a follow up to SOLR-8616. Due to changes in SOLR-8612 it's now no 
> longer possible without additional modifications to use a different 
> {{ResultSetIterator}} class. The attached patch solves this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9209) DIH JdbcDataSource - improve extensibility part 2

2016-06-14 Thread Kristine Jetzke (JIRA)
Kristine Jetzke created SOLR-9209:
-

 Summary: DIH JdbcDataSource - improve extensibility part 2
 Key: SOLR-9209
 URL: https://issues.apache.org/jira/browse/SOLR-9209
 Project: Solr
  Issue Type: Improvement
  Components: contrib - DataImportHandler
Reporter: Kristine Jetzke


This is a follow up to SOLR-8616. Due to changes in SOLR-8612 it's now no 
longer possible without additional modifications to use a different 
{{ResultSetIterator}} class. The attached patch solves this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Solr Ref Guide for Solr 6.1

2016-06-14 Thread Cassandra Targett
With the vote for Solr 6.1 underway, we're starting to take a look at
what needs to be updated or added to the Ref Guide. I'll RM it again
this time, as long as it's done before I go on vacation in 2 weeks ;-)

I ask everyone to please take a look at items you've worked on and
please consider if they deserve an update to the Ref Guide. In
particular, we seem to be missing so far:

- new streaming expressions (random, sort, shortestPath)
- GeoJSON & other spatial updates

Since we're just getting started with updates, it probably won't be until
Friday this week (17 June) that I'll want to make the first RC.

The full list of changes we're working through is at
https://cwiki.apache.org/confluence/display/solr/Internal+-+TODO+List.

Please let me know if you have questions -

Cassandra

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [jira] [Commented] (SOLR-7374) Backup/Restore should provide a param for specifying the directory implementation it should use

2016-06-14 Thread Erick Erickson
I can beast it a bit if that's useful


On Tue, Jun 14, 2016 at 7:07 AM, David Smiley (JIRA)  wrote:
>
> [ 
> https://issues.apache.org/jira/browse/SOLR-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15329554#comment-15329554
>  ]
>
> David Smiley commented on SOLR-7374:
> 
>
> bq. One comment: Shouldn't we make BackupRepository an abstract class rather 
> than interface? We can only add default methods in Java 8, but branch6x is 
> still Java 7 right?
>
> No; we're all Java 8 now -- master & 6x.  You must be thinking of 5x which 
> was Java 7.
>
>> Backup/Restore should provide a param for specifying the directory 
>> implementation it should use
>> ---
>>
>> Key: SOLR-7374
>> URL: https://issues.apache.org/jira/browse/SOLR-7374
>> Project: Solr
>>  Issue Type: Bug
>>Reporter: Varun Thacker
>>Assignee: Mark Miller
>> Fix For: 5.2, 6.0
>>
>> Attachments: SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, 
>> SOLR-7374.patch
>>
>>
>> Currently when we create a backup we use SimpleFSDirectory to write the 
>> backup indexes. Similarly during a restore we open the index using 
>> FSDirectory.open .
>> We should provide a param called {{directoryImpl}} or {{type}} which will be 
>> used to specify the Directory implementation to backup the index.
>> Likewise during a restore you would need to specify the directory impl which 
>> was used during backup so that the index can be opened correctly.
>> This param will address the problem that currently if a user is running Solr 
>> on HDFS there is no way to use the backup/restore functionality as the 
>> directory is hardcoded.
>> With this one could be running Solr on a local FS but backup the index on 
>> HDFS etc.
>
>
>
> --
> This message was sent by Atlassian JIRA
> (v6.3.4#6332)
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 6.1.0

2016-06-14 Thread Erick Erickson
+1

On Tue, Jun 14, 2016 at 7:24 AM, David Smiley  wrote:
> +1
>
> On Tue, Jun 14, 2016 at 4:55 AM Jan Høydahl  wrote:
>>
>>  - https://wiki.apache.org/solr/ReleaseNote61
>>
>>
>> The Solr lead-text in the announcement says:
>>
>> Solr is the popular, blazing fast, open source NoSQL search platform from
>> the Apache Lucene project. Its major features include powerful full-text
>> search, hit highlighting, faceted search, dynamic clustering, database
>> integration, rich document (e.g., Word, PDF) handling, and geospatial
>> search. Solr is highly scalable, providing fault tolerant distributed search
>> and indexing, and powers the search and navigation features of many of the
>> world's largest internet sites.
>>
>>
>> It may be worth considering flagging some of the newer features such as
>> Parallel SQL, JDBC, CDCR or Security -- perhaps in place of some more
>> obvious features like clustering or highlighting?
>>
>> --
>> Jan Høydahl, search solution architect
>> Cominvent AS - www.cominvent.com
>
> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6171) Make lucene completely write-once

2016-06-14 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15329601#comment-15329601
 ] 

Michael McCandless commented on LUCENE-6171:


Thanks [~rcmuir] ... I think we have made progress since then.  I'll clean up 
the patch and make sure tests pass and push so we can get Jenkins chewing on it.

> Make lucene completely write-once
> -
>
> Key: LUCENE-6171
> URL: https://issues.apache.org/jira/browse/LUCENE-6171
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>Assignee: Michael McCandless
> Attachments: LUCENE-6171.patch
>
>
> Today, lucene is mostly write-once, but not always, and these are just very 
> exceptional cases. 
> This is an invitation for exceptional bugs: (and we have occasional test 
> failures when doing "no-wait close" because of this). 
> I would prefer it if we didn't try to delete files before we open them for 
> write, and if we opened them with the CREATE_NEW option by default to throw 
> an exception, if the file already exists.
> The trickier parts of the change are going to be IndexFileDeleter and 
> exceptions on merge / CFS construction logic.
> Overall for IndexFileDeleter I think the least invasive option might be to 
> only delete files older than the current commit point? This will ensure that 
> inflateGens() always avoids trying to overwrite any files that were from an 
> aborted segment. 
> For CFS construction/exceptions on merge, we really need to remove the custom 
> "sniping" of index files there and let only IndexFileDeleter delete files. My 
> previous failed approach involved always consistently using 
> TrackingDirectoryWrapper, but it failed, and only in backwards compatibility 
> tests, because of LUCENE-6146 (but i could never figure that out). I am 
> hoping this time I will be successful :)
> Longer term we should think about more simplifications, progress has been 
> made on LUCENE-5987, but I think overall we still try to be a superhero for 
> exceptions on merge?
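
To illustrate the CREATE_NEW semantics being argued for (plain java.nio here, not 
Lucene's Directory API; the file name is made up):
{code}
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class CreateNewDemo {
  public static void main(String[] args) throws Exception {
    Path path = Paths.get("segments_demo");   // hypothetical file name
    // CREATE_NEW fails with FileAlreadyExistsException if the file already exists,
    // i.e. the "never silently overwrite" behavior described above.
    try (OutputStream out = Files.newOutputStream(path,
        StandardOpenOption.WRITE, StandardOpenOption.CREATE_NEW)) {
      out.write(new byte[] {0});
    }
  }
}
{code}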



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7330) Speed up conjunctions

2016-06-14 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-7330.
--
   Resolution: Fixed
Fix Version/s: 6.2
   master (7.0)

> Speed up conjunctions
> -
>
> Key: LUCENE-7330
> URL: https://issues.apache.org/jira/browse/LUCENE-7330
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-7330.patch
>
>
> I am digging into some performance regressions between 4.x and 5.x which seem 
> to be due to how we always run conjunctions with ConjunctionDISI now while 
> 4.x had FilteredQuery, which was optimized for the case that there are only 
> two clauses or that one of the clause supports random access. I'd like to 
> explore the former in this issue.
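
For context, the conjunctions in question are simply boolean queries whose 
clauses are all required; a minimal sketch with made-up field and terms:
{code}
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class ConjunctionExample {
  public static void main(String[] args) {
    // Two required clauses: at search time their iterators are intersected.
    Query conjunction = new BooleanQuery.Builder()
        .add(new TermQuery(new Term("body", "fast")), Occur.MUST)
        .add(new TermQuery(new Term("body", "search")), Occur.MUST)
        .build();
    System.out.println(conjunction);
  }
}
{code}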



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8096) Major faceting performance regressions

2016-06-14 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15329588#comment-15329588
 ] 

David Smiley commented on SOLR-8096:


Since facet.method=uif (SOLR-8466), and now that facet.method=enum works again 
(SOLR-9176), is there anything left to do here, or should it be closed?

> Major faceting performance regressions
> --
>
> Key: SOLR-8096
> URL: https://issues.apache.org/jira/browse/SOLR-8096
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.0, 5.1, 5.2, 5.3, 6.0
>Reporter: Yonik Seeley
>Priority: Critical
> Attachments: simple_facets.diff
>
>
> Use of the highly optimized faceting that Solr had for multi-valued fields 
> over relatively static indexes was removed as part of LUCENE-5666, causing 
> severe performance regressions.
> Here are some quick benchmarks to gauge the damage, on a 5M document index, 
> with each field having between 0 and 5 values per document.  *Higher numbers 
> represent worse 5x performance*.
> Solr 5.4_dev faceting time as a percent of Solr 4.10.3 faceting time  
> || || Percent of index being faceted ||
> ||num_unique_values|| 10% || 50% || 90% ||
> |10      | 351.17% | 1587.08% | 3057.28% |
> |100     | 158.10% | 203.61%  | 1421.93% |
> |1000    | 143.78% | 168.01%  | 1325.87% |
> |10000   | 137.98% | 175.31%  | 1233.97% |
> |100000  | 142.98% | 159.42%  | 1252.45% |
> |1000000 | 255.15% | 165.17%  | 1236.75% |
> For example, a field with 1000 unique values in the whole index, faceting 
> with 5x took 143% of the 4x time, when ~10% of the docs in the index were 
> faceted.
> One user who brought the performance problem to our attention: 
> http://markmail.org/message/ekmqh4ocbkwxv3we
> "faceting is unusable slow since upgrade to 5.3.0" (from 4.10.3)
> The disabling of the UnInvertedField algorithm was previously discovered in 
> SOLR-7190, but we didn't know just how bad the problem was at that time.
> edit: removed "secret" adverb by request



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 6.1.0

2016-06-14 Thread David Smiley
+1

On Tue, Jun 14, 2016 at 4:55 AM Jan Høydahl  wrote:

>  - https://wiki.apache.org/solr/ReleaseNote61
>
>
> The Solr lead-text in the announcement says:
>
> Solr is the popular, blazing fast, open source NoSQL search platform from
> the Apache Lucene project. Its major features include powerful full-text
> search, hit highlighting, faceted search, dynamic clustering, database
> integration, rich document (e.g., Word, PDF) handling, and geospatial
> search. Solr is highly scalable, providing fault tolerant distributed
> search and indexing, and powers the search and navigation features of many
> of the world's largest internet sites.
>
>
> It may be worth considering flagging some of the newer features such as
> Parallel SQL, JDBC, CDCR or Security -- perhaps in place of some more
> obvious features like clustering or highlighting?
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[jira] [Updated] (LUCENE-7335) IndexWriter.setCommitData should be late binding

2016-06-14 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-7335:
---
Attachment: LUCENE-7335.patch

Patch, just renaming the method to {{set/getLiveCommitData}} and taking 
{{Iterable}} ...

> IndexWriter.setCommitData should be late binding
> 
>
> Key: LUCENE-7335
> URL: https://issues.apache.org/jira/browse/LUCENE-7335
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-7335.patch
>
>
> Today, {{IndexWriter.setCommitData}} is early-binding: as soon as you call 
> it, it clones the provided map and later on when commit is called, it uses 
> that clone.
> But this makes it hard for some use cases where the app needs to record more 
> timely information based on when specifically the commit actually occurs.  
> E.g., with LUCENE-7302, it would be helpful to store the max completed 
> sequence number in the commit point: that would be a lower bound of 
> operations that were after the commit.
> I think the most minimal way to do this would be to upgrade the existing 
> method to take an {{Iterable}}, and document that 
> it's now late binding, i.e. IW will pull an {{Iterator}} from that when it's 
> time to write the segments file.
> Or we could also make an explicit interface that you pass (seems like 
> overkill), or maybe have a listener or something (or you subclass IW) that's 
> invoked when the commit is about to write the segments file, but that also 
> seems like overkill.
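
Assuming the attached patch lands as described (the method renamed to 
set/getLiveCommitData and taking an Iterable of map entries), usage could look 
roughly like this sketch; the sequence-number counter and key name are made up:
{code}
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

import org.apache.lucene.index.IndexWriter;

// Sketch only: assumes the renamed IndexWriter#setLiveCommitData(Iterable) from the patch.
class LiveCommitDataExample {
  static final AtomicLong maxCompletedSeqNo = new AtomicLong();  // hypothetical app-side counter

  static void install(IndexWriter writer) {
    writer.setLiveCommitData(() -> {
      // Evaluated only when the commit actually writes the segments file
      // (late binding), so the recorded value is current at commit time.
      Map<String, String> data = new HashMap<>();
      data.put("maxCompletedSeqNo", Long.toString(maxCompletedSeqNo.get()));
      return data.entrySet().iterator();
    });
  }
}
{code}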



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7276) Add an optional reason to the MatchNoDocsQuery

2016-06-14 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15329556#comment-15329556
 ] 

David Smiley commented on LUCENE-7276:
--

bq. I don't think we have ever, nor should we ever, make a guarantee that 
MatchNoDocsQuery.toString would somehow round-trip through a query parser back 
to itself, and so I think we are free to improve it here/now.

Yeah, +1

> Add an optional reason to the MatchNoDocsQuery
> --
>
> Key: LUCENE-7276
> URL: https://issues.apache.org/jira/browse/LUCENE-7276
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Reporter: Ferenczi Jim
>Priority: Minor
>  Labels: patch
> Attachments: LUCENE-7276.patch, LUCENE-7276.patch, LUCENE-7276.patch, 
> LUCENE-7276.patch, LUCENE-7276.patch
>
>
> It's sometimes difficult to debug a query that results in a MatchNoDocsQuery. 
> The MatchNoDocsQuery is always rewritten in an empty boolean query.
> This patch adds an optional reason and implements a weight in order to keep 
> track of the reason why the query did not match any document. The reason is 
> printed on toString and when an explanation for noMatch is asked.  
> For instance the query:
> new MatchNoDocsQuery("Field not found").toString()
> => 'MatchNoDocsQuery["field 'title' not found"]'



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7374) Backup/Restore should provide a param for specifying the directory implementation it should use

2016-06-14 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15329554#comment-15329554
 ] 

David Smiley commented on SOLR-7374:


bq. One comment: Shouldn't we make BackupRepository an abstract class rather 
than interface? We can only add default methods in Java 8, but branch6x is 
still Java 7 right?

No; we're all Java 8 now -- master & 6x.  You must be thinking of 5x which was 
Java 7.

> Backup/Restore should provide a param for specifying the directory 
> implementation it should use
> ---
>
> Key: SOLR-7374
> URL: https://issues.apache.org/jira/browse/SOLR-7374
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Assignee: Mark Miller
> Fix For: 5.2, 6.0
>
> Attachments: SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, 
> SOLR-7374.patch
>
>
> Currently when we create a backup we use SimpleFSDirectory to write the 
> backup indexes. Similarly during a restore we open the index using 
> FSDirectory.open . 
> We should provide a param called {{directoryImpl}} or {{type}} which will be 
> used to specify the Directory implementation to backup the index. 
> Likewise during a restore you would need to specify the directory impl which 
> was used during backup so that the index can be opened correctly.
> This param will address the problem that currently if a user is running Solr 
> on HDFS there is no way to use the backup/restore functionality as the 
> directory is hardcoded.
> With this one could be running Solr on a local FS but backup the index on 
> HDFS etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8048) bin/solr script should accept user name and password for basicauth

2016-06-14 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-8048:
-
Fix Version/s: (was: 5.5)
   6.2

> bin/solr script should accept user name and password for basicauth
> --
>
> Key: SOLR-8048
> URL: https://issues.apache.org/jira/browse/SOLR-8048
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: authentication, security
> Fix For: 6.2
>
>
> Should be able to add the following line in {{solr.in.sh}} to support basic auth 
> in the {{bin/solr}} script
> {code}
> SOLR_AUTHENTICATION_OPTS="-Dbasicauth=solr:SolrRocks"
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8048) bin/solr script should accept user name and password for basicauth

2016-06-14 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-8048:
-
Description: 
Should be able to add the following line in {{solr.in.sh}} to support basic auth 
in the {{bin/solr}} script
{code}
SOLR_AUTHENTICATION_OPTS="-Dbasicauth=solr:SolrRocks"
{code}

  was: It should be possible to pass the user name as a param, say {{-user 
solr:SolrRocks}}, or alternatively it should prompt for the user name and password


> bin/solr script should accept user name and password for basicauth
> --
>
> Key: SOLR-8048
> URL: https://issues.apache.org/jira/browse/SOLR-8048
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: authentication, security
> Fix For: 5.5
>
>
> Should be able to add the following line in {{solr.in.sh}} to support basic auth 
> in the {{bin/solr}} script
> {code}
> SOLR_AUTHENTICATION_OPTS="-Dbasicauth=solr:SolrRocks"
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release Lucene/Solr 6.1.0 RC1

2016-06-14 Thread David Smiley
+1 SUCCESS!  SUCCESS! [0:50:54.900220]

On Tue, Jun 14, 2016 at 7:32 AM Martijn v Groningen <
martijn.v.gronin...@gmail.com> wrote:

> +1 SUCCESS! [0:40:22.702419]
>
> On 14 June 2016 at 02:39, Steve Rowe  wrote:
>
>> I’ve committed fixes for all three problems.
>>
>> --
>> Steve
>> www.lucidworks.com
>>
>> > On Jun 13, 2016, at 2:46 PM, Steve Rowe  wrote:
>> >
>> > Smoke tester was happy: SUCCESS! [0:23:40.900240]
>> >
>> > Except for the below-described minor issues: changes, docs and javadocs
>> look good:
>> >
>> > * Broken description section links from documentation to javadocs <
>> https://issues.apache.org/jira/browse/LUCENE-7338>
>> > * Solr’s CHANGES.txt is missing a “Versions of Major Components”
>> section.
>> > * Solr’s Changes.html has a section "Upgrading from Solr any prior
>> release” that is not formatted properly (the hyphens are put into a bullet
>> item below)
>> >
>> > +0 to release.  I’ll work on the above and backport to the 6.1 branch,
>> in case there is another RC.
>> >
>> > --
>> > Steve
>> > www.lucidworks.com
>> >
>> >> On Jun 13, 2016, at 5:15 AM, Adrien Grand  wrote:
>> >>
>> >> Please vote for release candidate 1 for Lucene/Solr 6.1.0
>> >>
>> >>
>> >> The artifacts can be downloaded from:
>> >>
>> >>
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.1.0-RC1-rev4726c5b2d2efa9ba160b608d46a977d0a6b83f94/
>> >>
>> >> You can run the smoke tester directly with this command:
>> >>
>> >>
>> >> python3 -u dev-tools/scripts/smokeTestRelease.py \
>> >>
>> >>
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.1.0-RC1-rev4726c5b2d2efa9ba160b608d46a977d0a6b83f94/
>> >> Here is my +1.
>> >> SUCCESS! [0:36:57.750669]
>> >
>>
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>>
>
>
> --
> Met vriendelijke groet,
>
> Martijn van Groningen
>
-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[jira] [Commented] (SOLR-7374) Backup/Restore should provide a param for specifying the directory implementation it should use

2016-06-14 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15329512#comment-15329512
 ] 

Mark Miller commented on SOLR-7374:
---

Just found the following fail. It does not seem to be reproducible by seed; it 
must be timing or something.

   [junit4] ERROR   0.72s J4  | TestReplicationHandlerBackup.doTestBackup <<<
   [junit4]> Throwable #1: java.util.NoSuchElementException
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([2234E0A12F864773:63BFC0C40838B43C]:0)
   [junit4]>at 
sun.nio.fs.UnixDirectoryStream$UnixDirectoryIterator.next(UnixDirectoryStream.java:215)
   [junit4]>at 
sun.nio.fs.UnixDirectoryStream$UnixDirectoryIterator.next(UnixDirectoryStream.java:132)
   [junit4]>at 
org.apache.solr.handler.TestReplicationHandlerBackup.doTestBackup(TestReplicationHandlerBackup.java:174)
   [junit4]>at java.lang.Thread.run(Thread.java:745)
   [junit4]   2> 563888 INFO  
(SUITE-TestReplicationHandlerBackup-seed#[2234E0A12F864773]-worker) [] 
o.a.s.SolrTestCaseJ4 ###deleteCore

   NOTE: reproduce with: ant test  -Dtestcase=TestReplicationHandlerBackup 
-Dtests.method=doTestBackup -Dtests.seed=2234E0A12F864773 -Dtests.slow=true 
-Dtests.locale=sk -Dtests.timezone=Asia/Calcutta -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

> Backup/Restore should provide a param for specifying the directory 
> implementation it should use
> ---
>
> Key: SOLR-7374
> URL: https://issues.apache.org/jira/browse/SOLR-7374
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Assignee: Mark Miller
> Fix For: 5.2, 6.0
>
> Attachments: SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, 
> SOLR-7374.patch
>
>
> Currently when we create a backup we use SimpleFSDirectory to write the 
> backup indexes. Similarly during a restore we open the index using 
> FSDirectory.open . 
> We should provide a param called {{directoryImpl}} or {{type}} which will be 
> used to specify the Directory implementation to backup the index. 
> Likewise during a restore you would need to specify the directory impl which 
> was used during backup so that the index can be opened correctly.
> This param will address the problem that currently if a user is running Solr 
> on HDFS there is no way to use the backup/restore functionality as the 
> directory is hardcoded.
> With this one could be running Solr on a local FS but backup the index on 
> HDFS etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8648) Support selective clearing up of stored async collection API responses

2016-06-14 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15329497#comment-15329497
 ] 

Varun Thacker commented on SOLR-8648:
-

Hi Anshum,

In 6.x+ we no longer support {{action=REQUESTSTATUS&requestid=-1}}, right?

I saw two references of that usage 
https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-RequestStatus
 and 
https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-AsynchronousCalls
 . We can safely remove that right? 

> Support selective clearing up of stored async collection API responses
> --
>
> Key: SOLR-8648
> URL: https://issues.apache.org/jira/browse/SOLR-8648
> Project: Solr
>  Issue Type: New Feature
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Fix For: 5.5, 6.0
>
> Attachments: SOLR-8648.patch, SOLR-8648.patch, SOLR-8648.patch, 
> SOLR-8648.patch
>
>
> The only way to clear up stored collection API responses right now is by 
> sending in '-1' as the request id in the REQUESTSTATUS call. It makes a lot 
> of sense to support selective deletion of stored responses so the ids could 
> be reused.
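
For reference, with the selective clearing this issue added, deleting a single 
stored response goes through the Collections API DELETESTATUS action; a minimal 
SolrJ sketch (the base URL and request id are made up):
{code}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.GenericSolrRequest;
import org.apache.solr.common.params.ModifiableSolrParams;

public class DeleteStatusExample {
  public static void main(String[] args) throws Exception {
    try (SolrClient client = new HttpSolrClient("http://localhost:8983/solr")) {
      ModifiableSolrParams params = new ModifiableSolrParams();
      params.set("action", "DELETESTATUS");
      params.set("requestid", "my-async-id");   // hypothetical stored async request id
      // Equivalent to GET /admin/collections?action=DELETESTATUS&requestid=my-async-id
      System.out.println(client.request(
          new GenericSolrRequest(SolrRequest.METHOD.GET, "/admin/collections", params)));
    }
  }
}
{code}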



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9208) ConcurrentModificationException on SolrCore.close() resulting in abnormal CPU consumption

2016-06-14 Thread Lev Priima (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15329478#comment-15329478
 ] 

Lev Priima edited comment on SOLR-9208 at 6/14/16 1:18 PM:
---

"thread-safe" init and modification example:

https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/core/SolrCore.java#L1272


was (Author: lpriima):
"Thread-safe" init:

https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/core/SolrCore.java#L1272
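
To make the failure mode concrete, here is a generic illustration (not Solr code) 
of the unsynchronized init/modification pattern pointed at above: iterating an 
ArrayList while another thread mutates it can throw 
ConcurrentModificationException, and a copy-on-write list (or synchronization) 
avoids it.
{code}
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class CmeDemo {
  public static void main(String[] args) throws Exception {
    List<Runnable> hooks = new ArrayList<>();              // unsynchronized, like the linked pattern
    for (int i = 0; i < 100000; i++) hooks.add(() -> {});
    Thread adder = new Thread(() -> hooks.add(() -> {}));  // concurrent add while we iterate below
    try {
      adder.start();
      for (Runnable hook : hooks) {                        // may fail with ConcurrentModificationException
        hook.run();
      }
    } catch (ConcurrentModificationException e) {
      System.out.println("CME, as in the reported stack trace: " + e);
    }
    adder.join();
    // One generic way to avoid it (illustrative only, not necessarily the right fix here):
    List<Runnable> safeHooks = new CopyOnWriteArrayList<>(hooks);
    safeHooks.forEach(Runnable::run);
  }
}
{code}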

> ConcurrentModificationException on SolrCore.close() resulting in abnormal CPU 
> consumption
> -
>
> Key: SOLR-9208
> URL: https://issues.apache.org/jira/browse/SOLR-9208
> Project: Solr
>  Issue Type: Bug
>  Components: multicore, Server
>Affects Versions: 6.0
>Reporter: Fabrizio Fortino
>
> In our use case we swap two cores and close the old one. We started seeing 
> the below error from time to time (it's completely random, we are unable to 
> reproduce it). Moreover we have noticed that when this Exception is thrown 
> the CPU consumption goes pretty high (80-100%).
> Error Message:
> java.util.ConcurrentModificationException: 
> java.util.ConcurrentModificationException
> StackTrace:
> java.util.ArrayList$Itr.checkForComodification (ArrayList.java:901)
> java.util.ArrayList$Itr.next (ArrayList.java:851)
> org.apache.solr.core.SolrCore.close (SolrCore.java:1134)
> org.apache.solr.servlet.HttpSolrCall.destroy (HttpSolrCall.java:513)
> org.apache.solr.servlet.SolrDispatchFilter.doFilter 
> (SolrDispatchFilter.java:242)
> org.apache.solr.servlet.SolrDispatchFilter.doFilter 
> (SolrDispatchFilter.java:184)
> …ipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> org.eclipse.jetty.servlet.ServletHandler.doHandle (ServletHandler.java:581)
> org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:143)
> org.eclipse.jetty.security.SecurityHandler.handle (SecurityHandler.java:548)
> …g.eclipse.jetty.server.session.SessionHandler.doHandle 
> (SessionHandler.java:226)
> …g.eclipse.jetty.server.handler.ContextHandler.doHandle 
> (ContextHandler.java:1160)
> org.eclipse.jetty.servlet.ServletHandler.doScope (ServletHandler.java:511)
> org.eclipse.jetty.server.session.SessionHandler.doScope 
> (SessionHandler.java:185)
> org.eclipse.jetty.server.handler.ContextHandler.doScope 
> (ContextHandler.java:1092)
> org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:141)
> …e.jetty.server.handler.ContextHandlerCollection.handle 
> (ContextHandlerCollection.java:213)
> ….eclipse.jetty.server.handler.HandlerCollection.handle 
> (HandlerCollection.java:119)
> org.eclipse.jetty.server.handler.HandlerWrapper.handle 
> (HandlerWrapper.java:134)
> org.eclipse.jetty.server.Server.handle (Server.java:518)
> org.eclipse.jetty.server.HttpChannel.handle (HttpChannel.java:308)
> org.eclipse.jetty.server.HttpConnection.onFillable (HttpConnection.java:244)
> …pse.jetty.io.AbstractConnection$ReadCallback.succeeded 
> (AbstractConnection.java:273)
> org.eclipse.jetty.io.FillInterest.fillable (FillInterest.java:95)
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run 
> (SelectChannelEndPoint.java:93)
> …il.thread.strategy.ExecuteProduceConsume.produceAndRun 
> (ExecuteProduceConsume.java:246)
> …e.jetty.util.thread.strategy.ExecuteProduceConsume.run 
> (ExecuteProduceConsume.java:156)
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob 
> (QueuedThreadPool.java:654)
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run 
> (QueuedThreadPool.java:572)
> java.lang.Thread.run (Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9208) ConcurrentModificationException on SolrCore.close() resulting in abnormal CPU consumption

2016-06-14 Thread Lev Priima (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15329478#comment-15329478
 ] 

Lev Priima commented on SOLR-9208:
--

"Thread-safe" init:

https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/core/SolrCore.java#L1272

> ConcurrentModificationException on SolrCore.close() resulting in abnormal CPU 
> consumption
> -
>
> Key: SOLR-9208
> URL: https://issues.apache.org/jira/browse/SOLR-9208
> Project: Solr
>  Issue Type: Bug
>  Components: multicore, Server
>Affects Versions: 6.0
>Reporter: Fabrizio Fortino
>
> In our use case we swap two cores and close the old one. We started seeing 
> the below error from time to time (it's completely random, we are unable to 
> reproduce it). Moreover we have noticed that when this Exception is thrown 
> the CPU consumption goes pretty high (80-100%).
> Error Message:
> java.util.ConcurrentModificationException: 
> java.util.ConcurrentModificationException
> StackTrace:
> java.util.ArrayList$Itr.checkForComodification (ArrayList.java:901)
> java.util.ArrayList$Itr.next (ArrayList.java:851)
> org.apache.solr.core.SolrCore.close (SolrCore.java:1134)
> org.apache.solr.servlet.HttpSolrCall.destroy (HttpSolrCall.java:513)
> org.apache.solr.servlet.SolrDispatchFilter.doFilter 
> (SolrDispatchFilter.java:242)
> org.apache.solr.servlet.SolrDispatchFilter.doFilter 
> (SolrDispatchFilter.java:184)
> …ipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> org.eclipse.jetty.servlet.ServletHandler.doHandle (ServletHandler.java:581)
> org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:143)
> org.eclipse.jetty.security.SecurityHandler.handle (SecurityHandler.java:548)
> …g.eclipse.jetty.server.session.SessionHandler.doHandle 
> (SessionHandler.java:226)
> …g.eclipse.jetty.server.handler.ContextHandler.doHandle 
> (ContextHandler.java:1160)
> org.eclipse.jetty.servlet.ServletHandler.doScope (ServletHandler.java:511)
> org.eclipse.jetty.server.session.SessionHandler.doScope 
> (SessionHandler.java:185)
> org.eclipse.jetty.server.handler.ContextHandler.doScope 
> (ContextHandler.java:1092)
> org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:141)
> …e.jetty.server.handler.ContextHandlerCollection.handle 
> (ContextHandlerCollection.java:213)
> ….eclipse.jetty.server.handler.HandlerCollection.handle 
> (HandlerCollection.java:119)
> org.eclipse.jetty.server.handler.HandlerWrapper.handle 
> (HandlerWrapper.java:134)
> org.eclipse.jetty.server.Server.handle (Server.java:518)
> org.eclipse.jetty.server.HttpChannel.handle (HttpChannel.java:308)
> org.eclipse.jetty.server.HttpConnection.onFillable (HttpConnection.java:244)
> …pse.jetty.io.AbstractConnection$ReadCallback.succeeded 
> (AbstractConnection.java:273)
> org.eclipse.jetty.io.FillInterest.fillable (FillInterest.java:95)
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run 
> (SelectChannelEndPoint.java:93)
> …il.thread.strategy.ExecuteProduceConsume.produceAndRun 
> (ExecuteProduceConsume.java:246)
> …e.jetty.util.thread.strategy.ExecuteProduceConsume.run 
> (ExecuteProduceConsume.java:156)
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob 
> (QueuedThreadPool.java:654)
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run 
> (QueuedThreadPool.java:572)
> java.lang.Thread.run (Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8912) SolrJ UpdateRequest does not copy Basic Authentication Credentials

2016-06-14 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15329459#comment-15329459
 ] 

Shawn Heisey commented on SOLR-8912:


This is a duplicate issue.  You need to look at the issue that it duplicates to 
see the fixed versions.

The duplicate links are in place.  If you visit this issue on the Jira website, 
the link is just under the description and above all the comments.

> SolrJ UpdateRequest does not copy Basic Authentication Credentials
> --
>
> Key: SOLR-8912
> URL: https://issues.apache.org/jira/browse/SOLR-8912
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java
>Affects Versions: 5.4.1
> Environment: all
>Reporter: harcor
>
> SolrJ UpdateRequest.java creates "new" instances of itself but does not copy 
> credentials.
> Solution is to add two lines of code to UpdateRequest.java in the getRoutes 
> method.
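
To make the report concrete, here is a self-contained SolrJ sketch of the 
credentials that get lost; the user/password values are the sample ones from the 
SOLR-8048 description in this digest, not real credentials:
{code}
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.common.SolrInputDocument;

public class BasicAuthUpdateExample {
  public static void main(String[] args) {
    UpdateRequest req = new UpdateRequest();
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "1");
    req.add(doc);
    // Credentials set on the original request...
    req.setBasicAuthCredentials("solr", "SolrRocks");
    // ...are what the report says get dropped on the per-route copies that
    // getRoutes() creates when CloudSolrClient splits the update per shard.
    System.out.println(req.getBasicAuthUser());
  }
}
{code}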



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7337) MultiTermQuery are sometimes rewritten into an empty boolean query

2016-06-14 Thread Ferenczi Jim (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15329457#comment-15329457
 ] 

Ferenczi Jim commented on LUCENE-7337:
--

??A simple fix would be to replace the empty boolean query produced by the 
multi term query with a MatchNoDocsQuery but I am not sure that it's the best 
way to fix.??

I am not sure of this statement anymore. Conceptually a MatchNoDocsQuery and a 
BooleanQuery with no clauses are similar. However, what I proposed assumed that 
the normalization value of the MatchNoDocsQuery is 1. I think that would cause 
confusion, since this value is supposed to reflect the maximum score the query 
can get (which is 0 in this case). Currently a boolean query or a disjunction 
query with no clauses returns 0 for the normalization. I think that is the 
expected behavior, even though it breaks the distributed case as explained in my 
previous comment.
For empty queries that are the result of an expansion (multi term query), maybe 
we could add yet another special query, something like MatchNoExpansionQuery, 
that would use a ConstantScoreWeight? I am proposing this because it would make 
the distinction between a query that matches no documents no matter what the 
context is and a query that matches no documents because of the context (useful 
for the distributed case).
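
As a small illustration of the two "match nothing" shapes being compared, plus 
the boosted fuzzy clause from the description (plain Lucene API; the field and 
terms are taken from the example in the description):
{code}
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.BoostQuery;
import org.apache.lucene.search.FuzzyQuery;
import org.apache.lucene.search.MatchNoDocsQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class NoMatchShapes {
  public static void main(String[] args) {
    // Conceptually both match nothing, but they are treated differently during
    // query normalization, which is the discrepancy discussed above.
    Query emptyBoolean = new BooleanQuery.Builder().build(); // what a fuzzy query with no expansions rewrites to
    Query matchNoDocs = new MatchNoDocsQuery();

    // The query from the description: ((title:bar~1)^100 text:bar)
    Query q = new BooleanQuery.Builder()
        .add(new BoostQuery(new FuzzyQuery(new Term("title", "bar"), 1), 100f), Occur.SHOULD)
        .add(new TermQuery(new Term("text", "bar")), Occur.SHOULD)
        .build();
    System.out.println(emptyBoolean + " / " + matchNoDocs + " / " + q);
  }
}
{code}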

> MultiTermQuery are sometimes rewritten into an empty boolean query
> --
>
> Key: LUCENE-7337
> URL: https://issues.apache.org/jira/browse/LUCENE-7337
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Reporter: Ferenczi Jim
>Priority: Minor
>
> MultiTermQuery are sometimes rewritten to an empty boolean query (depending 
> on the rewrite method), it can happen when no expansions are found on a fuzzy 
> query for instance.
> It can be problematic when the multi term query is boosted. 
> For instance consider the following query:
> `((title:bar~1)^100 text:bar)`
> This is a boolean query with two optional clauses. The first one is a fuzzy 
> query on the field title with a boost of 100. 
> If there is no expansion for "title:bar~1" the query is rewritten into:
> `(()^100 text:bar)`
> ... and when expansions are found:
> `((title:bars | title:bar)^100 text:bar)`
> The scoring of those two queries will differ because the normalization factor 
> and the norm for the first query will be equal to 1 (the boost is ignored 
> because the empty boolean query is not taken into account for the computation 
> of the normalization factor) whereas the second query will have a 
> normalization factor of 10,000 (100*100) and a norm equal to 0.01. 
> This kind of discrepancy can happen in a single index because the expansions 
> for the fuzzy query are done at the segment level. It can also happen when 
> multiple indices are requested (Solr/ElasticSearch case).
> A simple fix would be to replace the empty boolean query produced by the 
> multi term query with a MatchNoDocsQuery but I am not sure that it's the best 
> way to fix. WDYT ?
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9208) ConcurrentModificationException on SolrCore.close() resulting in abnormal CPU consumption

2016-06-14 Thread Fabrizio Fortino (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabrizio Fortino updated SOLR-9208:
---
Description: 
In our use case we swap two cores and close the old one. We started seeing the 
below error from time to time (it's completely random, we are unable to 
reproduce it). Moreover we have noticed that when this Exception is thrown the 
CPU consumption goes pretty high (80-100%).

Error Message:
java.util.ConcurrentModificationException: 
java.util.ConcurrentModificationException

StackTrace:
java.util.ArrayList$Itr.checkForComodification (ArrayList.java:901)
java.util.ArrayList$Itr.next (ArrayList.java:851)
org.apache.solr.core.SolrCore.close (SolrCore.java:1134)
org.apache.solr.servlet.HttpSolrCall.destroy (HttpSolrCall.java:513)
org.apache.solr.servlet.SolrDispatchFilter.doFilter 
(SolrDispatchFilter.java:242)
org.apache.solr.servlet.SolrDispatchFilter.doFilter 
(SolrDispatchFilter.java:184)
…ipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
org.eclipse.jetty.servlet.ServletHandler.doHandle (ServletHandler.java:581)
org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:143)
org.eclipse.jetty.security.SecurityHandler.handle (SecurityHandler.java:548)
…g.eclipse.jetty.server.session.SessionHandler.doHandle 
(SessionHandler.java:226)
…g.eclipse.jetty.server.handler.ContextHandler.doHandle 
(ContextHandler.java:1160)
org.eclipse.jetty.servlet.ServletHandler.doScope (ServletHandler.java:511)
org.eclipse.jetty.server.session.SessionHandler.doScope 
(SessionHandler.java:185)
org.eclipse.jetty.server.handler.ContextHandler.doScope 
(ContextHandler.java:1092)
org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:141)
…e.jetty.server.handler.ContextHandlerCollection.handle 
(ContextHandlerCollection.java:213)
….eclipse.jetty.server.handler.HandlerCollection.handle 
(HandlerCollection.java:119)
org.eclipse.jetty.server.handler.HandlerWrapper.handle (HandlerWrapper.java:134)
org.eclipse.jetty.server.Server.handle (Server.java:518)
org.eclipse.jetty.server.HttpChannel.handle (HttpChannel.java:308)
org.eclipse.jetty.server.HttpConnection.onFillable (HttpConnection.java:244)
…pse.jetty.io.AbstractConnection$ReadCallback.succeeded 
(AbstractConnection.java:273)
org.eclipse.jetty.io.FillInterest.fillable (FillInterest.java:95)
org.eclipse.jetty.io.SelectChannelEndPoint$2.run (SelectChannelEndPoint.java:93)
…il.thread.strategy.ExecuteProduceConsume.produceAndRun 
(ExecuteProduceConsume.java:246)
…e.jetty.util.thread.strategy.ExecuteProduceConsume.run 
(ExecuteProduceConsume.java:156)
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob 
(QueuedThreadPool.java:654)
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run (QueuedThreadPool.java:572)
java.lang.Thread.run (Thread.java:745)

  was:
In out use case we swap two cores and close the old one. We started seeing the 
below error from time to time (it's completely random, we are unable to 
reproduce it). Moreover we have noticed that when this Exception is thrown the 
CPU consumption goes pretty high (80-100%).

Error Message:
java.util.ConcurrentModificationException: 
java.util.ConcurrentModificationException

StackTrace:
java.util.ArrayList$Itr.checkForComodification (ArrayList.java:901)
java.util.ArrayList$Itr.next (ArrayList.java:851)
org.apache.solr.core.SolrCore.close (SolrCore.java:1134)
org.apache.solr.servlet.HttpSolrCall.destroy (HttpSolrCall.java:513)
org.apache.solr.servlet.SolrDispatchFilter.doFilter 
(SolrDispatchFilter.java:242)
org.apache.solr.servlet.SolrDispatchFilter.doFilter 
(SolrDispatchFilter.java:184)
…ipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
org.eclipse.jetty.servlet.ServletHandler.doHandle (ServletHandler.java:581)
org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:143)
org.eclipse.jetty.security.SecurityHandler.handle (SecurityHandler.java:548)
…g.eclipse.jetty.server.session.SessionHandler.doHandle 
(SessionHandler.java:226)
…g.eclipse.jetty.server.handler.ContextHandler.doHandle 
(ContextHandler.java:1160)
org.eclipse.jetty.servlet.ServletHandler.doScope (ServletHandler.java:511)
org.eclipse.jetty.server.session.SessionHandler.doScope 
(SessionHandler.java:185)
org.eclipse.jetty.server.handler.ContextHandler.doScope 
(ContextHandler.java:1092)
org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:141)
…e.jetty.server.handler.ContextHandlerCollection.handle 
(ContextHandlerCollection.java:213)
….eclipse.jetty.server.handler.HandlerCollection.handle 
(HandlerCollection.java:119)
org.eclipse.jetty.server.handler.HandlerWrapper.handle (HandlerWrapper.java:134)
org.eclipse.jetty.server.Server.handle (Server.java:518)
org.eclipse.jetty.server.HttpChannel.handle (HttpChannel.java:308)
org.eclipse.jetty.server.HttpConnection.onFillable (HttpConnection.java:244)

[jira] [Created] (SOLR-9208) ConcurrentModificationException on SolrCore.close() resulting in abnormal CPU consumption

2016-06-14 Thread Fabrizio Fortino (JIRA)
Fabrizio Fortino created SOLR-9208:
--

 Summary: ConcurrentModificationException on SolrCore.close() 
resulting in abnormal CPU consumption
 Key: SOLR-9208
 URL: https://issues.apache.org/jira/browse/SOLR-9208
 Project: Solr
  Issue Type: Bug
  Components: multicore, Server
Affects Versions: 6.0
Reporter: Fabrizio Fortino


In our use case we swap two cores and close the old one. We started seeing the 
below error from time to time (it is completely random; we are unable to 
reproduce it). Moreover, we have noticed that when this exception is thrown the 
CPU consumption goes pretty high (80-100%).

Error Message:
java.util.ConcurrentModificationException: 
java.util.ConcurrentModificationException

StackTrace:
java.util.ArrayList$Itr.checkForComodification (ArrayList.java:901)
java.util.ArrayList$Itr.next (ArrayList.java:851)
org.apache.solr.core.SolrCore.close (SolrCore.java:1134)
org.apache.solr.servlet.HttpSolrCall.destroy (HttpSolrCall.java:513)
org.apache.solr.servlet.SolrDispatchFilter.doFilter 
(SolrDispatchFilter.java:242)
org.apache.solr.servlet.SolrDispatchFilter.doFilter 
(SolrDispatchFilter.java:184)
…ipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
org.eclipse.jetty.servlet.ServletHandler.doHandle (ServletHandler.java:581)
org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:143)
org.eclipse.jetty.security.SecurityHandler.handle (SecurityHandler.java:548)
…g.eclipse.jetty.server.session.SessionHandler.doHandle 
(SessionHandler.java:226)
…g.eclipse.jetty.server.handler.ContextHandler.doHandle 
(ContextHandler.java:1160)
org.eclipse.jetty.servlet.ServletHandler.doScope (ServletHandler.java:511)
org.eclipse.jetty.server.session.SessionHandler.doScope 
(SessionHandler.java:185)
org.eclipse.jetty.server.handler.ContextHandler.doScope 
(ContextHandler.java:1092)
org.eclipse.jetty.server.handler.ScopedHandler.handle (ScopedHandler.java:141)
…e.jetty.server.handler.ContextHandlerCollection.handle 
(ContextHandlerCollection.java:213)
….eclipse.jetty.server.handler.HandlerCollection.handle 
(HandlerCollection.java:119)
org.eclipse.jetty.server.handler.HandlerWrapper.handle (HandlerWrapper.java:134)
org.eclipse.jetty.server.Server.handle (Server.java:518)
org.eclipse.jetty.server.HttpChannel.handle (HttpChannel.java:308)
org.eclipse.jetty.server.HttpConnection.onFillable (HttpConnection.java:244)
…pse.jetty.io.AbstractConnection$ReadCallback.succeeded 
(AbstractConnection.java:273)
org.eclipse.jetty.io.FillInterest.fillable (FillInterest.java:95)
org.eclipse.jetty.io.SelectChannelEndPoint$2.run (SelectChannelEndPoint.java:93)
…il.thread.strategy.ExecuteProduceConsume.produceAndRun 
(ExecuteProduceConsume.java:246)
…e.jetty.util.thread.strategy.ExecuteProduceConsume.run 
(ExecuteProduceConsume.java:156)
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob 
(QueuedThreadPool.java:654)
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run (QueuedThreadPool.java:572)
java.lang.Thread.run (Thread.java:745)
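
For context on the failure mode: a ConcurrentModificationException like the one 
above is what a plain ArrayList throws when one thread iterates it (for example, 
a list of close hooks being run during close) while another thread is still 
adding to it. The sketch below is only a minimal illustration of that race and 
of the usual remedies; it is not the actual SolrCore code.

{code}
// Minimal illustration only -- not the actual SolrCore close-hook code.
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;

public class CloseHookCmeDemo {
  // A plain ArrayList is not safe to iterate while another thread adds to it.
  // A CopyOnWriteArrayList (or synchronizing both add and iteration on one lock)
  // avoids the ConcurrentModificationException.
  private static final List<Runnable> closeHooks = new ArrayList<>();

  public static void main(String[] args) throws Exception {
    for (int i = 0; i < 100_000; i++) {
      closeHooks.add(() -> { });
    }
    Thread registrar = new Thread(() -> {
      for (int i = 0; i < 100_000; i++) {
        closeHooks.add(() -> { });        // concurrent structural modification
      }
    });
    registrar.start();
    try {
      for (Runnable hook : closeHooks) {  // iteration will typically fail fast
        hook.run();
      }
    } catch (ConcurrentModificationException e) {
      System.out.println("Reproduced: " + e);
    }
    registrar.join();
  }
}
{code}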






[JENKINS] Lucene-Solr-Tests-master - Build # 1213 - Still Failing

2016-06-14 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1213/

1 tests failed.
FAILED:  org.apache.solr.update.AutoCommitTest.testCommitWithin

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([E430797AF7E409C:B49168EF2C50AE89]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:780)
at 
org.apache.solr.update.AutoCommitTest.testCommitWithin(AutoCommitTest.java:325)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=1]
xml response was: 

00


request was:q=id:529=standard=0=20=2.2
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:773)
... 40 more




Build Log:
[...truncated 11592 lines...]
   [junit4] Suite: org.apache.solr.update.AutoCommitTest
   [junit4]   2> Creating dataDir: 

[jira] [Commented] (LUCENE-6171) Make lucene completely write-once

2016-06-14 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15329360#comment-15329360
 ] 

Robert Muir commented on LUCENE-6171:
-

+1. This part of the change speaks volumes:

{noformat}
 public FSIndexOutput(String name) throws IOException {
-  this(name, StandardOpenOption.CREATE, 
StandardOpenOption.TRUNCATE_EXISTING, StandardOpenOption.WRITE);
+  this(name, StandardOpenOption.WRITE, StandardOpenOption.CREATE_NEW);
 }
{noformat}

Some of the issues I mentioned may have been resolved already? This issue is 
quite old, and I think many of these problems have been addressed since then.
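
For readers unfamiliar with the open options in that diff: CREATE_NEW refuses to 
open a path that already exists, while CREATE plus TRUNCATE_EXISTING silently 
overwrites it. A small JDK-only sketch of the behavior the change relies on 
(not Lucene code):

{code}
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class CreateNewDemo {
  public static void main(String[] args) throws IOException {
    Path path = Files.createTempFile("write-once", ".bin");   // file now exists

    // Old style: CREATE + TRUNCATE_EXISTING happily clobbers the existing file.
    try (FileChannel ch = FileChannel.open(path,
        StandardOpenOption.CREATE, StandardOpenOption.TRUNCATE_EXISTING,
        StandardOpenOption.WRITE)) {
      // existing contents are gone
    }

    // Write-once style: CREATE_NEW throws if the file is already there.
    try (FileChannel ch = FileChannel.open(path,
        StandardOpenOption.WRITE, StandardOpenOption.CREATE_NEW)) {
      // never reached
    } catch (FileAlreadyExistsException expected) {
      System.out.println("Refused to overwrite: " + expected.getFile());
    }
  }
}
{code}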


> Make lucene completely write-once
> -
>
> Key: LUCENE-6171
> URL: https://issues.apache.org/jira/browse/LUCENE-6171
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>Assignee: Michael McCandless
> Attachments: LUCENE-6171.patch
>
>
> Today, lucene is mostly write-once, but not always, and these are just very 
> exceptional cases. 
> This is an invitation for exceptional bugs: (and we have occasional test 
> failures when doing "no-wait close" because of this). 
> I would prefer it if we didn't try to delete files before we open them for 
> write, and if we opened them with the CREATE_NEW option by default to throw 
> an exception, if the file already exists.
> The trickier parts of the change are going to be IndexFileDeleter and 
> exceptions on merge / CFS construction logic.
> Overall for IndexFileDeleter I think the least invasive option might be to 
> only delete files older than the current commit point? This will ensure that 
> inflateGens() always avoids trying to overwrite any files that were from an 
> aborted segment. 
> For CFS construction/exceptions on merge, we really need to remove the custom 
> "sniping" of index files there and let only IndexFileDeleter delete files. My 
> previous failed approach involved always consistently using 
> TrackingDirectoryWrapper, but it failed, and only in backwards compatibility 
> tests, because of LUCENE-6146 (but i could never figure that out). I am 
> hoping this time I will be successful :)
> Longer term we should think about more simplifications, progress has been 
> made on LUCENE-5987, but I think overall we still try to be a superhero for 
> exceptions on merge?






Re: [VOTE] Release Lucene/Solr 6.1.0 RC1

2016-06-14 Thread Martijn v Groningen
+1 SUCCESS! [0:40:22.702419]

On 14 June 2016 at 02:39, Steve Rowe  wrote:

> I’ve committed fixes for all three problems.
>
> --
> Steve
> www.lucidworks.com
>
> > On Jun 13, 2016, at 2:46 PM, Steve Rowe  wrote:
> >
> > Smoke tester was happy: SUCCESS! [0:23:40.900240]
> >
> > Except for the below-described minor issues: changes, docs and javadocs
> look good:
> >
> > * Broken description section links from documentation to javadocs <
> https://issues.apache.org/jira/browse/LUCENE-7338>
> > * Solr’s CHANGES.txt is missing a “Versions of Major Components” section.
> > * Solr’s Changes.html has a section "Upgrading from Solr any prior
> release” that is not formatted properly (the hyphens are put into a bullet
> item below)
> >
> > +0 to release.  I’ll work on the above and backport to the 6.1 branch,
> in case there is another RC.
> >
> > --
> > Steve
> > www.lucidworks.com
> >
> >> On Jun 13, 2016, at 5:15 AM, Adrien Grand  wrote:
> >>
> >> Please vote for release candidate 1 for Lucene/Solr 6.1.0
> >>
> >>
> >> The artifacts can be downloaded from:
> >>
> >>
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.1.0-RC1-rev4726c5b2d2efa9ba160b608d46a977d0a6b83f94/
> >>
> >> You can run the smoke tester directly with this command:
> >>
> >>
> >> python3 -u dev-tools/scripts/smokeTestRelease.py \
> >>
> >>
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.1.0-RC1-rev4726c5b2d2efa9ba160b608d46a977d0a6b83f94/
> >> Here is my +1.
> >> SUCCESS! [0:36:57.750669]
> >
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


-- 
Met vriendelijke groet,

Martijn van Groningen


[jira] [Commented] (SOLR-7374) Backup/Restore should provide a param for specifying the directory implementation it should use

2016-06-14 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15329338#comment-15329338
 ] 

Mark Miller commented on SOLR-7374:
---

Thanks, I'm pretty much ready with this.

One comment: shouldn't we make BackupRepository an abstract class rather than an 
interface? We can only add default methods in Java 8, but branch_6x is still 
Java 7, right? Backcompat will be easier with an abstract class for now, I think.
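
To illustrate the backcompat trade-off being asked about (hypothetical names, 
not the real BackupRepository API): an abstract class can grow a new method with 
a body without breaking existing subclasses, which is the same evolution a 
Java 8 interface gets from default methods.

{code}
// Hypothetical example only; the names do not match the real Solr API.
abstract class Repository {
  abstract void write(String path, byte[] data) throws Exception;

  // A method added later can ship with a default implementation, so existing
  // subclasses written against the old class still compile and run.
  boolean supportsChecksums() {
    return false;
  }
}

// On a Java 8 interface the same evolution is possible via a default method:
interface RepositoryV2 {
  void write(String path, byte[] data) throws Exception;

  default boolean supportsChecksums() {
    return false;
  }
}
{code}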

> Backup/Restore should provide a param for specifying the directory 
> implementation it should use
> ---
>
> Key: SOLR-7374
> URL: https://issues.apache.org/jira/browse/SOLR-7374
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Assignee: Mark Miller
> Fix For: 5.2, 6.0
>
> Attachments: SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, 
> SOLR-7374.patch
>
>
> Currently when we create a backup we use SimpleFSDirectory to write the 
> backup indexes. Similarly, during a restore we open the index using 
> FSDirectory.open. 
> We should provide a param called {{directoryImpl}} or {{type}} which will be 
> used to specify the Directory implementation for backing up the index. 
> Likewise during a restore you would need to specify the directory impl which 
> was used during backup so that the index can be opened correctly.
> This param will address the problem that currently if a user is running Solr 
> on HDFS there is no way to use the backup/restore functionality as the 
> directory is hardcoded.
> With this one could be running Solr on a local FS but backup the index on 
> HDFS etc.






[jira] [Commented] (SOLR-7113) Multiple calls to UpdateLog#init is not thread safe with respect to the HDFS FileSystem client object usage.

2016-06-14 Thread Matthew Byng-Maddick (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15329314#comment-15329314
 ] 

Matthew Byng-Maddick commented on SOLR-7113:


I'm very confused about this. We're seeing that tlogs get held open (and in 
particular hold open datanode transceivers) with Solr on HDFS:

Using the github version of the commit (because I know how to link to it): 
https://github.com/apache/lucene-solr/commit/f2c9067e59b81b3dea7903315431babcd2506167#diff-c796f1f2f2f362c18bd89a85688fbebfR295
 we see the following lines:
{code}
tlog = ntlog

if (tlog != ntlog) {
{code}

When is that if condition ever not true? What was this if condition supposed to 
do? This does appear to be one part of a reasonable explanation as to why the 
old rotated tlogs are being held open by the Solr HDFS client.
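
Reconstructing the questioned pattern in isolation (hypothetical names, not the 
actual UpdateLog code): because the field is assigned from the local variable on 
the preceding line, the guard can only ever be true if another thread reassigns 
the field in between, which is exactly the kind of race being suspected here.

{code}
// Hypothetical sketch of the pattern quoted above, not the real UpdateLog code.
class TlogHolder {
  volatile Object tlog;

  void rotate() {
    Object ntlog = new Object();   // stands in for a newly opened transaction log
    tlog = ntlog;                  // field assigned from the local first ...
    if (tlog != ntlog) {           // ... so this is only true if another thread
      // close/decref the log that just lost the race
    }                              //     reassigned 'tlog' between the two lines
  }
}
{code}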

> Multiple calls to UpdateLog#init is not thread safe with respect to the HDFS 
> FileSystem client object usage.
> 
>
> Key: SOLR-7113
> URL: https://issues.apache.org/jira/browse/SOLR-7113
> Project: Solr
>  Issue Type: Bug
>Reporter: Vamsee Yarlagadda
>Assignee: Mark Miller
> Fix For: 5.1, 6.0
>
> Attachments: SOLR-7113.patch
>
>
> I noticed this issue while trying to do some heavy indexing into Solr (700K 
> docs per minute).
> Solr log errors
> {code}
> 15:42:47
> ERROR
> HdfsTransactionLog
> Exception closing tlog.
> java.io.IOException: Filesystem closed
>   at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:765)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.flushOrSync(DFSOutputStream.java:1898)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.hflush(DFSOutputStream.java:1859)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream.hflush(FSDataOutputStream.java:130)
>   at 
> org.apache.solr.update.HdfsTransactionLog.close(HdfsTransactionLog.java:303)
>   at org.apache.solr.update.TransactionLog.decref(TransactionLog.java:504)
>   at org.apache.solr.update.UpdateLog.addOldLog(UpdateLog.java:335)
>   at org.apache.solr.update.UpdateLog.postCommit(UpdateLog.java:628)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:600)
>   at org.apache.solr.update.CommitTracker.run(CommitTracker.java:216)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> 15:42:47
> ERROR
> CommitTracker
> auto commit error...:org.apache.solr.common.SolrException: 
> java.io.IOException: Filesystem closed
> auto commit error...:org.apache.solr.common.SolrException: 
> java.io.IOException: Filesystem closed
> {code}






[jira] [Comment Edited] (SOLR-9179) Error Initializing Schema

2016-06-14 Thread Moritz Becker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15329304#comment-15329304
 ] 

Moritz Becker edited comment on SOLR-9179 at 6/14/16 10:57 AM:
---

I am facing the same issue on the latest J9 after switching from Solr 5.3.1 to 
Solr 6.0.1.

IBM J9 VM (build 2.8, JRE 1.8.0 AIX ppc64-64 Compressed References 
20160427_301573 (JIT enabled, AOT enabled)
J9VM - R28_Java8_SR3_20160427_1620_B301573
JIT  - tr.r14.java.green_20160329_114288
GC   - R28_Java8_SR3_20160427_1620_B301573_CMPRSS
J9CL - 20160427_301573)
JCL - 20160421_01 based on Oracle jdk8u91-b14

I am using the out-of-the-box solr installation.



was (Author: mobe):
I am facing the same issue on the latest J9 after switching from Solr 5.3.1 to 
Solr 6.0.1.

IBM J9 VM (build 2.8, JRE 1.8.0 AIX ppc64-64 Compressed References 
20160427_301573 (JIT enabled, AOT enabled)
J9VM - R28_Java8_SR3_20160427_1620_B301573
JIT  - tr.r14.java.green_20160329_114288
GC   - R28_Java8_SR3_20160427_1620_B301573_CMPRSS
J9CL - 20160427_301573)
JCL - 20160421_01 based on Oracle jdk8u91-b14


> Error Initializing Schema 
> --
>
> Key: SOLR-9179
> URL: https://issues.apache.org/jira/browse/SOLR-9179
> Project: Solr
>  Issue Type: Bug
>  Components: Schema and Analysis
>Affects Versions: 6.0.1
> Environment: SOLR 6.0.1 running under Tomcat 8.0.35 on IBM System I
>Reporter: Andrew Bennison
> Attachments: solr.log
>
>
> After upgrading from 6.0.0 to 6.0.1 I am getting schema initialization errors.
> If I switch from ClassicSchema to Managed, the core will load the first time; 
> however, subsequent loads will fail.
> Error received is :
> org.apache.solr.common.SolrException: java.lang.ExceptionInInitializerError
>   at org.apache.solr.core.SolrCore.(SolrCore.java:771)
>   at org.apache.solr.core.SolrCore.(SolrCore.java:642)
>   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:817)
>   at org.apache.solr.core.CoreContainer.access$000(CoreContainer.java:88)
>   at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:468)
>   at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:459)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:277)
>   at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
>   at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$1.948BC950.run(Unknown
>  Source)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1153)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>   at java.lang.Thread.run(Thread.java:785)
> Caused by: java.lang.BootstrapMethodError: 
> java.lang.ExceptionInInitializerError
>   at 
> org.apache.solr.schema.IndexSchema$SchemaProps$Handler.(IndexSchema.java:1392)
>   at org.apache.solr.handler.SchemaHandler.(SchemaHandler.java:62)
>   at java.lang.Class.forNameImpl(Native Method)
>   at java.lang.Class.forName(Class.java:343)
>   at 
> org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLoader.java:530)
>   at 
> org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLoader.java:467)
>   at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:565)
>   at org.apache.solr.core.PluginBag.createPlugin(PluginBag.java:121)
>   at org.apache.solr.core.PluginBag.init(PluginBag.java:221)
>   at 
> org.apache.solr.core.RequestHandlers.initHandlersFromConfig(RequestHandlers.java:130)
>   at org.apache.solr.core.SolrCore.(SolrCore.java:727)
>   ... 11 more
> Caused by: java.lang.ExceptionInInitializerError
>   at java.lang.J9VMInternals.ensureError(J9VMInternals.java:137)
>   at 
> java.lang.J9VMInternals.recordInitializationFailure(J9VMInternals.java:126)
>   at java.lang.Class.forNameImpl(Native Method)
>   at java.lang.Class.forName(Class.java:343)
>   at 
> java.lang.invoke.MethodType.nonPrimitiveClassFromString(MethodType.java:311)
>   at java.lang.invoke.MethodType.parseIntoClasses(MethodType.java:373)
>   at 
> java.lang.invoke.MethodType.fromMethodDescriptorString(MethodType.java:286)
>   at 
> java.lang.invoke.MethodHandle.sendResolveMethodHandle(MethodHandle.java:961)
>   at java.lang.invoke.MethodHandle.getCPMethodHandleAt(Native Method)
>   at 
> java.lang.invoke.MethodHandle.resolveInvokeDynamic(MethodHandle.java:852)
>   ... 22 more
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.solr.schema.IndexSchema$SchemaProps$Handler.values(IndexSchema.java:1391)
>   at 
> org.apache.solr.schema.IndexSchema$SchemaProps.(IndexSchema.java:1503)
>   ... 30 more





[jira] [Commented] (SOLR-9179) Error Initializing Schema

2016-06-14 Thread Moritz Becker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15329304#comment-15329304
 ] 

Moritz Becker commented on SOLR-9179:
-

I am facing the same issue on the latest J9 after switching from Solr 5.3.1 to 
Solr 6.0.1.

IBM J9 VM (build 2.8, JRE 1.8.0 AIX ppc64-64 Compressed References 
20160427_301573 (JIT enabled, AOT enabled)
J9VM - R28_Java8_SR3_20160427_1620_B301573
JIT  - tr.r14.java.green_20160329_114288
GC   - R28_Java8_SR3_20160427_1620_B301573_CMPRSS
J9CL - 20160427_301573)
JCL - 20160421_01 based on Oracle jdk8u91-b14


> Error Initializing Schema 
> --
>
> Key: SOLR-9179
> URL: https://issues.apache.org/jira/browse/SOLR-9179
> Project: Solr
>  Issue Type: Bug
>  Components: Schema and Analysis
>Affects Versions: 6.0.1
> Environment: SOLR 6.0.1 running under Tomcat 8.0.35 on IBM System I
>Reporter: Andrew Bennison
> Attachments: solr.log
>
>
> After upgrading from 6.0.0 to 6.0.1 I am getting schema initialization errors.
> If I switch from ClassicSchema to Managed, the core will load the first time; 
> however, subsequent loads will fail.
> Error received is :
> org.apache.solr.common.SolrException: java.lang.ExceptionInInitializerError
>   at org.apache.solr.core.SolrCore.(SolrCore.java:771)
>   at org.apache.solr.core.SolrCore.(SolrCore.java:642)
>   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:817)
>   at org.apache.solr.core.CoreContainer.access$000(CoreContainer.java:88)
>   at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:468)
>   at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:459)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:277)
>   at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
>   at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$1.948BC950.run(Unknown
>  Source)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1153)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>   at java.lang.Thread.run(Thread.java:785)
> Caused by: java.lang.BootstrapMethodError: 
> java.lang.ExceptionInInitializerError
>   at 
> org.apache.solr.schema.IndexSchema$SchemaProps$Handler.(IndexSchema.java:1392)
>   at org.apache.solr.handler.SchemaHandler.(SchemaHandler.java:62)
>   at java.lang.Class.forNameImpl(Native Method)
>   at java.lang.Class.forName(Class.java:343)
>   at 
> org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLoader.java:530)
>   at 
> org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLoader.java:467)
>   at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:565)
>   at org.apache.solr.core.PluginBag.createPlugin(PluginBag.java:121)
>   at org.apache.solr.core.PluginBag.init(PluginBag.java:221)
>   at 
> org.apache.solr.core.RequestHandlers.initHandlersFromConfig(RequestHandlers.java:130)
>   at org.apache.solr.core.SolrCore.(SolrCore.java:727)
>   ... 11 more
> Caused by: java.lang.ExceptionInInitializerError
>   at java.lang.J9VMInternals.ensureError(J9VMInternals.java:137)
>   at 
> java.lang.J9VMInternals.recordInitializationFailure(J9VMInternals.java:126)
>   at java.lang.Class.forNameImpl(Native Method)
>   at java.lang.Class.forName(Class.java:343)
>   at 
> java.lang.invoke.MethodType.nonPrimitiveClassFromString(MethodType.java:311)
>   at java.lang.invoke.MethodType.parseIntoClasses(MethodType.java:373)
>   at 
> java.lang.invoke.MethodType.fromMethodDescriptorString(MethodType.java:286)
>   at 
> java.lang.invoke.MethodHandle.sendResolveMethodHandle(MethodHandle.java:961)
>   at java.lang.invoke.MethodHandle.getCPMethodHandleAt(Native Method)
>   at 
> java.lang.invoke.MethodHandle.resolveInvokeDynamic(MethodHandle.java:852)
>   ... 22 more
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.solr.schema.IndexSchema$SchemaProps$Handler.values(IndexSchema.java:1391)
>   at 
> org.apache.solr.schema.IndexSchema$SchemaProps.(IndexSchema.java:1503)
>   ... 30 more






[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_92) - Build # 16985 - Still Failing!

2016-06-14 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16985/
Java: 64bit/jdk1.8.0_92 -XX:+UseCompressedOops -XX:+UseG1GC

4 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.RecoveryZkTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([BAB19D169247672C]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.RecoveryZkTest

Error Message:
Captured an uncaught exception in thread: Thread[id=6721, 
name=OverseerCollectionConfigSetProcessor-96068584347074565-127.0.0.1:45013_-n_00,
 state=RUNNABLE, group=Overseer collection creation process.]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=6721, 
name=OverseerCollectionConfigSetProcessor-96068584347074565-127.0.0.1:45013_-n_00,
 state=RUNNABLE, group=Overseer collection creation process.]
Caused by: java.lang.OutOfMemoryError: Java heap space


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.RecoveryZkTest

Error Message:
Captured an uncaught exception in thread: Thread[id=6804, name=Connection 
evictor, state=RUNNABLE, group=TGRP-RecoveryZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=6804, name=Connection evictor, state=RUNNABLE, 
group=TGRP-RecoveryZkTest]
Caused by: java.lang.OutOfMemoryError: Java heap space


FAILED:  org.apache.solr.cloud.RecoveryZkTest.test

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([BAB19D169247672C]:0)




Build Log:
[...truncated 12475 lines...]
   [junit4] Suite: org.apache.solr.cloud.RecoveryZkTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.RecoveryZkTest_BAB19D169247672C-001/init-core-data-001
   [junit4]   2> 792914 INFO  
(SUITE-RecoveryZkTest-seed#[BAB19D169247672C]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (true) via: 
@org.apache.solr.util.RandomizeSSL(reason=, value=NaN, ssl=NaN, clientAuth=NaN)
   [junit4]   2> 792915 INFO  
(SUITE-RecoveryZkTest-seed#[BAB19D169247672C]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 792916 INFO  
(TEST-RecoveryZkTest.test-seed#[BAB19D169247672C]) [] o.a.s.c.ZkTestServer 
STARTING ZK TEST SERVER
   [junit4]   2> 792916 INFO  (Thread-2050) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 792916 INFO  (Thread-2050) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 793016 INFO  
(TEST-RecoveryZkTest.test-seed#[BAB19D169247672C]) [] o.a.s.c.ZkTestServer 
start zk server on port:45075
   [junit4]   2> 793017 INFO  
(TEST-RecoveryZkTest.test-seed#[BAB19D169247672C]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 793017 INFO  
(TEST-RecoveryZkTest.test-seed#[BAB19D169247672C]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 793018 INFO  (zkCallback-1311-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@6df9142c 
name:ZooKeeperConnection Watcher:127.0.0.1:45075 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 793018 INFO  
(TEST-RecoveryZkTest.test-seed#[BAB19D169247672C]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 793018 INFO  
(TEST-RecoveryZkTest.test-seed#[BAB19D169247672C]) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2> 793018 INFO  
(TEST-RecoveryZkTest.test-seed#[BAB19D169247672C]) [] 
o.a.s.c.c.SolrZkClient makePath: /solr
   [junit4]   2> 793019 INFO  
(TEST-RecoveryZkTest.test-seed#[BAB19D169247672C]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 793019 INFO  
(TEST-RecoveryZkTest.test-seed#[BAB19D169247672C]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 793020 INFO  (zkCallback-1312-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@37f186df 
name:ZooKeeperConnection Watcher:127.0.0.1:45075/solr got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 793020 INFO  
(TEST-RecoveryZkTest.test-seed#[BAB19D169247672C]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 793021 INFO  
(TEST-RecoveryZkTest.test-seed#[BAB19D169247672C]) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2> 793021 INFO  
(TEST-RecoveryZkTest.test-seed#[BAB19D169247672C]) [] 

[jira] [Commented] (LUCENE-6968) LSH Filter

2016-06-14 Thread Tommaso Teofili (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15329206#comment-15329206
 ] 

Tommaso Teofili commented on LUCENE-6968:
-

I've committed this, thanks to [~andyhind] and [~caomanhdat] for your patches!

> LSH Filter
> --
>
> Key: LUCENE-6968
> URL: https://issues.apache.org/jira/browse/LUCENE-6968
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Cao Manh Dat
>Assignee: Tommaso Teofili
> Fix For: master (7.0)
>
> Attachments: LUCENE-6968.4.patch, LUCENE-6968.5.patch, 
> LUCENE-6968.6.patch, LUCENE-6968.patch, LUCENE-6968.patch, LUCENE-6968.patch
>
>
> I'm planning to implement LSH, which supports queries like this:
> {quote}
> Find similar documents that have a similarity score of 0.8 or higher with a 
> given document. The similarity measure can be cosine, Jaccard, Euclidean, etc.
> {quote}
> For example, given the following corpus:
> {quote}
> 1. Solr is an open source search engine based on Lucene
> 2. Solr is an open source enterprise search engine based on Lucene
> 3. Solr is an popular open source enterprise search engine based on Lucene
> 4. Apache Lucene is a high-performance, full-featured text search engine 
> library written entirely in Java
> {quote}
> We want to find documents that have a Jaccard score of 0.6 with this 
> doc
> {quote}
> Solr is an open source search engine
> {quote}
> It will return only docs 1,2 and 3 (MoreLikeThis will also return doc 4)
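
As a concrete illustration of the measure referred to in the description above, 
here is plain Jaccard similarity over token sets; the committed MinHash filter 
approximates this rather than computing it exactly. This is an illustrative 
sketch, not code from the patch.

{code}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class JaccardDemo {
  static double jaccard(String a, String b) {
    Set<String> sa = new HashSet<>(Arrays.asList(a.toLowerCase().split("\\s+")));
    Set<String> sb = new HashSet<>(Arrays.asList(b.toLowerCase().split("\\s+")));
    Set<String> intersection = new HashSet<>(sa);
    intersection.retainAll(sb);
    Set<String> union = new HashSet<>(sa);
    union.addAll(sb);
    return union.isEmpty() ? 0.0 : (double) intersection.size() / union.size();
  }

  public static void main(String[] args) {
    String query = "Solr is an open source search engine";
    String doc1 = "Solr is an open source search engine based on Lucene";
    String doc4 = "Apache Lucene is a high-performance, full-featured text search "
        + "engine library written entirely in Java";
    System.out.println(jaccard(query, doc1));  // high: shares most tokens
    System.out.println(jaccard(query, doc4));  // low: few shared tokens
  }
}
{code}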






[jira] [Resolved] (LUCENE-6968) LSH Filter

2016-06-14 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili resolved LUCENE-6968.
-
Resolution: Fixed

> LSH Filter
> --
>
> Key: LUCENE-6968
> URL: https://issues.apache.org/jira/browse/LUCENE-6968
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Cao Manh Dat
>Assignee: Tommaso Teofili
> Fix For: master (7.0)
>
> Attachments: LUCENE-6968.4.patch, LUCENE-6968.5.patch, 
> LUCENE-6968.6.patch, LUCENE-6968.patch, LUCENE-6968.patch, LUCENE-6968.patch
>
>
> I'm planning to implement LSH, which supports queries like this:
> {quote}
> Find similar documents that have a similarity score of 0.8 or higher with a 
> given document. The similarity measure can be cosine, Jaccard, Euclidean, etc.
> {quote}
> For example, given the following corpus:
> {quote}
> 1. Solr is an open source search engine based on Lucene
> 2. Solr is an open source enterprise search engine based on Lucene
> 3. Solr is an popular open source enterprise search engine based on Lucene
> 4. Apache Lucene is a high-performance, full-featured text search engine 
> library written entirely in Java
> {quote}
> We want to find documents that have a Jaccard score of 0.6 with this 
> doc
> {quote}
> Solr is an open source search engine
> {quote}
> It will return only docs 1,2 and 3 (MoreLikeThis will also return doc 4)






[jira] [Commented] (LUCENE-6968) LSH Filter

2016-06-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15329205#comment-15329205
 ] 

ASF subversion and git services commented on LUCENE-6968:
-

Commit 82a9244193ba948142b834ec08e2de0d98cfba9f in lucene-solr's branch 
refs/heads/master from [~teofili]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=82a9244 ]

LUCENE-6968 - MinHash filter, thanks to Andy Hind and Cao Manh Dat for patches


> LSH Filter
> --
>
> Key: LUCENE-6968
> URL: https://issues.apache.org/jira/browse/LUCENE-6968
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Cao Manh Dat
>Assignee: Tommaso Teofili
> Fix For: master (7.0)
>
> Attachments: LUCENE-6968.4.patch, LUCENE-6968.5.patch, 
> LUCENE-6968.6.patch, LUCENE-6968.patch, LUCENE-6968.patch, LUCENE-6968.patch
>
>
> I'm planning to implement LSH, which supports queries like this:
> {quote}
> Find similar documents that have a similarity score of 0.8 or higher with a 
> given document. The similarity measure can be cosine, Jaccard, Euclidean, etc.
> {quote}
> For example, given the following corpus:
> {quote}
> 1. Solr is an open source search engine based on Lucene
> 2. Solr is an open source enterprise search engine based on Lucene
> 3. Solr is an popular open source enterprise search engine based on Lucene
> 4. Apache Lucene is a high-performance, full-featured text search engine 
> library written entirely in Java
> {quote}
> We want to find documents that have a Jaccard score of 0.6 with this 
> doc
> {quote}
> Solr is an open source search engine
> {quote}
> It will return only docs 1,2 and 3 (MoreLikeThis will also return doc 4)






[jira] [Updated] (LUCENE-6968) LSH Filter

2016-06-14 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili updated LUCENE-6968:

Fix Version/s: master (7.0)

> LSH Filter
> --
>
> Key: LUCENE-6968
> URL: https://issues.apache.org/jira/browse/LUCENE-6968
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Cao Manh Dat
>Assignee: Tommaso Teofili
> Fix For: master (7.0)
>
> Attachments: LUCENE-6968.4.patch, LUCENE-6968.5.patch, 
> LUCENE-6968.6.patch, LUCENE-6968.patch, LUCENE-6968.patch, LUCENE-6968.patch
>
>
> I'm planning to implement LSH, which supports queries like this:
> {quote}
> Find similar documents that have a similarity score of 0.8 or higher with a 
> given document. The similarity measure can be cosine, Jaccard, Euclidean, etc.
> {quote}
> For example, given the following corpus:
> {quote}
> 1. Solr is an open source search engine based on Lucene
> 2. Solr is an open source enterprise search engine based on Lucene
> 3. Solr is an popular open source enterprise search engine based on Lucene
> 4. Apache Lucene is a high-performance, full-featured text search engine 
> library written entirely in Java
> {quote}
> We want to find documents that have a Jaccard score of 0.6 with this 
> doc
> {quote}
> Solr is an open source search engine
> {quote}
> It will return only docs 1,2 and 3 (MoreLikeThis will also return doc 4)






[jira] [Updated] (LUCENE-6968) LSH Filter

2016-06-14 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili updated LUCENE-6968:

Component/s: modules/analysis

> LSH Filter
> --
>
> Key: LUCENE-6968
> URL: https://issues.apache.org/jira/browse/LUCENE-6968
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Cao Manh Dat
>Assignee: Tommaso Teofili
> Attachments: LUCENE-6968.4.patch, LUCENE-6968.5.patch, 
> LUCENE-6968.6.patch, LUCENE-6968.patch, LUCENE-6968.patch, LUCENE-6968.patch
>
>
> I'm planning to implement LSH, which supports queries like this:
> {quote}
> Find similar documents that have a similarity score of 0.8 or higher with a 
> given document. The similarity measure can be cosine, Jaccard, Euclidean, etc.
> {quote}
> For example, given the following corpus:
> {quote}
> 1. Solr is an open source search engine based on Lucene
> 2. Solr is an open source enterprise search engine based on Lucene
> 3. Solr is an popular open source enterprise search engine based on Lucene
> 4. Apache Lucene is a high-performance, full-featured text search engine 
> library written entirely in Java
> {quote}
> We want to find documents that have a Jaccard score of 0.6 with this 
> doc
> {quote}
> Solr is an open source search engine
> {quote}
> It will return only docs 1,2 and 3 (MoreLikeThis will also return doc 4)






[jira] [Commented] (SOLR-9161) SolrPluginUtils.invokeSetters should accommodate setter variants

2016-06-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15329192#comment-15329192
 ] 

ASF subversion and git services commented on SOLR-9161:
---

Commit 9be5b98eb3ca85b7597f96dc9a42551fe3051d4d in lucene-solr's branch 
refs/heads/branch_6x from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9be5b98 ]

SOLR-9161: change SolrPluginUtils.invokeSetters implementation to accommodate 
setter variants


> SolrPluginUtils.invokeSetters should accommodate setter variants
> 
>
> Key: SOLR-9161
> URL: https://issues.apache.org/jira/browse/SOLR-9161
> Project: Solr
>  Issue Type: Bug
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-9161.patch, SOLR-9161.patch
>
>
> The code currently assumes that there is only one setter (or if there are 
> several setters then the first one found is used and it could mismatch on the 
> arg type).
> Context and motivation is that a class with a
> {code}
> void setAFloat(float val) {
>   this.val = val;
> }
> {code}
> setter may wish to also provide a
> {code}
> void setAFloat(String val) {
>   this.val = Float.parseFloat(val);
> }
> {code}
> convenience setter.
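
A minimal sketch of what accommodating setter variants can look like 
(hypothetical names, not the actual SolrPluginUtils implementation): when 
several one-argument overloads exist, pick the one whose parameter type matches 
the value being injected instead of the first method found.

{code}
import java.lang.reflect.Method;

public class SetterPicker {
  // Hypothetical target with two setter variants, as in the description above.
  public static class Target {
    float val;
    public void setAFloat(float v)  { this.val = v; }
    public void setAFloat(String v) { this.val = Float.parseFloat(v); }
  }

  /** Invoke the setter whose parameter type is compatible with the given value. */
  static void invokeSetter(Object bean, String name, Object value) throws Exception {
    for (Method m : bean.getClass().getMethods()) {
      if (m.getName().equals(name) && m.getParameterTypes().length == 1) {
        Class<?> param = m.getParameterTypes()[0];
        if (param.isInstance(value)
            || (param == float.class && value instanceof Float)) {
          m.invoke(bean, value);
          return;
        }
      }
    }
    throw new IllegalArgumentException("No matching setter: " + name);
  }

  public static void main(String[] args) throws Exception {
    Target t = new Target();
    invokeSetter(t, "setAFloat", "1.5");   // matches the String variant
    invokeSetter(t, "setAFloat", 2.5f);    // matches the float variant (unboxed)
    System.out.println(t.val);
  }
}
{code}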






[JENKINS] Lucene-Solr-Tests-6.x - Build # 271 - Failure

2016-06-14 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/271/

All tests passed

Build Log:
[...truncated 63246 lines...]
-ecj-javadoc-lint-src:
[mkdir] Created dir: /tmp/ecj1561999643
 [ecj-lint] Compiling 932 source files to /tmp/ecj1561999643
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/x1/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/x1/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 2. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 3. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 4. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 213)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 5. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 213)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 6. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 213)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 7. ERROR in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/java/org/apache/solr/core/HdfsDirectoryFactory.java
 (at line 34)
 [ecj-lint] import org.apache.hadoop.fs.FsStatus;
 [ecj-lint]^
 [ecj-lint] The import org.apache.hadoop.fs.FsStatus is never used
 [ecj-lint] --
 [ecj-lint] 8. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/java/org/apache/solr/core/HdfsDirectoryFactory.java
 (at line 227)
 [ecj-lint] dir = new BlockDirectory(path, hdfsDir, cache, null, 
blockCacheReadEnabled, false, cacheMerges, cacheReadOnce);
 [ecj-lint] 
^^
 [ecj-lint] Resource leak: 'dir' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 9. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/java/org/apache/solr/handler/AnalysisRequestHandlerBase.java
 (at line 120)
 [ecj-lint] reader = cfiltfac.create(reader);
 [ecj-lint] 
 [ecj-lint] Resource leak: 'reader' is not closed at this location
 [ecj-lint] --
 [ecj-lint] 10. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/java/org/apache/solr/handler/AnalysisRequestHandlerBase.java
 (at line 144)
 [ecj-lint] return namedList;
 [ecj-lint] ^
 [ecj-lint] Resource leak: 'listBasedTokenStream' is not closed at this location
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 11. ERROR in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/java/org/apache/solr/handler/ReplicationHandler.java
 (at line 79)
 [ecj-lint] import org.apache.solr.core.DirectoryFactory;
 [ecj-lint]^
 [ecj-lint] The import org.apache.solr.core.DirectoryFactory is never used
 [ecj-lint] --
 [ecj-lint] 12. 

Re: [JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_92) - Build # 5906 - Still Failing!

2016-06-14 Thread Michael McCandless
I pushed a fix ... this was due to a concurrency bug with LUCENE-7302 where
the last indexed sequence number (as reported by IW) could increment before
an NRT reader refresh would see the change, and this made
ControlledRealTimeReopenThread angry.

Mike McCandless

http://blog.mikemccandless.com
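
For readers who have not used it: ControlledRealTimeReopenThread is the consumer
of those sequence numbers, blocking a caller until an NRT reader covering a
given operation is available. A rough sketch of that usage, assuming the
post-LUCENE-7302 API in which IndexWriter operations return a sequence number
and the reopen thread and SearcherManager are already wired up:

{code}
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.search.ControlledRealTimeReopenThread;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.SearcherManager;

/** Sketch: index a document and wait until an NRT searcher can see it. */
final class NrtVisibilitySketch {
  static void indexAndWait(IndexWriter writer,
                           ControlledRealTimeReopenThread<IndexSearcher> reopenThread,
                           SearcherManager manager,
                           Document doc) throws Exception {
    long seqNo = writer.addDocument(doc);   // post-LUCENE-7302: returns a sequence number
    reopenThread.waitForGeneration(seqNo);  // block until a refreshed reader covers it
    IndexSearcher searcher = manager.acquire();
    try {
      // queries against 'searcher' now see 'doc'
    } finally {
      manager.release(searcher);
    }
  }
}
{code}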

On Sun, Jun 12, 2016 at 10:15 AM, Michael McCandless <
luc...@mikemccandless.com> wrote:

> I'll dig.
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
> On Sun, Jun 12, 2016 at 8:25 AM, Policeman Jenkins Server <
> jenk...@thetaphi.de> wrote:
>
>> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/5906/
>> Java: 32bit/jdk1.8.0_92 -server -XX:+UseG1GC
>>
>> 2 tests failed.
>> FAILED:
>> org.apache.lucene.search.TestControlledRealTimeReopenThread.testControlledRealTimeReopenThread
>>
>> Error Message:
>>
>>
>> Stack Trace:
>> java.lang.AssertionError
>> at
>> __randomizedtesting.SeedInfo.seed([E94344495F00D8B7:16AA558AAC03F0BB]:0)
>> at org.junit.Assert.fail(Assert.java:92)
>> at org.junit.Assert.assertTrue(Assert.java:43)
>> at org.junit.Assert.assertFalse(Assert.java:68)
>> at org.junit.Assert.assertFalse(Assert.java:79)
>> at
>> org.apache.lucene.index.ThreadedIndexingAndSearchingTestCase.runTest(ThreadedIndexingAndSearchingTestCase.java:629)
>> at
>> org.apache.lucene.search.TestControlledRealTimeReopenThread.testControlledRealTimeReopenThread(TestControlledRealTimeReopenThread.java:68)
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>> at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:498)
>> at
>> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
>> at
>> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
>> at
>> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
>> at
>> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
>> at
>> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
>> at
>> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
>> at
>> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
>> at
>> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
>> at
>> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
>> at
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>> at
>> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
>> at
>> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
>> at
>> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
>> at
>> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
>> at
>> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
>> at
>> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
>> at
>> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
>> at
>> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
>> at
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>> at
>> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
>> at
>> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
>> at
>> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
>> at
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>> at
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>> at
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>> at
>> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
>> at
>> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
>> at
>> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
>> at
>> 

[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 206 - Failure!

2016-06-14 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/206/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 53804 lines...]
BUILD FAILED
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/build.xml:740: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/build.xml:101: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/lucene/build.xml:138: The 
following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/lucene/build.xml:480: The 
following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/lucene/common-build.xml:2496: 
Can't get https://issues.apache.org/jira/rest/api/2/project/LUCENE to 
/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/lucene/build/docs/changes/jiraVersionList.json

Total time: 99 minutes 43 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




Re: Lucene/Solr 6.1.0

2016-06-14 Thread Jan Høydahl
>  - https://wiki.apache.org/solr/ReleaseNote61 
> 
The Solr lead-text in the announcement says:

> Solr is the popular, blazing fast, open source NoSQL search platform from the 
> Apache Lucene project. Its major features include powerful full-text search, 
> hit highlighting, faceted search, dynamic clustering, database integration, 
> rich document (e.g., Word, PDF) handling, and geospatial search. Solr is 
> highly scalable, providing fault tolerant distributed search and indexing, 
> and powers the search and navigation features of many of the world's largest 
> internet sites.

It may be worth considering flagging some of the newer features such as 
Parallel SQL, JDBC, CDCR or Security -- perhaps in place of some of the more 
obvious features like clustering or highlighting?

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

[jira] [Commented] (SOLR-9204) Improve performance of getting directory size with hdfs.

2016-06-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15329104#comment-15329104
 ] 

ASF subversion and git services commented on SOLR-9204:
---

Commit bd7ddb8fbfedd29711c8f5e466022ecb3810b70a in lucene-solr's branch 
refs/heads/master from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bd7ddb8 ]

SOLR-9204: Remove unused imports.


> Improve performance of getting directory size with hdfs.
> 
>
> Key: SOLR-9204
> URL: https://issues.apache.org/jira/browse/SOLR-9204
> Project: Solr
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: master (7.0), 6.2
>
> Attachments: SOLR-9204.patch
>
>







[jira] [Commented] (SOLR-9204) Improve performance of getting directory size with hdfs.

2016-06-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15329102#comment-15329102
 ] 

ASF subversion and git services commented on SOLR-9204:
---

Commit 9719105e7cdd082bbd013145b4f37a2f67ebfd11 in lucene-solr's branch 
refs/heads/branch_6x from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9719105 ]

SOLR-9204: Remove unused imports.


> Improve performance of getting directory size with hdfs.
> 
>
> Key: SOLR-9204
> URL: https://issues.apache.org/jira/browse/SOLR-9204
> Project: Solr
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: master (7.0), 6.2
>
> Attachments: SOLR-9204.patch
>
>







[jira] [Commented] (LUCENE-7302) IndexWriter should tell you the order of indexing operations

2016-06-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15329079#comment-15329079
 ] 

ASF subversion and git services commented on LUCENE-7302:
-

Commit 8ed16fd1f9a03c66d4ac81ddaa7ab70359410b95 in lucene-solr's branch 
refs/heads/branch_6x from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8ed16fd ]

LUCENE-7302: ensure IW.getMaxCompletedSequenceNumber only reflects a change 
after NRT reader refresh would also see it


> IndexWriter should tell you the order of indexing operations
> 
>
> Key: LUCENE-7302
> URL: https://issues.apache.org/jira/browse/LUCENE-7302
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-7032.patch, LUCENE-7132.patch
>
>
> Today, when you use multiple threads to concurrently index, Lucene
> knows the effective order that those operations were applied to the
> index, but doesn't return that information back to you.
> But this is important to know, if you want to build a reliable search
> API on top of Lucene.  Combined with the recently added NRT
> replication (LUCENE-5438) it can be a strong basis for an efficient
> distributed search API.
> I think we should return this information, since we already have it,
> and since it could simplify servers (ES/Solr) on top of Lucene:
>   - They would not require locking to prevent the same id from being
> indexed concurrently, since they could instead check the returned
> sequence number to know which update "won", for features like
> "realtime get".  (Locking is probably still needed for features
> like optimistic concurrency).
>   - When re-applying operations from a prior commit point, e.g. on
> recovering after a crash from a transaction log, they can know
> exactly which operations made it into the commit and which did
> not, and replay only the truly missing operations.
> Not returning this just hurts people who try to build servers on top
> with clear semantics on crashing/recovering ... I also struggled with
> this when building a simple "server wrapper" on top of Lucene
> (LUCENE-5376).
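
As a rough sketch of how the proposed sequence numbers could be used (it 
assumes, as this issue proposes, that IndexWriter's write operations return a 
long sequence number; getMaxCompletedSequenceNumber is the method named in the 
commit above, and the "id" field and values are made up):

// Sketch only: assumes IndexWriter.updateDocument returns a long sequence
// number, as proposed in this issue. The "id" field and values are made up.
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;

public class SequenceNumberSketch {
  public static void main(String[] args) throws Exception {
    Directory dir = new RAMDirectory();
    IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));

    Document doc = new Document();
    doc.add(new StringField("id", "42", Field.Store.YES));

    // Two updates to the same id (in real life these would race on different
    // threads); each returns the sequence number it was assigned.
    long seqNo1 = writer.updateDocument(new Term("id", "42"), doc);
    long seqNo2 = writer.updateDocument(new Term("id", "42"), doc);

    // The higher sequence number is the update that "won", so a realtime-get
    // cache can keep that version without any external locking.
    long winner = Math.max(seqNo1, seqNo2);
    System.out.println("winning seqNo=" + winner
        + ", maxCompleted=" + writer.getMaxCompletedSequenceNumber());

    writer.close();
    dir.close();
  }
}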



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7330) Speed up conjunctions

2016-06-14 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15329073#comment-15329073
 ] 

Adrien Grand commented on LUCENE-7330:
--

Nightly benchmarks seem to confirm the speedup is real: 
http://people.apache.org/~mikemccand/lucenebench/AndHighHigh.html

> Speed up conjunctions
> -
>
> Key: LUCENE-7330
> URL: https://issues.apache.org/jira/browse/LUCENE-7330
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7330.patch
>
>
> I am digging into some performance regressions between 4.x and 5.x which seem 
> to be due to how we always run conjunctions with ConjunctionDISI now, while 
> 4.x had FilteredQuery, which was optimized for the cases where there are only 
> two clauses or where one of the clauses supports random access. I'd like to 
> explore the former in this issue.
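
For context, the two-clause conjunctions in question are simply BooleanQuery 
instances whose clauses are all MUST. A minimal sketch of the kind of query the 
AndHighHigh benchmark task exercises (field and terms are made up):

// Illustrative only: a pure two-clause conjunction; both clauses are required,
// so in 5.x/6.x its scorer is driven by ConjunctionDISI as described above.
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class ConjunctionSketch {
  public static Query andHighHigh() {
    return new BooleanQuery.Builder()
        .add(new TermQuery(new Term("body", "http")), Occur.MUST)
        .add(new TermQuery(new Term("body", "com")), Occur.MUST)
        .build();
  }
}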



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7302) IndexWriter should tell you the order of indexing operations

2016-06-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15329071#comment-15329071
 ] 

ASF subversion and git services commented on LUCENE-7302:
-

Commit 5a0321680fe5e57a17470b824024d5b56a4cbaa4 in lucene-solr's branch 
refs/heads/master from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5a03216 ]

LUCENE-7302: ensure IW.getMaxCompletedSequenceNumber only reflects a change 
after NRT reader refresh would also see it


> IndexWriter should tell you the order of indexing operations
> 
>
> Key: LUCENE-7302
> URL: https://issues.apache.org/jira/browse/LUCENE-7302
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-7032.patch, LUCENE-7132.patch
>
>
> Today, when you use multiple threads to concurrently index, Lucene
> knows the effective order that those operations were applied to the
> index, but doesn't return that information back to you.
> But this is important to know, if you want to build a reliable search
> API on top of Lucene.  Combined with the recently added NRT
> replication (LUCENE-5438) it can be a strong basis for an efficient
> distributed search API.
> I think we should return this information, since we already have it,
> and since it could simplify servers (ES/Solr) on top of Lucene:
>   - They would not require locking to prevent the same id from being
> indexed concurrently, since they could instead check the returned
> sequence number to know which update "won", for features like
> "realtime get".  (Locking is probably still needed for features
> like optimistic concurrency).
>   - When re-applying operations from a prior commit point, e.g. on
> recovering after a crash from a transaction log, they can know
> exactly which operations made it into the commit and which did
> not, and replay only the truly missing operations.
> Not returning this just hurts people who try to build servers on top
> with clear semantics on crashing/recovering ... I also struggled with
> this when building a simple "server wrapper" on top of Lucene
> (LUCENE-5376).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7276) Add an optional reason to the MatchNoDocsQuery

2016-06-14 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15329065#comment-15329065
 ] 

Adrien Grand commented on LUCENE-7276:
--

bq. I don't think we have ever, nor should we ever, make a guarantee that 
MatchNoDocsQuery.toString would somehow round-trip through a query parser back 
to itself, and so I think we are free to improve it here/now.

+1

> Add an optional reason to the MatchNoDocsQuery
> --
>
> Key: LUCENE-7276
> URL: https://issues.apache.org/jira/browse/LUCENE-7276
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Reporter: Ferenczi Jim
>Priority: Minor
>  Labels: patch
> Attachments: LUCENE-7276.patch, LUCENE-7276.patch, LUCENE-7276.patch, 
> LUCENE-7276.patch, LUCENE-7276.patch
>
>
> It's sometimes difficult to debug a query that results in a MatchNoDocsQuery. 
> The MatchNoDocsQuery is always rewritten into an empty boolean query.
> This patch adds an optional reason and implements a weight in order to keep 
> track of the reason why the query did not match any document. The reason is 
> printed by toString and when an explanation for the non-match is requested.
> For instance the query:
> new MatchNoDocsQuery("field 'title' not found").toString()
> => 'MatchNoDocsQuery["field 'title' not found"]'
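
A minimal usage sketch, assuming the reason-taking constructor this patch 
proposes (the field name is illustrative):

// Sketch assuming the MatchNoDocsQuery(String reason) constructor proposed
// in this patch; the field name is illustrative.
import org.apache.lucene.search.MatchNoDocsQuery;
import org.apache.lucene.search.Query;

public class MatchNoDocsSketch {
  public static Query forMissingField(String field) {
    // The reason is carried into toString() and into explain() output,
    // which makes "why did nothing match" much easier to debug.
    return new MatchNoDocsQuery("field '" + field + "' not found");
  }
}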



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6590) Explore different ways to apply boosts

2016-06-14 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15329028#comment-15329028
 ] 

Adrien Grand commented on LUCENE-6590:
--

I tried downgrading the luceneMatchVersion to 4.6 on my local Solr 
installation, but this did not help reproduce the problem. I am still 
interested in getting to the bottom of this, especially if other users are 
hitting the same problem, so if you manage to narrow it down to some specific 
configuration changes, that would be helpful.

> Explore different ways to apply boosts
> --
>
> Key: LUCENE-6590
> URL: https://issues.apache.org/jira/browse/LUCENE-6590
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: 5.4
>
> Attachments: LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch, 
> LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch
>
>
> Follow-up from LUCENE-6570: the fact that all queries are mutable in order to 
> allow for applying a boost raises issues, since it makes queries bad cache 
> keys: their hashcode can change at any time. We could just document that 
> queries should never be modified after they have gone through IndexSearcher, 
> but it would be even better if the API made queries impossible to mutate at 
> all.
> I think there are two main options:
>  - either replace "void setBoost(boost)" with something like "Query 
> withBoost(boost)" which would return a clone that has a different boost
>  - or move boost handling outside of Query; for instance, we could have an 
> (immutable) query impl dedicated to applying boosts, which queries that need 
> to change boosts at rewrite time (such as BooleanQuery) would use as a 
> wrapper.
> The latter idea is from Robert, and I like it a lot given how often I have 
> either introduced or found a bug that was due to the boost parameter being 
> ignored. Maybe there are other options, but I think this is worth exploring.
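
As an illustration of the second option, a sketch of wrapping an unmodified 
query in an immutable boost wrapper in the style of Lucene's BoostQuery (the 
field and term are made up):

// Sketch of the wrapper approach: the inner query is never mutated, and the
// boost lives in a dedicated immutable wrapper. Field and term are made up.
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BoostQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class BoostWrapperSketch {
  public static Query boosted() {
    Query inner = new TermQuery(new Term("title", "lucene"));
    // Instead of inner.setBoost(2f), wrap it; inner keeps a stable hashcode
    // and so remains a usable cache key.
    return new BoostQuery(inner, 2f);
  }
}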



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-6.1 - Build # 5 - Still Failing

2016-06-14 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.1/5/

3 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=54476, name=collection5, 
state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=54476, name=collection5, state=RUNNABLE, 
group=TGRP-CollectionsAPIDistributedZkTest]
Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:55115/_/gv: collection already exists: 
awholynewstresscollection_collection5_4
at __randomizedtesting.SeedInfo.seed([6E65AD8266D76E2D]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:590)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:259)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:404)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:357)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1228)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:998)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:934)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1599)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1620)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:987)


FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestReplicateAfterCoreReload

Error Message:
expected:<[{indexVersion=1465885362501,generation=2,filelist=[_2.fdt, _2.fdx, 
_2.fnm, _2.nvd, _2.nvm, _2.si, _2_Lucene50_0.doc, _2_Lucene50_0.tim, 
_2_Lucene50_0.tip, _3.fdt, _3.fdx, _3.fnm, _3.nvd, _3.nvm, _3.si, 
_3_Lucene50_0.doc, _3_Lucene50_0.tim, _3_Lucene50_0.tip, _4.fdt, _4.fdx, 
_4.fnm, _4.nvd, _4.nvm, _4.si, _4_Lucene50_0.doc, _4_Lucene50_0.tim, 
_4_Lucene50_0.tip, _5.cfe, _5.cfs, _5.si, _6.fdt, _6.fdx, _6.fnm, _6.nvd, 
_6.nvm, _6.si, _6_Lucene50_0.doc, _6_Lucene50_0.tim, _6_Lucene50_0.tip, 
segments_2]}]> but 
was:<[{indexVersion=1465885362501,generation=2,filelist=[_2.fdt, _2.fdx, 
_2.fnm, _2.nvd, _2.nvm, _2.si, _2_Lucene50_0.doc, _2_Lucene50_0.tim, 
_2_Lucene50_0.tip, _3.fdt, _3.fdx, _3.fnm, _3.nvd, _3.nvm, _3.si, 
_3_Lucene50_0.doc, _3_Lucene50_0.tim, _3_Lucene50_0.tip, _4.fdt, _4.fdx, 
_4.fnm, _4.nvd, _4.nvm, _4.si, _4_Lucene50_0.doc, _4_Lucene50_0.tim, 
_4_Lucene50_0.tip, _5.cfe, _5.cfs, _5.si, _6.fdt, _6.fdx, _6.fnm, _6.nvd, 
_6.nvm, _6.si, _6_Lucene50_0.doc, _6_Lucene50_0.tim, _6_Lucene50_0.tip, 
segments_2]}, {indexVersion=1465885362501,generation=3,filelist=[_3.fdt, 
_3.fdx, _3.fnm, _3.nvd, _3.nvm, _3.si, _3_Lucene50_0.doc, _3_Lucene50_0.tim, 
_3_Lucene50_0.tip, _5.cfe, _5.cfs, _5.si, _7.cfe, _7.cfs, _7.si, segments_3]}]>

Stack Trace:
java.lang.AssertionError: 
expected:<[{indexVersion=1465885362501,generation=2,filelist=[_2.fdt, _2.fdx, 
_2.fnm, _2.nvd, _2.nvm, _2.si, _2_Lucene50_0.doc, _2_Lucene50_0.tim, 
_2_Lucene50_0.tip, _3.fdt, _3.fdx, _3.fnm, _3.nvd, _3.nvm, _3.si, 
_3_Lucene50_0.doc, _3_Lucene50_0.tim, _3_Lucene50_0.tip, _4.fdt, _4.fdx, 
_4.fnm, _4.nvd, _4.nvm, _4.si, _4_Lucene50_0.doc, _4_Lucene50_0.tim, 
_4_Lucene50_0.tip, _5.cfe, _5.cfs, _5.si, _6.fdt, _6.fdx, _6.fnm, _6.nvd, 
_6.nvm, _6.si, _6_Lucene50_0.doc, _6_Lucene50_0.tim, _6_Lucene50_0.tip, 
segments_2]}]> but 
was:<[{indexVersion=1465885362501,generation=2,filelist=[_2.fdt, _2.fdx, 
_2.fnm, _2.nvd, _2.nvm, _2.si, _2_Lucene50_0.doc, _2_Lucene50_0.tim, 
_2_Lucene50_0.tip, _3.fdt, _3.fdx, _3.fnm, _3.nvd, _3.nvm, _3.si, 
_3_Lucene50_0.doc, _3_Lucene50_0.tim, _3_Lucene50_0.tip, _4.fdt, _4.fdx, 
_4.fnm, _4.nvd, _4.nvm, _4.si, _4_Lucene50_0.doc, _4_Lucene50_0.tim, 
_4_Lucene50_0.tip, _5.cfe, _5.cfs, _5.si, _6.fdt, _6.fdx, _6.fnm, _6.nvd, 
_6.nvm, _6.si, _6_Lucene50_0.doc, _6_Lucene50_0.tim, _6_Lucene50_0.tip, 
segments_2]}, {indexVersion=1465885362501,generation=3,filelist=[_3.fdt, 
_3.fdx, _3.fnm, _3.nvd, _3.nvm, _3.si, _3_Lucene50_0.doc, _3_Lucene50_0.tim, 
_3_Lucene50_0.tip, _5.cfe, _5.cfs, _5.si, _7.cfe, _7.cfs, _7.si, segments_3]}]>
at 
__randomizedtesting.SeedInfo.seed([6E65AD8266D76E2D:4BB2B6B2169F602E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at 

Re: [JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3338 - Failure!

2016-06-14 Thread Adrien Grand
I pushed a fix.

On Tue, Jun 14, 2016 at 09:15, Policeman Jenkins Server 
wrote:

> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3338/
> Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC
>
> All tests passed
>
> Build Log:
> [...truncated 63060 lines...]

[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3338 - Failure!

2016-06-14 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3338/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 63060 lines...]
-ecj-javadoc-lint-src:
[mkdir] Created dir: 
/var/folders/qg/h2dfw5s161s51l2bn79mrb7rgn/T/ecj1484975280
 [ecj-lint] Compiling 932 source files to 
/var/folders/qg/h2dfw5s161s51l2bn79mrb7rgn/T/ecj1484975280
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/Users/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/Users/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 2. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 3. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 4. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 213)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 5. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 213)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 6. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 213)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 7. ERROR in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/core/HdfsDirectoryFactory.java
 (at line 34)
 [ecj-lint] import org.apache.hadoop.fs.FsStatus;
 [ecj-lint]^
 [ecj-lint] The import org.apache.hadoop.fs.FsStatus is never used
 [ecj-lint] --
 [ecj-lint] 8. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/core/HdfsDirectoryFactory.java
 (at line 227)
 [ecj-lint] dir = new BlockDirectory(path, hdfsDir, cache, null, 
blockCacheReadEnabled, false, cacheMerges, cacheReadOnce);
 [ecj-lint] 
^^
 [ecj-lint] Resource leak: 'dir' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 9. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/handler/AnalysisRequestHandlerBase.java
 (at line 120)
 [ecj-lint] reader = cfiltfac.create(reader);
 [ecj-lint] 
 [ecj-lint] Resource leak: 'reader' is not closed at this location
 [ecj-lint] --
 [ecj-lint] 10. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/handler/AnalysisRequestHandlerBase.java
 (at line 144)
 [ecj-lint] return namedList;
 [ecj-lint] ^
 [ecj-lint] Resource leak: 'listBasedTokenStream' is not closed at this location
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 11. ERROR in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/handler/ReplicationHandler.java
 (at line 79)
 [ecj-lint] import org.apache.solr.core.DirectoryFactory;
 [ecj-lint]^
 [ecj-lint] The import 

Fwd: How to use Query Time Join with Lucene 5.3.0?

2016-06-14 Thread Pravin Thokal
Hello,


I am referring to this link for usage of query-time join, and I am able to
use the following method:

   createJoinQuery(String fromField, boolean multipleValuesPerDocument,
       String toField, Query fromQuery, IndexSearcher fromSearcher,
       ScoreMode scoreMode)

Parameters:
fromField - The from field to join from
multipleValuesPerDocument - Whether the from field has multiple terms per
document
toField - The to field to join to
fromQuery - The query to match documents on the from side
fromSearcher - The searcher that executed the specified fromQuery
scoreMode - Instructs how scores from the fromQuery are mapped to the
returned query
However, I would like to use the following createJoinQuery() with different
parameters:

   public static Query createJoinQuery(String joinField, Query fromQuery,
       Query toQuery, IndexSearcher searcher, ScoreMode scoreMode,
       MultiDocValues.OrdinalMap ordinalMap) throws IOException

joinField - The SortedDocValues field containing the join values
fromQuery - The query containing the actual user query. Also the fromQuery
can only match "from" documents.
toQuery - The query identifying all documents on the "to" side.
searcher - The index searcher used to execute the from query
scoreMode - Instructs how scores from the fromQuery are mapped to the
returned query
ordinalMap - The ordinal map constructed over the joinField. In case of a
single segment index, no ordinal map needs to be provided.
For this method, I am referring to this link.
I don't have any clue about the ordinalMap parameter or how to create it. It
would be a great help if anyone could explain it with an example.

Best Regards,

Pravin Thokal

Senior Product Engineer,

SysTools Software Pvt. Ltd.

202, Pentagon P3, Magarpatta CyberCity, Pune - 411028 , Maharashtra, India.

+91-02-60505558 | www.systoolsgroup.com | www.mailxaminer.com
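
For reference, a minimal sketch of building the OrdinalMap needed by the
global-ordinal variant of createJoinQuery quoted above. It follows the pattern
used in Lucene's join-module tests; the join field name ("join_id") and the
from/to queries are made up, and it assumes the join field is indexed as a
SortedDocValuesField on both the "from" and "to" documents:

// Sketch only: builds a MultiDocValues.OrdinalMap over a SortedDocValues
// join field across all segments, then passes it to the global-ordinal
// variant of JoinUtil.createJoinQuery. Field name and queries are made up.
import java.io.IOException;
import org.apache.lucene.index.DocValues;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.MultiDocValues;
import org.apache.lucene.index.SortedDocValues;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.join.JoinUtil;
import org.apache.lucene.search.join.ScoreMode;
import org.apache.lucene.util.packed.PackedInts;

public class GlobalOrdinalJoinSketch {
  public static Query join(IndexSearcher searcher) throws IOException {
    IndexReader reader = searcher.getIndexReader();

    // One SortedDocValues instance per segment for the join field.
    SortedDocValues[] values = new SortedDocValues[reader.leaves().size()];
    for (LeafReaderContext leaf : reader.leaves()) {
      values[leaf.ord] = DocValues.getSorted(leaf.reader(), "join_id");
    }

    // The ordinal map stitches the per-segment ordinals into global ordinals.
    // For a single-segment index it can be null, as the javadocs note.
    MultiDocValues.OrdinalMap ordinalMap =
        MultiDocValues.OrdinalMap.build(null, values, PackedInts.DEFAULT);

    Query fromQuery = new TermQuery(new Term("type", "from"));  // user query side
    Query toQuery = new MatchAllDocsQuery();                    // all "to" documents

    return JoinUtil.createJoinQuery(
        "join_id", fromQuery, toQuery, searcher, ScoreMode.Max, ordinalMap);
  }
}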