[jira] [Commented] (LUCENE-6563) MockFileSystemTestCase.testURI should be improved to handle cases where OS/JVM cannot create non-ASCII filenames

2015-07-06 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14616271#comment-14616271
 ] 

Dawid Weiss commented on LUCENE-6563:
-

bq. As a matter of fact, a filename is just a sequence of bytes, it's not even 
a string. 

In the end most things are just a sequence of bytes, Ramkumar :) And seriously, 
standard C didn't have any Unicode-related utilities for a looong time (because 
there was no Unicode); strings were/are zero-byte-terminated byte regions. The 
interpretation of which characters they constitute is a higher-level concept.

bq. LANG=C touch 中国

The question is how the terminal knows how to decode your input above into 
an argument (itself being a byte*)... and how it knew what you typed in 
(and which glyphs to pick in order to display it)... I'm guessing the terminal 
accepts Unicode on input; then, if it sees the C locale, it blindly passes the 
input bytes through without any conversion at all. The Unicode is very likely UTF-8, 
which was specifically designed to be an identity-conversion code page (so that C 
"strings" just work with it), and it just happens to be the default filesystem 
encoding... That's why it works: it simply performs no conversion at all... 

It's no magic, really. But trying to understand how and where 
character-to-byte(-to-glyph) conversions occur will drive you nuts because 
there is no consistency here.
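
As a small illustration of that last point (a standalone sketch, not code from this 
issue): UTF-8 round-trips arbitrary characters unchanged, while an ASCII 
("C"-locale-style) encoder has to substitute a replacement character for anything it 
cannot represent.

{code:java}
import java.nio.charset.StandardCharsets;

public class CharsetIdentityDemo {
  public static void main(String[] args) {
    String name = "中国";

    // UTF-8 round-trips: decoding the encoded bytes yields the original string.
    byte[] utf8 = name.getBytes(StandardCharsets.UTF_8);
    System.out.println(new String(utf8, StandardCharsets.UTF_8));    // prints 中国

    // US-ASCII cannot represent these characters, so each one is replaced
    // with '?', much like a C-locale directory listing shows question marks.
    byte[] ascii = name.getBytes(StandardCharsets.US_ASCII);
    System.out.println(new String(ascii, StandardCharsets.US_ASCII)); // prints ??
  }
}
{code}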

> MockFileSystemTestCase.testURI should be improved to handle cases where 
> OS/JVM cannot create non-ASCII filenames
> 
>
> Key: LUCENE-6563
> URL: https://issues.apache.org/jira/browse/LUCENE-6563
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Christine Poerschke
>Assignee: Dawid Weiss
>Priority: Minor
>
> {{ant test -Dtestcase=TestVerboseFS -Dtests.method=testURI 
> -Dtests.file.encoding=UTF-8}} fails (for example) with 'Oracle Corporation 
> 1.8.0_45 (64-bit)' when the default {{sun.jnu.encoding}} system property is 
> (for example) {{ANSI_X3.4-1968}}
> [details to follow]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 732 - Still Failing

2015-07-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/732/

2 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=12709, name=collection3, 
state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=12709, name=collection3, state=RUNNABLE, 
group=TGRP-CollectionsAPIDistributedZkTest]
Caused by: org.apache.solr.common.SolrException: Error reading cluster 
properties
at __randomizedtesting.SeedInfo.seed([DF132F92C61724CA]:0)
at 
org.apache.solr.common.cloud.ZkStateReader.getClusterProps(ZkStateReader.java:780)
at 
org.apache.solr.common.cloud.ZkStateReader.getBaseUrlForNodeName(ZkStateReader.java:866)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:985)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:856)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:799)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:894)
Caused by: org.apache.zookeeper.KeeperException$SessionExpiredException: 
KeeperErrorCode = Session expired for /clusterprops.json
at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)
at 
org.apache.solr.common.cloud.SolrZkClient$5.execute(SolrZkClient.java:319)
at 
org.apache.solr.common.cloud.SolrZkClient$5.execute(SolrZkClient.java:316)
at 
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:61)
at 
org.apache.solr.common.cloud.SolrZkClient.exists(SolrZkClient.java:316)
at 
org.apache.solr.common.cloud.ZkStateReader.getClusterProps(ZkStateReader.java:773)
... 6 more


FAILED:  org.apache.solr.cloud.hdfs.HdfsCollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=74616, name=collection4, 
state=RUNNABLE, group=TGRP-HdfsCollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=74616, name=collection4, state=RUNNABLE, 
group=TGRP-HdfsCollectionsAPIDistributedZkTest]
Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:47562: Could not find collection : 
awholynewstresscollection_collection4_0
at __randomizedtesting.SeedInfo.seed([DF132F92C61724CA]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:560)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:376)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:328)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1086)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:856)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:799)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:894)




Build Log:
[...truncated 10609 lines...]
   [junit4] Suite: org.apache.solr.cloud.CollectionsAPIDistributedZkTest
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/build/solr-core/test/J2/temp/solr.cloud.CollectionsAPIDistributedZkTest_DF132F92C61724CA-001/init-core-data-001
   [junit4]   2> 1223913 INFO  
(SUITE-CollectionsAPIDistributedZkTest-seed#[DF132F92C61724CA]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (true)
   [junit4]   2> 1223913 INFO  
(SUITE-CollectionsAPIDistributedZkTest-seed#[DF132F92C61724CA]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /ff/g
   [junit4]   2> 1223917 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[DF132F92C61724CA]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 1223918 INFO  (Thread-8171) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 1223918 INFO  (Thread-8171) [] o.a.s.c.ZkTestServer 
Starting server

[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_60-ea-b21) - Build # 13356 - Failure!

2015-07-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/13356/
Java: 32bit/jdk1.8.0_60-ea-b21 -server -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.BasicDistributedZkTest.test

Error Message:
commitWithin did not work on node: http://127.0.0.1:39708/collection1 
expected:<68> but was:<67>

Stack Trace:
java.lang.AssertionError: commitWithin did not work on node: 
http://127.0.0.1:39708/collection1 expected:<68> but was:<67>
at 
__randomizedtesting.SeedInfo.seed([15383A2C337A83D5:9D6C05F69D86EE2D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.BasicDistributedZkTest.test(BasicDistributedZkTest.java:332)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtestin

[jira] [Commented] (LUCENE-6658) IndexUpgrader doesn't upgrade an index if it has zero segments

2015-07-06 Thread Trejkaz (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14616117#comment-14616117
 ] 

Trejkaz commented on LUCENE-6658:
-

What does work: making a forceCommit() method which does nothing other than 
increment changeCount and call commit().

> IndexUpgrader doesn't upgrade an index if it has zero segments
> --
>
> Key: LUCENE-6658
> URL: https://issues.apache.org/jira/browse/LUCENE-6658
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.10.4, 5.2.1
>Reporter: Trejkaz
>Assignee: Uwe Schindler
> Fix For: 4.10.5, 5.3, Trunk
>
> Attachments: LUCENE-6658.patch, LUCENE-6658.patch, LUCENE-6658.patch, 
> LUCENE-6658.patch, empty.4.10.4.zip
>
>
> IndexUpgrader uses merges to do its job. Therefore, if you use it to upgrade 
> an index with no segments, it will do nothing - it won't even update the 
> version numbers in the segments file, meaning that later versions of Lucene 
> will fail to open the index, despite the fact that you "upgraded" it.
> The suggested workaround when this was raised on the mailing list in January 
> seems to be to use filesystem magic to look at the files, figure out whether 
> there are any segments, and write a new empty index if there are none.
> This sounds easy, but there are probably traps. For instance, there might be 
> files in the directory which don't really belong to the index. Earlier 
> versions of Lucene used to have a FilenameFilter which was usable to 
> distinguish one from the other, but that seems to have disappeared, making it 
> less obvious how to do this.
> This issue is presumed to exist in 3.x as well, I just haven't encountered it 
> yet because the only empty indices I have hit have been later versions.
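
For reference, a minimal sketch of the workaround described above (check whether the 
latest commit references zero segments and, if so, re-create an empty index in the 
current format), written against the Lucene 5.x API; it is illustrative only and is 
not the patch attached to this issue:

{code:java}
import java.nio.file.Paths;

import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.SegmentInfos;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class EmptyIndexWorkaround {
  public static void main(String[] args) throws Exception {
    try (Directory dir = FSDirectory.open(Paths.get(args[0]))) {
      // Read the latest commit point; an "empty" index has a segments_N file
      // that references zero segments, which IndexUpgrader's merges never touch.
      SegmentInfos infos = SegmentInfos.readLatestCommit(dir);
      if (infos.size() == 0) {
        // Re-create the index so the segments file is written in the current
        // format; no analyzer is needed because no documents are added.
        IndexWriterConfig iwc = new IndexWriterConfig(null)
            .setOpenMode(IndexWriterConfig.OpenMode.CREATE);
        new IndexWriter(dir, iwc).close();
      }
    }
  }
}
{code}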



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Solr Spell checker for non-english language

2015-07-06 Thread Safat Siddiqui
Hello,

I am using Solr version 4.10.3 and trying to customize it for the Bangla
language. I have already built a Bangla language stemmer for Solr indexing;
it works fine.

Now I would like to use the Solr spell checker and suggestion functionality for
Bangla. Which section in "DirectSolrSpellChecker" should I modify?
I cannot find which part causes the difference between English
and non-English languages. Any direction would be very helpful. Thanks
in advance.

Regards,
Safat

-- 
Thanks,
Safat Siddiqui
Student
Department of CSE
Shahjalal University of Science and Technology
Sylhet, Bangladesh.


[jira] [Commented] (LUCENE-6658) IndexUpgrader doesn't upgrade an index if it has zero segments

2015-07-06 Thread Trejkaz (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14616111#comment-14616111
 ] 

Trejkaz commented on LUCENE-6658:
-

This works when applied to my v4 and v5 here as well. (At least for my 
inadequate collection of test indices... since I only started generating them today.)

I tried to backport it to my copy of v3.6, but IndexWriter.setCommitUserData 
doesn't exist there, and commit(Map) doesn't force a commit, even with a non-empty 
map. Or if it does, it doesn't update the index format.


> IndexUpgrader doesn't upgrade an index if it has zero segments
> --
>
> Key: LUCENE-6658
> URL: https://issues.apache.org/jira/browse/LUCENE-6658
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.10.4, 5.2.1
>Reporter: Trejkaz
>Assignee: Uwe Schindler
> Fix For: 4.10.5, 5.3, Trunk
>
> Attachments: LUCENE-6658.patch, LUCENE-6658.patch, LUCENE-6658.patch, 
> LUCENE-6658.patch, empty.4.10.4.zip
>
>
> IndexUpgrader uses merges to do its job. Therefore, if you use it to upgrade 
> an index with no segments, it will do nothing - it won't even update the 
> version numbers in the segments file, meaning that later versions of Lucene 
> will fail to open the index, despite the fact that you "upgraded" it.
> The suggested workaround when this was raised on the mailing list in January 
> seems to be to use filesystem magic to look at the files, figure out whether 
> there are any segments, and write a new empty index if there are none.
> This sounds easy, but there are probably traps. For instance, there might be 
> files in the directory which don't really belong to the index. Earlier 
> versions of Lucene used to have a FilenameFilter which was usable to 
> distinguish one from the other, but that seems to have disappeared, making it 
> less obvious how to do this.
> This issue is presumed to exist in 3.x as well, I just haven't encountered it 
> yet because the only empty indices I have hit have been later versions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3393) Implement an optimized LFUCache

2015-07-06 Thread Bill Bell (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14616110#comment-14616110
 ] 

Bill Bell commented on SOLR-3393:
-

Let's get this done. What is remaining? O(1) sounds great.

> Implement an optimized LFUCache
> ---
>
> Key: SOLR-3393
> URL: https://issues.apache.org/jira/browse/SOLR-3393
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Affects Versions: 3.6, 4.0-ALPHA
>Reporter: Shawn Heisey
>Assignee: Shawn Heisey
>Priority: Minor
> Fix For: 4.9, Trunk
>
> Attachments: SOLR-3393-4x-withdecay.patch, 
> SOLR-3393-trunk-withdecay.patch, SOLR-3393.patch, SOLR-3393.patch, 
> SOLR-3393.patch, SOLR-3393.patch, SOLR-3393.patch, SOLR-3393.patch, 
> SOLR-3393.patch
>
>
> SOLR-2906 gave us an inefficient LFU cache modeled on 
> FastLRUCache/ConcurrentLRUCache.  It could use some serious improvement.  The 
> following project includes an Apache 2.0 licensed O(1) implementation.  The 
> second link is the paper (PDF warning) it was based on:
> https://github.com/chirino/hawtdb
> http://dhruvbird.com/lfu.pdf
> Using this project and paper, I will attempt to make a new O(1) cache called 
> FastLFUCache that is modeled on LRUCache.java.  This will (for now) leave the 
> existing LFUCache/ConcurrentLFUCache implementation in place.
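
For context, a self-contained sketch of the general O(1) LFU technique (per-frequency 
buckets plus a running minimum frequency, so both get and put avoid any scan over the 
cache); this is a simplification of what the paper describes, illustrative only and 
unrelated to the attached patches or the hawtdb code:

{code:java}
import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.Map;

public class SimpleO1LfuCache<K, V> {
  private final int capacity;
  private final Map<K, V> values = new HashMap<>();
  private final Map<K, Integer> freqs = new HashMap<>();
  // One insertion-ordered bucket of keys per access frequency.
  private final Map<Integer, LinkedHashSet<K>> buckets = new HashMap<>();
  private int minFreq = 0;

  public SimpleO1LfuCache(int capacity) { this.capacity = capacity; }

  public V get(K key) {
    if (!values.containsKey(key)) return null;
    touch(key);
    return values.get(key);
  }

  public void put(K key, V value) {
    if (capacity <= 0) return;
    if (values.containsKey(key)) {
      values.put(key, value);
      touch(key);
      return;
    }
    if (values.size() >= capacity) {
      // Evict the least frequently used key (oldest within the lowest bucket).
      K evict = buckets.get(minFreq).iterator().next();
      buckets.get(minFreq).remove(evict);
      values.remove(evict);
      freqs.remove(evict);
    }
    values.put(key, value);
    freqs.put(key, 1);
    buckets.computeIfAbsent(1, f -> new LinkedHashSet<>()).add(key);
    minFreq = 1;
  }

  // Move a key from its current frequency bucket to the next one, in O(1).
  private void touch(K key) {
    int f = freqs.get(key);
    buckets.get(f).remove(key);
    if (buckets.get(f).isEmpty() && f == minFreq) minFreq = f + 1;
    freqs.put(key, f + 1);
    buckets.computeIfAbsent(f + 1, x -> new LinkedHashSet<>()).add(key);
  }
}
{code}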



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7441) Improve overall robustness of the Streaming stack: Streaming API, Streaming Expressions, Parallel SQL

2015-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14616068#comment-14616068
 ] 

ASF subversion and git services commented on SOLR-7441:
---

Commit 1689559 from [~joel.bernstein] in branch 'dev/trunk'
[ https://svn.apache.org/r1689559 ]

SOLR-7441: Disable failing test

> Improve overall robustness of the Streaming stack: Streaming API, Streaming 
> Expressions, Parallel SQL
> -
>
> Key: SOLR-7441
> URL: https://issues.apache.org/jira/browse/SOLR-7441
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 5.1
>Reporter: Erick Erickson
>Assignee: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-7441.patch, SOLR-7441.patch, SOLR-7441.patch, 
> SOLR-7441.patch
>
>
> It's harder than it could be to figure out what the error is when using 
> Streaming Aggregation. For instance, if you specify an fl parameter for a 
> field that doesn't exist, it's hard to figure out that that's the cause. This is 
> true even if you look in the Solr logs.
> I'm not quite sure whether it'd be possible to report this at the client 
> level or not, but it seems at least we could report something more helpful in 
> the Solr logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7143) MoreLikeThis Query Parser does not handle multiple field names

2015-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14616045#comment-14616045
 ] 

ASF subversion and git services commented on SOLR-7143:
---

Commit 1689556 from [~anshumg] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1689556 ]

SOLR-7143: MoreLikeThis Query parser now handles multiple field names (merge 
from trunk)

> MoreLikeThis Query Parser does not handle multiple field names
> --
>
> Key: SOLR-7143
> URL: https://issues.apache.org/jira/browse/SOLR-7143
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers
>Affects Versions: 5.0
>Reporter: Jens Wille
>Assignee: Anshum Gupta
> Attachments: SOLR-7143.patch, SOLR-7143.patch, SOLR-7143.patch, 
> SOLR-7143.patch, SOLR-7143.patch
>
>
> The newly introduced MoreLikeThis Query Parser (SOLR-6248) does not return 
> any results when supplied with multiple fields in the {{qf}} parameter.
> To reproduce within the techproducts example, compare:
> {code}
> curl 
> 'http://localhost:8983/solr/techproducts/select?q=%7B!mlt+qf=name%7DMA147LL/A'
> curl 
> 'http://localhost:8983/solr/techproducts/select?q=%7B!mlt+qf=features%7DMA147LL/A'
> curl 
> 'http://localhost:8983/solr/techproducts/select?q=%7B!mlt+qf=name,features%7DMA147LL/A'
> {code}
> The first two queries return 8 and 5 results, respectively. The third query 
> doesn't return any results (not even the matched document).
> In contrast, the MoreLikeThis Handler works as expected (accounting for the 
> default {{mintf}} and {{mindf}} values in SimpleMLTQParser):
> {code}
> curl 
> 'http://localhost:8983/solr/techproducts/mlt?q=id:MA147LL/A&mlt.fl=name&mlt.mintf=1&mlt.mindf=1'
> curl 
> 'http://localhost:8983/solr/techproducts/mlt?q=id:MA147LL/A&mlt.fl=features&mlt.mintf=1&mlt.mindf=1'
> curl 
> 'http://localhost:8983/solr/techproducts/mlt?q=id:MA147LL/A&mlt.fl=name,features&mlt.mintf=1&mlt.mindf=1'
> {code}
> After adding the following line to 
> {{example/techproducts/solr/techproducts/conf/solrconfig.xml}}:
> {code:language=XML}
> 
> {code}
> The first two queries return 7 and 4 results, respectively (excluding the 
> matched document). The third query returns 7 results, as one would expect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_45) - Build # 13354 - Failure!

2015-07-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/13354/
Java: 64bit/jdk1.8.0_45 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.io.stream.StreamingTest

Error Message:
5 threads leaked from SUITE scope at 
org.apache.solr.client.solrj.io.stream.StreamingTest: 1) Thread[id=290, 
name=TEST-StreamingTest.streamTests-seed#[A11B091F5DECB4F4]-SendThread(127.0.0.1:59601),
 state=TIMED_WAITING, group=TGRP-StreamingTest] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:101)
 at 
org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:940)
 at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1003)
2) Thread[id=292, name=zkCallback-90-thread-1, state=TIMED_WAITING, 
group=TGRP-StreamingTest] at sun.misc.Unsafe.park(Native Method)
 at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)  
   at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)3) Thread[id=321, 
name=zkCallback-90-thread-2, state=TIMED_WAITING, group=TGRP-StreamingTest] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)4) Thread[id=291, 
name=TEST-StreamingTest.streamTests-seed#[A11B091F5DECB4F4]-EventThread, 
state=WAITING, group=TGRP-StreamingTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
5) Thread[id=322, name=zkCallback-90-thread-3, state=TIMED_WAITING, 
group=TGRP-StreamingTest] at sun.misc.Unsafe.park(Native Method)
 at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)  
   at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.client.solrj.io.stream.StreamingTest: 
   1) Thread[id=290, 
name=TEST-StreamingTest.streamTests-seed#[A11B091F5DECB4F4]-SendThread(127.0.0.1:59601),
 state=TIMED_WAITING, group=TGRP-StreamingTest]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:101)
at 
org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:940)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1003)
   2) Thread[id=292, name=zkCallback-90-thread-1, state=TIMED_WAITING, 
group=TGRP-StreamingTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
at java.

[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_45) - Build # 4999 - Failure!

2015-07-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4999/
Java: 64bit/jdk1.8.0_45 -XX:+UseCompressedOops -XX:+UseG1GC

3 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Didn't see all replicas for shard shard1 in c8n_1x2 come up within 3 ms! 
ClusterState: {   "collection1":{ "replicationFactor":"1", "shards":{   
"shard1":{ "range":"8000-", "state":"active",   
  "replicas":{"core_node2":{ "core":"collection1", 
"base_url":"http://127.0.0.1:61810/jaw";, 
"node_name":"127.0.0.1:61810_jaw", "state":"active", 
"leader":"true"}}},   "shard2":{ "range":"0-7fff", 
"state":"active", "replicas":{   "core_node1":{ 
"core":"collection1", "base_url":"http://127.0.0.1:61790/jaw";,  
   "node_name":"127.0.0.1:61790_jaw", "state":"active", 
"leader":"true"},   "core_node3":{ 
"core":"collection1", "base_url":"http://127.0.0.1:61835/jaw";,  
   "node_name":"127.0.0.1:61835_jaw", "state":"active", 
"router":{"name":"compositeId"}, "maxShardsPerNode":"1", 
"autoAddReplicas":"false", "autoCreated":"true"},   "control_collection":{  
   "replicationFactor":"1", "shards":{"shard1":{ 
"range":"8000-7fff", "state":"active", 
"replicas":{"core_node1":{ "core":"collection1", 
"base_url":"http://127.0.0.1:61774/jaw";, 
"node_name":"127.0.0.1:61774_jaw", "state":"active", 
"leader":"true", "router":{"name":"compositeId"}, 
"maxShardsPerNode":"1", "autoAddReplicas":"false", 
"autoCreated":"true"},   "c8n_1x2":{ "replicationFactor":"2", 
"shards":{"shard1":{ "range":"8000-7fff", 
"state":"active", "replicas":{   "core_node1":{ 
"core":"c8n_1x2_shard1_replica1", 
"base_url":"http://127.0.0.1:61774/jaw";, 
"node_name":"127.0.0.1:61774_jaw", "state":"recovering"},   
"core_node2":{ "core":"c8n_1x2_shard1_replica2", 
"base_url":"http://127.0.0.1:61810/jaw";, 
"node_name":"127.0.0.1:61810_jaw", "state":"active", 
"leader":"true", "router":{"name":"compositeId"}, 
"maxShardsPerNode":"1", "autoAddReplicas":"false"}}

Stack Trace:
java.lang.AssertionError: Didn't see all replicas for shard shard1 in c8n_1x2 
come up within 3 ms! ClusterState: {
  "collection1":{
"replicationFactor":"1",
"shards":{
  "shard1":{
"range":"8000-",
"state":"active",
"replicas":{"core_node2":{
"core":"collection1",
"base_url":"http://127.0.0.1:61810/jaw";,
"node_name":"127.0.0.1:61810_jaw",
"state":"active",
"leader":"true"}}},
  "shard2":{
"range":"0-7fff",
"state":"active",
"replicas":{
  "core_node1":{
"core":"collection1",
"base_url":"http://127.0.0.1:61790/jaw";,
"node_name":"127.0.0.1:61790_jaw",
"state":"active",
"leader":"true"},
  "core_node3":{
"core":"collection1",
"base_url":"http://127.0.0.1:61835/jaw";,
"node_name":"127.0.0.1:61835_jaw",
"state":"active",
"router":{"name":"compositeId"},
"maxShardsPerNode":"1",
"autoAddReplicas":"false",
"autoCreated":"true"},
  "control_collection":{
"replicationFactor":"1",
"shards":{"shard1":{
"range":"8000-7fff",
"state":"active",
"replicas":{"core_node1":{
"core":"collection1",
"base_url":"http://127.0.0.1:61774/jaw";,
"node_name":"127.0.0.1:61774_jaw",
"state":"active",
"leader":"true",
"router":{"name":"compositeId"},
"maxShardsPerNode":"1",
"autoAddReplicas":"false",
"autoCreated":"true"},
  "c8n_1x2":{
"replicationFactor":"2",
"shards":{"shard1":{
"range":"8000-7fff",
"state":"active",
"replicas":{
  "core_node1":{
"core":"c8n_1x2_shard1_replica1",
"base_url":"http://127.0.0.1:61774/jaw";,
"node_name":"127.0.0.1:61774_jaw",
"state":"recovering"},
  "core_node2":{
"core":"c8n_1x2_shard1_replica2",
"base_url":"http://127.0.0.1:61810/jaw";,
"node_name":"127.0.0.1:61810_jaw",
"state":"active",
"leader":"true",
"router":{"name":"compositeId"},
"maxShardsPerNode":"1",
"autoAddReplicas":"false"}}
at 
__randomizedtesting.SeedInfo.seed([401F676F0DCBE713:C84B58B5A3378AEB]:0)
at org.junit.Assert.fail(Assert.java:93)
 

[jira] [Updated] (LUCENE-6663) Null value dereference

2015-07-06 Thread Rishabh Patel (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rishabh Patel updated LUCENE-6663:
--
Attachment: LUCENE-6663.patch

> Null value dereference
> --
>
> Key: LUCENE-6663
> URL: https://issues.apache.org/jira/browse/LUCENE-6663
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: Trunk
>Reporter: Rishabh Patel
> Fix For: Trunk
>
> Attachments: LUCENE-6663.patch
>
>
> Found several cases of potential null dereference. Creating a single patch as 
> suggested to fix them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7143) MoreLikeThis Query Parser does not handle multiple field names

2015-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615931#comment-14615931
 ] 

ASF subversion and git services commented on SOLR-7143:
---

Commit 1689531 from [~anshumg] in branch 'dev/trunk'
[ https://svn.apache.org/r1689531 ]

SOLR-7143: MoreLikeThis Query parser now handles multiple field names

> MoreLikeThis Query Parser does not handle multiple field names
> --
>
> Key: SOLR-7143
> URL: https://issues.apache.org/jira/browse/SOLR-7143
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers
>Affects Versions: 5.0
>Reporter: Jens Wille
>Assignee: Anshum Gupta
> Attachments: SOLR-7143.patch, SOLR-7143.patch, SOLR-7143.patch, 
> SOLR-7143.patch, SOLR-7143.patch
>
>
> The newly introduced MoreLikeThis Query Parser (SOLR-6248) does not return 
> any results when supplied with multiple fields in the {{qf}} parameter.
> To reproduce within the techproducts example, compare:
> {code}
> curl 
> 'http://localhost:8983/solr/techproducts/select?q=%7B!mlt+qf=name%7DMA147LL/A'
> curl 
> 'http://localhost:8983/solr/techproducts/select?q=%7B!mlt+qf=features%7DMA147LL/A'
> curl 
> 'http://localhost:8983/solr/techproducts/select?q=%7B!mlt+qf=name,features%7DMA147LL/A'
> {code}
> The first two queries return 8 and 5 results, respectively. The third query 
> doesn't return any results (not even the matched document).
> In contrast, the MoreLikeThis Handler works as expected (accounting for the 
> default {{mintf}} and {{mindf}} values in SimpleMLTQParser):
> {code}
> curl 
> 'http://localhost:8983/solr/techproducts/mlt?q=id:MA147LL/A&mlt.fl=name&mlt.mintf=1&mlt.mindf=1'
> curl 
> 'http://localhost:8983/solr/techproducts/mlt?q=id:MA147LL/A&mlt.fl=features&mlt.mintf=1&mlt.mindf=1'
> curl 
> 'http://localhost:8983/solr/techproducts/mlt?q=id:MA147LL/A&mlt.fl=name,features&mlt.mintf=1&mlt.mindf=1'
> {code}
> After adding the following line to 
> {{example/techproducts/solr/techproducts/conf/solrconfig.xml}}:
> {code:language=XML}
> 
> {code}
> The first two queries return 7 and 4 results, respectively (excluding the 
> matched document). The third query returns 7 results, as one would expect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6663) Null value dereference

2015-07-06 Thread Rishabh Patel (JIRA)
Rishabh Patel created LUCENE-6663:
-

 Summary: Null value dereference
 Key: LUCENE-6663
 URL: https://issues.apache.org/jira/browse/LUCENE-6663
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: Trunk
Reporter: Rishabh Patel
 Fix For: Trunk


Found several cases of potential null dereference. Creating a single patch as 
suggested to fix them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [Solr Wiki] Trivial Update of "SolrCaching" by KonstantinGribov

2015-07-06 Thread Konstantin Gribov
Yes, a version-agnostic URL is much better. I hadn't found it, since I had only
come across direct links to the Solr javadoc for earlier versions. I will update
it now.

Changing the redirect from 5_2_0 to 5_2_1 would be nice but isn't necessary
(if the 5.2.1 API is similar to 5.2.0).

Also, is there any page on lucene/solr site where direct links to javadoc
are present? I didn't find one.


Tue, 7 Jul 2015 at 1:49, Shawn Heisey :

> On 7/6/2015 3:58 PM, Apache Wiki wrote:
> > - The "keys" of the cache are field names, and the values are [[
> https://lucene.apache.org/solr/api/org/apache/solr/request/UnInvertedField.html|large
> data structures mapping docIds to values]].
> > + The "keys" of the cache are field names, and the values are [[
> https://lucene.apache.org/solr/5_2_1/solr-core/org/apache/solr/search/facet/UnInvertedField.html|large
> data structures mapping docIds to values]].
>
> I'm thinking that the version number probably should not be part of this
> URL.  It looks like the package of this class changed from search to
> search.facet.
>
> I think the URL probably should point here:
>
>
> https://lucene.apache.org/solr/api/solr-core/org/apache/solr/search/facet/UnInvertedField.html
>
> And that the redirect should be changed to go to the 5_2_1 version
> instead of 5_2_0.
>
> Would that be correct?
>
> Thanks,
> Shawn
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
> --
Best regards,
Konstantin Gribov


[jira] [Closed] (LUCENE-6610) Potential resource leak in WordDictionary.java

2015-07-06 Thread Rishabh Patel (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rishabh Patel closed LUCENE-6610.
-
Resolution: Duplicate

> Potential resource leak in WordDictionary.java
> --
>
> Key: LUCENE-6610
> URL: https://issues.apache.org/jira/browse/LUCENE-6610
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: Trunk
>Reporter: Rishabh Patel
>Priority: Minor
>  Labels: github-pullrequest
>
> In the file {{WordDictionary.java}}, the input and output object stream might 
> not get closed upon an exception. 
> Fix with try-with-resources construct.
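
For reference, the general shape of the try-with-resources fix being suggested here 
(a generic illustration; the file name and streams are placeholders, not the actual 
WordDictionary code):

{code:java}
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

public class TryWithResourcesExample {
  static void roundTrip(String file, Object value) throws IOException, ClassNotFoundException {
    // The stream is closed automatically even if writeObject throws.
    try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(file))) {
      out.writeObject(value);
    }
    // Likewise, the input stream is closed on success or failure.
    try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(file))) {
      System.out.println(in.readObject());
    }
  }
}
{code}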



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (LUCENE-6611) Potential resource leakage in DirectoryTaxonomyWriter.java

2015-07-06 Thread Rishabh Patel (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rishabh Patel closed LUCENE-6611.
-
Resolution: Duplicate

> Potential resource leakage in DirectoryTaxonomyWriter.java
> --
>
> Key: LUCENE-6611
> URL: https://issues.apache.org/jira/browse/LUCENE-6611
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/facet
>Affects Versions: Trunk
>Reporter: Rishabh Patel
>Priority: Minor
>  Labels: github-pullrequest
>
> The resource 'in' is closed in an unsafe manner, potentially leading to 
> resource leak. 
> It can be fixed by using the try-with-resources construct.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (LUCENE-6612) Resource leak

2015-07-06 Thread Rishabh Patel (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rishabh Patel closed LUCENE-6612.
-
Resolution: Duplicate

> Resource leak
> -
>
> Key: LUCENE-6612
> URL: https://issues.apache.org/jira/browse/LUCENE-6612
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: Trunk
>Reporter: Rishabh Patel
>Priority: Minor
>  Labels: github-pullrequest
>
> In the file {{JaspellTernarySearchTrie.java}}, the resource {{BufferedReader 
> in}} could be leaked upon exception.
> It can be fixed with a try-finally block.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Solr-Artifacts-5.x - Build # 876 - Failure

2015-07-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Solr-Artifacts-5.x/876/

No tests ran.

Build Log:
[...truncated 36085 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-5.x/solr/build.xml:438: 
Can't get http://people.apache.org/keys/group/lucene.asc to 
/x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-5.x/solr/package/KEYS

Total time: 9 minutes 10 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Sending artifact delta relative to Solr-Artifacts-5.x #875
Archived 13 artifacts
Archive block size is 32768
Received 2282 blocks and 272105271 bytes
Compression is 21.6%
Took 1 min 24 sec
Publishing Javadoc
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Closed] (LUCENE-6619) Resource leak

2015-07-06 Thread Rishabh Patel (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rishabh Patel closed LUCENE-6619.
-
Resolution: Duplicate

> Resource leak
> -
>
> Key: LUCENE-6619
> URL: https://issues.apache.org/jira/browse/LUCENE-6619
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: Trunk
>Reporter: Rishabh Patel
>Priority: Minor
>  Labels: github-pullrequest
>
> In {{Compile.java}}, the resource {{LineNumberReader in}} is not closed 
> correctly.
> It can be fixed by using try-with-resources construct



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6622) Resource leak in DiffIt.java

2015-07-06 Thread Rishabh Patel (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rishabh Patel resolved LUCENE-6622.
---
Resolution: Duplicate

> Resource leak in DiffIt.java
> 
>
> Key: LUCENE-6622
> URL: https://issues.apache.org/jira/browse/LUCENE-6622
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: Trunk
>Reporter: Rishabh Patel
>Priority: Minor
>  Labels: github-pullrequest
>
> In the file {{DiffIt.java}}, {{LineNumberReader in}} is not closed.
> Fix by adding try-finally construct.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6661) Allow queries to opt out of caching

2015-07-06 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615872#comment-14615872
 ] 

Hoss Man commented on LUCENE-6661:
--

bq. Given that the cache only caches queries that are reused, it will never be 
cached.

If I'm understanding your example, then I think you mean "rewritten" not 
"reused" in that sentence ... correct?  (Otherwise how does the cache know if I 
plan on "reusing" a specific Query instance vs constructing many instances which 
will all be ".equals()"?)

---

Assuming I understand your example correctly: this won't actually prevent a 
query from getting put in the cache, it will only prevent there from being 
cache hits -- correct?

If someone does a MyQuery search 1000 times (regardless of whether it's 100 diff 
instances or 1 instance reused 1000 times), won't that be 1000 "inserts" 
into the cache (potentially causing other things to be evicted from the cache) 
that will never be of any use?



> Allow queries to opt out of caching
> ---
>
> Key: LUCENE-6661
> URL: https://issues.apache.org/jira/browse/LUCENE-6661
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 5.2
>Reporter: Terry Smith
>Priority: Minor
> Attachments: LUCENE-6661.patch
>
>
> Some queries have out-of-band dependencies that make them incompatible with 
> caching, it'd be great if they could opt out of the new fancy query/filter 
> cache in IndexSearcher.
> This affects DrillSidewaysQuery and any user-provided custom Query 
> implementations.
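
For illustration, a minimal sketch of how queries could opt out, assuming the Lucene 
5.x QueryCachingPolicy interface (onUse plus shouldCache(Query, LeafReaderContext)); 
the marker interface is hypothetical, this is not the attached patch, and the policy 
would be installed on the IndexSearcher in place of the default one:

{code:java}
import java.io.IOException;

import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.QueryCachingPolicy;

// Hypothetical marker interface for queries that must never be cached
// (an assumption for this sketch, not a Lucene API).
interface NoCacheMarker {}

// Wraps a default policy and refuses to cache any query carrying the marker.
final class OptOutCachingPolicy implements QueryCachingPolicy {
  private final QueryCachingPolicy delegate;

  OptOutCachingPolicy(QueryCachingPolicy delegate) {
    this.delegate = delegate;
  }

  @Override
  public void onUse(Query query) {
    delegate.onUse(query);
  }

  @Override
  public boolean shouldCache(Query query, LeafReaderContext context) throws IOException {
    if (query instanceof NoCacheMarker) {
      return false; // opted out: never inserted into the query cache
    }
    return delegate.shouldCache(query, context);
  }
}
{code}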



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6662) Resource Leaks

2015-07-06 Thread Rishabh Patel (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rishabh Patel updated LUCENE-6662:
--
Attachment: LUCENE-6662.patch

> Resource Leaks
> --
>
> Key: LUCENE-6662
> URL: https://issues.apache.org/jira/browse/LUCENE-6662
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: Trunk
>Reporter: Rishabh Patel
>Priority: Critical
> Fix For: Trunk
>
> Attachments: LUCENE-6662.patch
>
>
> Several resource leaks were identified. I am merging all resource leak issues 
> and creating a single patch as suggested. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6662) Resource Leaks

2015-07-06 Thread Rishabh Patel (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rishabh Patel updated LUCENE-6662:
--
Flags: Patch
Lucene Fields: New,Patch Available  (was: New)

> Resource Leaks
> --
>
> Key: LUCENE-6662
> URL: https://issues.apache.org/jira/browse/LUCENE-6662
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: Trunk
>Reporter: Rishabh Patel
>Priority: Critical
> Fix For: Trunk
>
> Attachments: LUCENE-6662.patch
>
>
> Several resource leaks were identified. I am merging all resource leak issues 
> and creating a single patch as suggested. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7748) Fix bin/solr to work on IBM J9

2015-07-06 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615851#comment-14615851
 ] 

Shawn Heisey commented on SOLR-7748:


I'm going to assume that the way the vendor and version are detected is 
correct, since I don't have an IBM JVM to try.

I did notice in the bash script that the non-IBM option construction includes 
an echo command that incorporates the existing GC_LOG_OPTS variable, but for J9 it 
sets the variable without including what was there previously.  Should that be 
modified to match?  The removed line (replaced with the if/else construct) uses 
parentheses to combine the old with the new; perhaps that method should be used 
in the two new statements.  I'm not an expert either.


> Fix bin/solr to work on IBM J9
> --
>
> Key: SOLR-7748
> URL: https://issues.apache.org/jira/browse/SOLR-7748
> Project: Solr
>  Issue Type: Bug
>  Components: Server
>Reporter: Shai Erera
>Assignee: Shai Erera
> Fix For: 5.3, Trunk
>
> Attachments: SOLR-7748.patch, SOLR-7748.patch, solr-7748.patch
>
>
> bin/solr doesn't work on IBM J9 because it sets the -Xloggc flag, whereas J9 
> supports -Xverbosegclog. This prevents using bin/solr to start Solr on J9.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [Solr Wiki] Trivial Update of "SolrCaching" by KonstantinGribov

2015-07-06 Thread Shawn Heisey
On 7/6/2015 3:58 PM, Apache Wiki wrote:
> - The "keys" of the cache are field names, and the values are 
> [[https://lucene.apache.org/solr/api/org/apache/solr/request/UnInvertedField.html|large
>  data structures mapping docIds to values]].
> + The "keys" of the cache are field names, and the values are 
> [[https://lucene.apache.org/solr/5_2_1/solr-core/org/apache/solr/search/facet/UnInvertedField.html|large
>  data structures mapping docIds to values]].

I'm thinking that the version number probably should not be part of this
URL.  It looks like the package of this class changed from search to
search.facet.

I think the URL probably should point here:

https://lucene.apache.org/solr/api/solr-core/org/apache/solr/search/facet/UnInvertedField.html

And that the redirect should be changed to go to the 5_2_1 version
instead of 5_2_0.

Would that be correct?

Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6662) Resource Leaks

2015-07-06 Thread Rishabh Patel (JIRA)
Rishabh Patel created LUCENE-6662:
-

 Summary: Resource Leaks
 Key: LUCENE-6662
 URL: https://issues.apache.org/jira/browse/LUCENE-6662
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: Trunk
Reporter: Rishabh Patel
Priority: Critical
 Fix For: Trunk


Several resource leaks were identified. I am merging all resource leak issues 
and creating a single patch as suggested. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6563) MockFileSystemTestCase.testURI should be improved to handle cases where OS/JVM cannot create non-ASCII filenames

2015-07-06 Thread Ramkumar Aiyengar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615766#comment-14615766
 ] 

Ramkumar Aiyengar commented on LUCENE-6563:
---

bq. I honestly think this is a quirk/bug in the JVM... and perhaps should be 
reported. Setting LANG=C shouldn't be affecting how filenames are (mis)handled 
(and it currently does).

I dug into it a bit, and it appears that Java is kind of doing the right thing given 
its current API. Certain newer versions/filesystems of Windows and MacOSX guarantee 
that all filenames are in UTF-16/UTF-8 respectively. Linux/Solaris etc. (aka the more 
traditional Unix systems) tend not to care about the encoding at all. As a 
matter of fact, a filename is just a sequence of bytes, it's not even a string. 
How that byte array is displayed comes down to the locale. This is probably why 
this works:

{code}
$ LANG=C touch 中国
{code}

touch doesn't care about the input; the shell maps it into a sequence of UTF-8 
bytes, which is stored as the filename. ls, run in a UTF-8 locale, then shows the 
correct thing:

{code}
$ ls
build.xml  ivy.xml  lib  lucene-test-framework.iml  src  中国
{code}

And if I list the directory in the C locale, I get a bunch of unreadable characters:

{code}
$ LANG=C ls
build.xml  ivy.xml  lib  lucene-test-framework.iml  src  ??
{code}

Java, on the other hand, treats filenames as strings in all of its APIs, and as a 
result it needs an encoding even when a name is used only for I/O 
and not for display. So it is forced to choose some encoding, and it 
goes with the locale. On platforms where the filename encoding is guaranteed 
to be UTF-8, it goes with that -- see 
http://bugs.java.com/bugdatabase/view_bug.do?bug_id=8003228 for MacOSX.

Looks like this issue is not specific to Java -- see 
https://bugs.python.org/issue19846 for example.
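
To make that concrete (a standalone sketch, not part of the test or any attached 
patch): creating a non-ASCII filename through java.nio goes through exactly that 
locale-dependent conversion, so under an ASCII locale such as ANSI_X3.4-1968 it can 
fail outright rather than just display oddly.

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.InvalidPathException;
import java.nio.file.Path;
import java.nio.file.Paths;

public class NonAsciiFilenameDemo {
  public static void main(String[] args) {
    try {
      // The JVM encodes this String using sun.jnu.encoding when calling the OS.
      // With a UTF-8 locale the file is created as expected; with an ASCII
      // locale the conversion of 中国 typically fails with an "unmappable
      // character" style error.
      Path p = Paths.get("中国");
      Files.createFile(p);
      System.out.println("created: " + p.toUri());
      Files.delete(p);
    } catch (InvalidPathException | IOException e) {
      System.out.println("could not create non-ASCII filename: " + e);
    }
  }
}
{code}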


> MockFileSystemTestCase.testURI should be improved to handle cases where 
> OS/JVM cannot create non-ASCII filenames
> 
>
> Key: LUCENE-6563
> URL: https://issues.apache.org/jira/browse/LUCENE-6563
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Christine Poerschke
>Assignee: Dawid Weiss
>Priority: Minor
>
> {{ant test -Dtestcase=TestVerboseFS -Dtests.method=testURI 
> -Dtests.file.encoding=UTF-8}} fails (for example) with 'Oracle Corporation 
> 1.8.0_45 (64-bit)' when the default {{sun.jnu.encoding}} system property is 
> (for example) {{ANSI_X3.4-1968}}
> [details to follow]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 158 - Failure

2015-07-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/158/

3 tests failed.
REGRESSION:  org.apache.solr.client.solrj.io.stream.StreamingTest.streamTests

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([DFD27D87D0DE3FC0:23895A037AF7C6B5]:0)
at 
org.apache.solr.client.solrj.io.stream.SolrStream.close(SolrStream.java:143)
at 
org.apache.solr.client.solrj.io.stream.CloudSolrStream.close(CloudSolrStream.java:348)
at 
org.apache.solr.client.solrj.io.stream.ExceptionStream.close(ExceptionStream.java:78)
at 
org.apache.solr.client.solrj.io.stream.StreamingTest.getTuple(StreamingTest.java:1158)
at 
org.apache.solr.client.solrj.io.stream.StreamingTest.testExceptionStream(StreamingTest.java:511)
at 
org.apache.solr.client.solrj.io.stream.StreamingTest.streamTests(StreamingTest.java:1114)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.j

[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_60-ea-b21) - Build # 13351 - Failure!

2015-07-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/13351/
Java: 32bit/jdk1.8.0_60-ea-b21 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.HttpPartitionTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [TransactionLog]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [TransactionLog]
at __randomizedtesting.SeedInfo.seed([4C6C4C0E5C0CB721]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:235)
at sun.reflect.GeneratedMethodAccessor42.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:799)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 10825 lines...]
   [junit4] Suite: org.apache.solr.cloud.HttpPartitionTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.HttpPartitionTest_4C6C4C0E5C0CB721-001/init-core-data-001
   [junit4]   2> 1297025 INFO  
(SUITE-HttpPartitionTest-seed#[4C6C4C0E5C0CB721]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 1297028 INFO  
(TEST-HttpPartitionTest.test-seed#[4C6C4C0E5C0CB721]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 1297028 INFO  (Thread-3884) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 1297028 INFO  (Thread-3884) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 1297128 INFO  
(TEST-HttpPartitionTest.test-seed#[4C6C4C0E5C0CB721]) [] 
o.a.s.c.ZkTestServer start zk server on port:42745
   [junit4]   2> 1297128 INFO  
(TEST-HttpPartitionTest.test-seed#[4C6C4C0E5C0CB721]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 1297129 INFO  
(TEST-HttpPartitionTest.test-seed#[4C6C4C0E5C0CB721]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 1297131 INFO  (zkCallback-1018-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@1d13ca2 name:ZooKeeperConnection 
Watcher:127.0.0.1:42745 got event WatchedEvent state:SyncConnected type:None 
path:null path:null type:None
   [junit4]   2> 1297132 INFO  
(TEST-HttpPartitionTest.test-seed#[4C6C4C0E5C0CB721]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 1297132 INFO  
(TEST-HttpPartitionTest.test-seed#[4C6C4C0E5C0CB721]) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2> 1297132 INFO  
(TEST-HttpPartitionTest.test-seed#[4C6C4C0E5C0CB721]) [] 
o.a.s.c.c.SolrZkClient makePath: /solr
   [jun

[JENKINS] Lucene-Artifacts-5.x - Build # 897 - Failure

2015-07-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Artifacts-5.x/897/

No tests ran.

Build Log:
[...truncated 12698 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Artifacts-5.x/lucene/build.xml:391: 
Can't get http://people.apache.org/keys/group/lucene.asc to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Artifacts-5.x/lucene/dist/KEYS

Total time: 4 minutes 18 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Sending artifact delta relative to Lucene-Artifacts-5.x #896
Archived 13 artifacts
Archive block size is 32768
Received 1229 blocks and 138867860 bytes
Compression is 22.5%
Took 39 sec
Publishing Javadoc
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-5.x-Windows (64bit/jdk1.8.0_45) - Build # 4875 - Failure!

2015-07-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4875/
Java: 64bit/jdk1.8.0_45 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.core.TestArbitraryIndexDir.testLoadNewIndexDir

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([5928B6050CC50620:B0720D3D925C9688]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:770)
at 
org.apache.solr.core.TestArbitraryIndexDir.testLoadNewIndexDir(TestArbitraryIndexDir.java:128)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: xpath=*[count(//doc)=1]
xml response was: 

00


request was:q=id:2&qt=standard&start=0&rows=20&version=2.2
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:763)
... 40 more




Build Log:
[...truncated 10185 lines...]
   [junit4] Suite:

[jira] [Commented] (LUCENE-6661) Allow queries to opt out of caching

2015-07-06 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615703#comment-14615703
 ] 

Adrien Grand commented on LUCENE-6661:
--

One issue I have with marker interfaces is that they do not support wrapping. 
E.g. if you put such a query in a BooleanQuery, then the BooleanQuery would be 
considered cacheable even though it should not be cached either.

One way to work around this issue would be to make a query that is never equal 
to any other query instance but itself, e.g.:

{code}
import java.io.IOException;
import java.util.Objects;

import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.Weight;

class MyQuery extends Query {

  private Object identity = null;

  @Override
  public boolean equals(Object o) {
    if (super.equals(o) == false) {
      return false;
    }
    MyQuery that = (MyQuery) o;
    return identity == that.identity;
  }

  @Override
  public int hashCode() {
    return 31 * super.hashCode() + Objects.hashCode(identity);
  }

  @Override
  public Weight createWeight(IndexSearcher searcher, boolean needsScores) throws IOException {
    // create a clone that will be equal to no other query,
    // given that Weight.getQuery() is used as the cache key
    MyQuery weightQuery = (MyQuery) clone();
    weightQuery.identity = new Object();
    return new Weight(weightQuery) {
      // weight impl (the anonymous subclass would still need the abstract Weight methods)
    };
  }

}
{code}

Given that the cache only caches queries that are reused, such a query will 
never be cached.

> Allow queries to opt out of caching
> ---
>
> Key: LUCENE-6661
> URL: https://issues.apache.org/jira/browse/LUCENE-6661
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 5.2
>Reporter: Terry Smith
>Priority: Minor
> Attachments: LUCENE-6661.patch
>
>
> Some queries have out-of-band dependencies that make them incompatible with 
> caching; it'd be great if they could opt out of the new fancy query/filter 
> cache in IndexSearcher.
> This affects DrillSidewaysQuery and any user-provided custom Query 
> implementations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6661) Allow queries to opt out of caching

2015-07-06 Thread Terry Smith (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Terry Smith updated LUCENE-6661:

Attachment: LUCENE-6661.patch

Rather than adding a new method to Query/Weight for this feature, I've added a 
small marker interface and an instanceof check to prototype it.

If this is of interest, we should decide whether Query, Weight, or both should 
implement this interface to disable caching.
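
For illustration, a minimal sketch of what a marker interface plus an instanceof 
check could look like (the names below are hypothetical and not taken from the 
attached patch):

{code}
import org.apache.lucene.search.Query;

/** Hypothetical marker interface; the patch may use a different name. */
interface UncacheableQuery {
}

final class CacheGuardExample {
  /** The kind of instanceof check the cache (or caching policy) could perform. */
  static boolean shouldCache(Query query) {
    return (query instanceof UncacheableQuery) == false;
  }
}
{code}

Whether the check belongs on the Query, on the Weight, or on both is exactly the 
open question above.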


> Allow queries to opt out of caching
> ---
>
> Key: LUCENE-6661
> URL: https://issues.apache.org/jira/browse/LUCENE-6661
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 5.2
>Reporter: Terry Smith
>Priority: Minor
> Attachments: LUCENE-6661.patch
>
>
> Some queries have out-of-band dependencies that make them incompatible with 
> caching; it'd be great if they could opt out of the new fancy query/filter 
> cache in IndexSearcher.
> This affects DrillSidewaysQuery and any user-provided custom Query 
> implementations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6661) Allow queries to opt out of caching

2015-07-06 Thread Terry Smith (JIRA)
Terry Smith created LUCENE-6661:
---

 Summary: Allow queries to opt out of caching
 Key: LUCENE-6661
 URL: https://issues.apache.org/jira/browse/LUCENE-6661
 Project: Lucene - Core
  Issue Type: Improvement
Affects Versions: 5.2
Reporter: Terry Smith
Priority: Minor


Some queries have out-of-band dependencies that make them incompatible with 
caching; it'd be great if they could opt out of the new fancy query/filter 
cache in IndexSearcher.

This affects DrillSidewaysQuery and any user-provided custom Query 
implementations.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6563) MockFileSystemTestCase.testURI should be improved to handle cases where OS/JVM cannot create non-ASCII filenames

2015-07-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615649#comment-14615649
 ] 

Uwe Schindler commented on LUCENE-6563:
---

Fine, thanks. Waiting for patch!

> MockFileSystemTestCase.testURI should be improved to handle cases where 
> OS/JVM cannot create non-ASCII filenames
> 
>
> Key: LUCENE-6563
> URL: https://issues.apache.org/jira/browse/LUCENE-6563
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Christine Poerschke
>Assignee: Dawid Weiss
>Priority: Minor
>
> {{ant test -Dtestcase=TestVerboseFS -Dtests.method=testURI 
> -Dtests.file.encoding=UTF-8}} fails (for example) with 'Oracle Corporation 
> 1.8.0_45 (64-bit)' when the default {{sun.jnu.encoding}} system property is 
> (for example) {{ANSI_X3.4-1968}}
> [details to follow]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7632) Change the ExtractingRequestHandler to use Tika-Server

2015-07-06 Thread Chris A. Mattmann (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615559#comment-14615559
 ] 

Chris A. Mattmann commented on SOLR-7632:
-

Thanks, I haven't lost track - I was just thinking of this today! :-)

Hopefully I will have a PR that I can submit in the next few days.
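
As a rough illustration of the approach the issue describes, here is a hedged 
sketch of calling a Tika JAXRS server over HTTP and reading back the extracted 
plain text (the host/port and the use of the standard {{/tika}} PUT endpoint are 
assumptions, not details from the pending PR; in the real handler the extracted 
text would then go into a SolrInputDocument):

{code}
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class TikaServerClientSketch {
  public static void main(String[] args) throws Exception {
    // Assumed tika-server location; a real handler would read this from its configuration.
    URL tika = new URL("http://localhost:9998/tika");
    Path doc = Paths.get(args[0]);

    HttpURLConnection conn = (HttpURLConnection) tika.openConnection();
    conn.setRequestMethod("PUT");
    conn.setDoOutput(true);
    conn.setRequestProperty("Accept", "text/plain"); // ask tika-server for extracted text

    try (OutputStream out = conn.getOutputStream()) {
      Files.copy(doc, out); // stream the raw document to the server
    }

    try (InputStream in = conn.getInputStream()) {
      ByteArrayOutputStream buf = new ByteArrayOutputStream();
      byte[] b = new byte[8192];
      for (int n; (n = in.read(b)) != -1; ) {
        buf.write(b, 0, n);
      }
      // This is the text that would be indexed instead of running Tika in-process.
      System.out.println(new String(buf.toByteArray(), StandardCharsets.UTF_8));
    }
  }
}
{code}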

> Change the ExtractingRequestHandler to use Tika-Server
> --
>
> Key: SOLR-7632
> URL: https://issues.apache.org/jira/browse/SOLR-7632
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - Solr Cell (Tika extraction)
>Reporter: Chris A. Mattmann
>  Labels: memex
>
> It's a pain to upgrade Tika's jars every time we release, and if Tika fails it 
> messes up the ExtractingRequestHandler (e.g., the document type caused Tika to 
> fail, etc.). A more reliable, separate, and easier-to-deploy version of the 
> ExtractingRequestHandler would make a network call from the Solr server side to 
> the Tika JAXRS server, get the results, and then index the information that way. 
> I have a patch in the works from the DARPA Memex project and I hope to post it 
> soon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7632) Change the ExtractingRequestHandler to use Tika-Server

2015-07-06 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615556#comment-14615556
 ] 

Alexandre Rafalovitch commented on SOLR-7632:
-

Just wanted to follow up on this. What's the GitHub repo for this? It is a good 
idea, so it would be nice not to lose track of it.

> Change the ExtractingRequestHandler to use Tika-Server
> --
>
> Key: SOLR-7632
> URL: https://issues.apache.org/jira/browse/SOLR-7632
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - Solr Cell (Tika extraction)
>Reporter: Chris A. Mattmann
>  Labels: memex
>
> It's a pain to upgrade Tika's jars every time we release, and if Tika fails it 
> messes up the ExtractingRequestHandler (e.g., the document type caused Tika to 
> fail, etc.). A more reliable, separate, and easier-to-deploy version of the 
> ExtractingRequestHandler would make a network call from the Solr server side to 
> the Tika JAXRS server, get the results, and then index the information that way. 
> I have a patch in the works from the DARPA Memex project and I hope to post it 
> soon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7761) Adding functionality to FunctionValues to support filling external MutableValues and having multiple ValueFillers.

2015-07-06 Thread Houston Putman (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Houston Putman updated SOLR-7761:
-
Attachment: SOLR-7761.patch

> Adding functionality to FunctionValues to support filling external 
> MutableValues and having multiple ValueFillers.
> --
>
> Key: SOLR-7761
> URL: https://issues.apache.org/jira/browse/SOLR-7761
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 4.10.4
>Reporter: Houston Putman
>Priority: Minor
>  Labels: patch
> Attachments: SOLR-7761.patch
>
>
> This is mostly a Lucene change that affects some Solr code, so I made the 
> issue here. If the issue needs to also be made in Lucene, that can be done. 
> Overall this adds the functionality to FunctionValues so that they can fill a 
> given MutableValue. This allows functions that have an input and output of 
> the same type, like IF, to have generic ValueSources without the need for 
> individual sources for every type. This change also gives the ability to make 
> ValueFillers for given MutableValues. Therefore MutableValues don't need to 
> be created for every ValueFiller and can be re-used. 
> Originally this change was made in order to increase performance by recycling 
> MutableValues, so that one could keep track of a MutableValue and fill it 
> without ever changing the reference.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7761) Adding functionality to FunctionValues to support filling external MutableValues and having multiple ValueFillers.

2015-07-06 Thread Houston Putman (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Houston Putman updated SOLR-7761:
-
Description: 
This is mostly a Lucene change that affects some Solr code, so I made the issue 
here. If the issue needs to also be made in Lucene, that can be done. 

Overall this adds the functionality to FunctionValues so that they can fill a 
given MutableValue. This allows functions that have an input and output of the 
same type, like IF, to have generic ValueSources without the need for 
individual sources for every type. This change also gives the ability to make 
ValueFillers for given MutableValues. Therefore MutableValues don't need to be 
created for every ValueFiller and can be re-used. 

Originally this change was made in order to increase performance by recycling 
MutableValues, so that one could keep track of a MutableValue and fill it 
without ever changing the reference.

  was:
This is mostly a Lucene change that affects some Solr code, so I made the issue 
here. If the issue needs to also be made in Lucene, that can be done. 

Overall this adds the functionality to FunctionValues so that they can fill a 
given MutableValue. This allows functions that have an input and output of the 
same type, like IF, to have generic ValueSources without the need for 
individual sources for every type. This change also gives the ability to make 
ValueFillers for given MutableValues. Therefore MutableValues don't need to be 
created for every ValueFiller and can be re-used. 


> Adding functionality to FunctionValues to support filling external 
> MutableValues and having multiple ValueFillers.
> --
>
> Key: SOLR-7761
> URL: https://issues.apache.org/jira/browse/SOLR-7761
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 4.10.4
>Reporter: Houston Putman
>Priority: Minor
>  Labels: patch
>
> This is mostly a Lucene change that affects some Solr code, so I made the 
> issue here. If the issue needs to also be made in Lucene, that can be done. 
> Overall this adds the functionality to FunctionValues so that they can fill a 
> given MutableValue. This allows functions that have an input and output of 
> the same type, like IF, to have generic ValueSources without the need for 
> individual sources for every type. This change also gives the ability to make 
> ValueFillers for given MutableValues. Therefore MutableValues don't need to 
> be created for every ValueFiller and can be re-used. 
> Originally this change was made in order to increase performance by recycling 
> MutableValues, so that one could keep track of a MutableValue and fill it 
> without ever changing the reference.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7761) Adding functionality to FunctionValues to support filling external MutableValues and having multiple ValueFillers.

2015-07-06 Thread Houston Putman (JIRA)
Houston Putman created SOLR-7761:


 Summary: Adding functionality to FunctionValues to support filling 
external MutableValues and having multiple ValueFillers.
 Key: SOLR-7761
 URL: https://issues.apache.org/jira/browse/SOLR-7761
 Project: Solr
  Issue Type: Improvement
Affects Versions: 4.10.4
Reporter: Houston Putman
Priority: Minor


This is mostly a Lucene change that affects some Solr code, so I made the issue 
here. If the issue needs to also be made in Lucene, that can be done. 

Overall this adds the functionality to FunctionValues so that they can fill a 
given MutableValue. This allows functions that have an input and output of the 
same type, like IF, to have generic ValueSources without the need for 
individual sources for every type. This change also gives the ability to make 
ValueFillers for given MutableValues. Therefore MutableValues don't need to be 
created for every ValueFiller and can be re-used. 
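
As a rough sketch of the kind of API the description talks about, here is one way 
a caller-owned MutableValue could be filled from an existing FunctionValues (the 
helper below is illustrative; the method name and placement in the attached patch 
may differ):

{code}
import org.apache.lucene.queries.function.FunctionValues;
import org.apache.lucene.util.mutable.MutableValue;

public final class ExternalFillSketch {
  /**
   * Fills a caller-owned MutableValue for the given doc by delegating to the
   * existing ValueFiller and copying its internal value into the external one.
   */
  public static void fillExternal(FunctionValues values, int doc, MutableValue external) {
    FunctionValues.ValueFiller filler = values.getValueFiller();
    filler.fillValue(doc);            // updates the filler's internal MutableValue
    external.copy(filler.getValue()); // copy into the value the caller keeps a reference to
  }
}
{code}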



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6563) MockFileSystemTestCase.testURI should be improved to handle cases where OS/JVM cannot create non-ASCII filenames

2015-07-06 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615509#comment-14615509
 ] 

Christine Poerschke commented on LUCENE-6563:
-

The {{boolean tryResolve}} flag was aiming to preserve the existing logic, i.e. 
not catching any {{InvalidPathException}} that {{dir.resolve("file1");}} might 
throw. Happy to remove both it and the existing {{Charset.defaultCharset()}} in 
favour of just try/catch-ing on the resolve.
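
A minimal sketch of the try/catch-on-resolve idea being discussed (the filename 
literal and the assume message are illustrative, not the exact test code):

{code}
import java.nio.file.InvalidPathException;
import java.nio.file.Path;

// inside the test method, with 'dir' being the Path under test:
Path file = null;
try {
  file = dir.resolve("file\u4e2d\u56fd1"); // some non-ASCII filename
} catch (InvalidPathException e) {
  // OS/JVM (e.g. sun.jnu.encoding=ANSI_X3.4-1968) cannot represent this name: skip the test.
  assumeNoException("filesystem cannot create non-ASCII filenames", e);
}
{code}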

> MockFileSystemTestCase.testURI should be improved to handle cases where 
> OS/JVM cannot create non-ASCII filenames
> 
>
> Key: LUCENE-6563
> URL: https://issues.apache.org/jira/browse/LUCENE-6563
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Christine Poerschke
>Assignee: Dawid Weiss
>Priority: Minor
>
> {{ant test -Dtestcase=TestVerboseFS -Dtests.method=testURI 
> -Dtests.file.encoding=UTF-8}} fails (for example) with 'Oracle Corporation 
> 1.8.0_45 (64-bit)' when the default {{sun.jnu.encoding}} system property is 
> (for example) {{ANSI_X3.4-1968}}
> [details to follow]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6563) MockFileSystemTestCase.testURI should be improved to handle cases where OS/JVM cannot create non-ASCII filenames

2015-07-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615488#comment-14615488
 ] 

Uwe Schindler commented on LUCENE-6563:
---

Also, I think we should remove the true/false parameter. ASCII should always 
pass, so why add a condition?

> MockFileSystemTestCase.testURI should be improved to handle cases where 
> OS/JVM cannot create non-ASCII filenames
> 
>
> Key: LUCENE-6563
> URL: https://issues.apache.org/jira/browse/LUCENE-6563
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Christine Poerschke
>Assignee: Dawid Weiss
>Priority: Minor
>
> {{ant test -Dtestcase=TestVerboseFS -Dtests.method=testURI 
> -Dtests.file.encoding=UTF-8}} fails (for example) with 'Oracle Corporation 
> 1.8.0_45 (64-bit)' when the default {{sun.jnu.encoding}} system property is 
> (for example) {{ANSI_X3.4-1968}}
> [details to follow]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6563) MockFileSystemTestCase.testURI should be improved to handle cases where OS/JVM cannot create non-ASCII filenames

2015-07-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615487#comment-14615487
 ] 

Uwe Schindler commented on LUCENE-6563:
---

bq. Don't want to be picky, but Charset.defaultCharset() isn't exactly what's 
used for filename encoding... most of the time it will be the same thing though 
so I think it's still an improvement.

This is why I said:

bq. ...but I think with that we can remove the assume for Chinese - because it's 
subsumed by the assumeNoException!?

So I think we should just put the resolve into try/catch and, if this fails, 
cancel the test.

> MockFileSystemTestCase.testURI should be improved to handle cases where 
> OS/JVM cannot create non-ASCII filenames
> 
>
> Key: LUCENE-6563
> URL: https://issues.apache.org/jira/browse/LUCENE-6563
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Christine Poerschke
>Assignee: Dawid Weiss
>Priority: Minor
>
> {{ant test -Dtestcase=TestVerboseFS -Dtests.method=testURI 
> -Dtests.file.encoding=UTF-8}} fails (for example) with 'Oracle Corporation 
> 1.8.0_45 (64-bit)' when the default {{sun.jnu.encoding}} system property is 
> (for example) {{ANSI_X3.4-1968}}
> [details to follow]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6563) MockFileSystemTestCase.testURI should be improved to handle cases where OS/JVM cannot create non-ASCII filenames

2015-07-06 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615483#comment-14615483
 ] 

Dawid Weiss commented on LUCENE-6563:
-

Don't want to be picky, but {{Charset.defaultCharset()}} isn't exactly what's 
used for filename encoding... most of the time it will be the same thing though 
so I think it's still an improvement.

For the record, all of this charset-to-byte related stuff is a legacy 
headache... check out the comments in OpenJDK if you're interested.
{code}
/* On windows the system locale may be different than the
 * user locale. This is an unsupported configuration, [...]
{code}



> MockFileSystemTestCase.testURI should be improved to handle cases where 
> OS/JVM cannot create non-ASCII filenames
> 
>
> Key: LUCENE-6563
> URL: https://issues.apache.org/jira/browse/LUCENE-6563
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Christine Poerschke
>Assignee: Dawid Weiss
>Priority: Minor
>
> {{ant test -Dtestcase=TestVerboseFS -Dtests.method=testURI 
> -Dtests.file.encoding=UTF-8}} fails (for example) with 'Oracle Corporation 
> 1.8.0_45 (64-bit)' when the default {{sun.jnu.encoding}} system property is 
> (for example) {{ANSI_X3.4-1968}}
> [details to follow]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7760) Fix method and field visibility for UIMAUpdateRequestProcessor and SolrUIMAConfiguration

2015-07-06 Thread Aaron LaBella (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron LaBella updated SOLR-7760:

Description: The methods in 
{{solr/contrib/uima/src/java/org/apache/solr/uima/processor/SolrUIMAConfiguration.java}}
 are not public and they need to be for other code to be able to make use of 
the configuration data, i.e. mapped fields. Likewise, 
{{solr/contrib/uima/src/java/org/apache/solr/uima/processor/UIMAUpdateRequestProcessor.java}}
 does not have an accessor for the SolrUIMAConfiguration object  (was: The 
methods in 
solr/contrib/uima/src/java/org/apache/solr/uima/processor/SolrUIMAConfiguration.java
 are not public and they need to be for other code to be able to make use of 
the configuration data, ie: mapped fields.   Likewise, 
solr/contrib/uima/src/java/org/apache/solr/uima/processor/UIMAUpdateRequestProcessor.java
 does not have an accessor for the SolrUIMAConfiguration object)

> Fix method and field visibility for UIMAUpdateRequestProcessor and 
> SolrUIMAConfiguration
> 
>
> Key: SOLR-7760
> URL: https://issues.apache.org/jira/browse/SOLR-7760
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - UIMA
>Affects Versions: 5x
>Reporter: Aaron LaBella
>Priority: Critical
> Fix For: 5.3
>
> Attachments: SOLR-7760.patch
>
>
> The methods in 
> {{solr/contrib/uima/src/java/org/apache/solr/uima/processor/SolrUIMAConfiguration.java}}
>  are not public and they need to be for other code to be able to make use of 
> the configuration data, ie: mapped fields.   Likewise, 
> {{solr/contrib/uima/src/java/org/apache/solr/uima/processor/UIMAUpdateRequestProcessor.java}}
>  does not have an accessor for the SolrUIMAConfiguration object



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6365) Optimized iteration of finite strings

2015-07-06 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss resolved LUCENE-6365.
-
Resolution: Fixed

> Optimized iteration of finite strings
> -
>
> Key: LUCENE-6365
> URL: https://issues.apache.org/jira/browse/LUCENE-6365
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/other
>Affects Versions: 5.0
>Reporter: Markus Heiden
>Priority: Minor
>  Labels: patch, performance
> Fix For: 5.3, Trunk
>
> Attachments: FiniteStrings_noreuse.patch, FiniteStrings_reuse.patch, 
> LUCENE-6365.patch
>
>
> Replaced Operations.getFiniteStrings() by an optimized FiniteStringIterator.
> Benefits:
> Avoid huge hash set of finite strings.
> Avoid massive object/array creation during processing.
> "Downside":
> Iteration order changed, so when iterating with a limit, the result may 
> differ slightly. Old: emit current node, if accept / recurse. New: recurse / 
> emit current node, if accept.
> The old method Operations.getFiniteStrings() still exists, because it eases 
> the tests. It is now implemented by use of the new FiniteStringIterator.
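
For readers unfamiliar with the new iterator, a hedged usage sketch (the class and 
method names below are assumptions based on the description; adjust to the actual 
committed code):

{code}
import org.apache.lucene.util.IntsRef;
import org.apache.lucene.util.automaton.Automata;
import org.apache.lucene.util.automaton.Automaton;
import org.apache.lucene.util.automaton.FiniteStringsIterator;

// Iterate the accepted strings one at a time instead of materializing them all in a Set:
Automaton a = Automata.makeString("abc");
FiniteStringsIterator it = new FiniteStringsIterator(a);
for (IntsRef string = it.next(); string != null; string = it.next()) {
  // 'string' may be reused on the next call, so copy it if it must be retained
}
{code}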



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7760) Fix method and field visibility for UIMAUpdateRequestProcessor and SolrUIMAConfiguration

2015-07-06 Thread Aaron LaBella (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615457#comment-14615457
 ] 

Aaron LaBella commented on SOLR-7760:
-

Can someone with write access to the Solr repository please review, apply, and 
commit the patch above? I'd like to see this in the next version of Solr. Thanks.

> Fix method and field visibility for UIMAUpdateRequestProcessor and 
> SolrUIMAConfiguration
> 
>
> Key: SOLR-7760
> URL: https://issues.apache.org/jira/browse/SOLR-7760
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - UIMA
>Affects Versions: 5x
>Reporter: Aaron LaBella
>Priority: Critical
> Fix For: 5.3
>
> Attachments: SOLR-7760.patch
>
>
> The methods in 
> solr/contrib/uima/src/java/org/apache/solr/uima/processor/SolrUIMAConfiguration.java
>  are not public and they need to be for other code to be able to make use of 
> the configuration data, ie: mapped fields.   Likewise, 
> solr/contrib/uima/src/java/org/apache/solr/uima/processor/UIMAUpdateRequestProcessor.java
>  does not have an accessor for the SolrUIMAConfiguration object



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7760) Fix method and field visibility for UIMAUpdateRequestProcessor and SolrUIMAConfiguration

2015-07-06 Thread Aaron LaBella (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron LaBella updated SOLR-7760:

Attachment: SOLR-7760.patch

patch per issue description

> Fix method and field visibility for UIMAUpdateRequestProcessor and 
> SolrUIMAConfiguration
> 
>
> Key: SOLR-7760
> URL: https://issues.apache.org/jira/browse/SOLR-7760
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - UIMA
>Affects Versions: 5x
>Reporter: Aaron LaBella
>Priority: Critical
> Fix For: 5.3
>
> Attachments: SOLR-7760.patch
>
>
> The methods in 
> solr/contrib/uima/src/java/org/apache/solr/uima/processor/SolrUIMAConfiguration.java
>  are not public and they need to be for other code to be able to make use of 
> the configuration data, ie: mapped fields.   Likewise, 
> solr/contrib/uima/src/java/org/apache/solr/uima/processor/UIMAUpdateRequestProcessor.java
>  does not have an accessor for the SolrUIMAConfiguration object



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-6365) Optimized iteration of finite strings

2015-07-06 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615453#comment-14615453
 ] 

Dawid Weiss edited comment on LUCENE-6365 at 7/6/15 6:38 PM:
-

Thanks Mike, thanks Markus.


was (Author: dweiss):
Thanks Mike.

> Optimized iteration of finite strings
> -
>
> Key: LUCENE-6365
> URL: https://issues.apache.org/jira/browse/LUCENE-6365
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/other
>Affects Versions: 5.0
>Reporter: Markus Heiden
>Priority: Minor
>  Labels: patch, performance
> Fix For: 5.3, Trunk
>
> Attachments: FiniteStrings_noreuse.patch, FiniteStrings_reuse.patch, 
> LUCENE-6365.patch
>
>
> Replaced Operations.getFiniteStrings() by an optimized FiniteStringIterator.
> Benefits:
> Avoid huge hash set of finite strings.
> Avoid massive object/array creation during processing.
> "Downside":
> Iteration order changed, so when iterating with a limit, the result may 
> differ slightly. Old: emit current node, if accept / recurse. New: recurse / 
> emit current node, if accept.
> The old method Operations.getFiniteStrings() still exists, because it eases 
> the tests. It is now implemented by use of the new FiniteStringIterator.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6365) Optimized iteration of finite strings

2015-07-06 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615453#comment-14615453
 ] 

Dawid Weiss commented on LUCENE-6365:
-

Thanks Mike.

> Optimized iteration of finite strings
> -
>
> Key: LUCENE-6365
> URL: https://issues.apache.org/jira/browse/LUCENE-6365
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/other
>Affects Versions: 5.0
>Reporter: Markus Heiden
>Priority: Minor
>  Labels: patch, performance
> Fix For: 5.3, Trunk
>
> Attachments: FiniteStrings_noreuse.patch, FiniteStrings_reuse.patch, 
> LUCENE-6365.patch
>
>
> Replaced Operations.getFiniteStrings() by an optimized FiniteStringIterator.
> Benefits:
> Avoid huge hash set of finite strings.
> Avoid massive object/array creation during processing.
> "Downside":
> Iteration order changed, so when iterating with a limit, the result may 
> differ slightly. Old: emit current node, if accept / recurse. New: recurse / 
> emit current node, if accept.
> The old method Operations.getFiniteStrings() still exists, because it eases 
> the tests. It is now implemented by use of the new FiniteStringIterator.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7760) Fix method and field visibility for UIMAUpdateRequestProcessor and SolrUIMAConfiguration

2015-07-06 Thread Aaron LaBella (JIRA)
Aaron LaBella created SOLR-7760:
---

 Summary: Fix method and field visibility for 
UIMAUpdateRequestProcessor and SolrUIMAConfiguration
 Key: SOLR-7760
 URL: https://issues.apache.org/jira/browse/SOLR-7760
 Project: Solr
  Issue Type: Improvement
  Components: contrib - UIMA
Affects Versions: 5x
Reporter: Aaron LaBella
Priority: Critical
 Fix For: 5.3


The methods in 
solr/contrib/uima/src/java/org/apache/solr/uima/processor/SolrUIMAConfiguration.java
 are not public and they need to be for other code to be able to make use of 
the configuration data, i.e. mapped fields. Likewise, 
solr/contrib/uima/src/java/org/apache/solr/uima/processor/UIMAUpdateRequestProcessor.java
 does not have an accessor for the SolrUIMAConfiguration object
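
A minimal illustration of the kind of accessor the description asks for (the field 
and method names are guesses, not taken from the attached patch):

{code}
// In UIMAUpdateRequestProcessor (sketch; the actual field name in the class may differ):
private SolrUIMAConfiguration solrUIMAConfiguration;

public SolrUIMAConfiguration getConfiguration() {
  return solrUIMAConfiguration;
}
{code}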



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1481: POMs out of sync

2015-07-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1481/

No tests ran.

Build Log:
[...truncated 27304 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:542: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:193: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/lucene/build.xml:412:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/lucene/common-build.xml:2156:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/lucene/common-build.xml:1650:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/lucene/common-build.xml:570:
 Error deploying artifact 'org.apache.lucene:lucene-sandbox:jar': Error 
deploying artifact: Failed to transfer file: 
https://repository.apache.org/content/repositories/snapshots/org/apache/lucene/lucene-sandbox/6.0.0-SNAPSHOT/lucene-sandbox-6.0.0-20150706.182430-263-javadoc.jar.
 Return code is: 502

Total time: 49 minutes 49 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Resolved] (LUCENE-6639) LRUQueryCache.CachingWrapperWeight not calling policy.onUse() if the first scorer is skipped

2015-07-06 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-6639.
--
   Resolution: Fixed
Fix Version/s: 5.3

> LRUQueryCache.CachingWrapperWeight not calling policy.onUse() if the first 
> scorer is skipped
> 
>
> Key: LUCENE-6639
> URL: https://issues.apache.org/jira/browse/LUCENE-6639
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 5.3
>Reporter: Terry Smith
>Priority: Minor
> Fix For: 5.3
>
> Attachments: LUCENE-6639.patch
>
>
> The method 
> {{org.apache.lucene.search.LRUQueryCache.CachingWrapperWeight.scorer(LeafReaderContext)}}
>  starts with
> {code}
> if (context.ord == 0) {
> policy.onUse(getQuery());
> }
> {code}
> which can result in a missed call for queries that return a null scorer for 
> the first segment.
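
A hedged sketch of one way to address this, in line with "consider a query as used 
on the first time a Scorer is pulled": track first use explicitly instead of keying 
it on {{context.ord == 0}}. The AtomicBoolean approach below is illustrative, not 
necessarily the committed change, and 'in' is assumed to be the wrapped Weight:

{code}
import java.io.IOException;
import java.util.concurrent.atomic.AtomicBoolean;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.search.Scorer;

// inside CachingWrapperWeight:
private final AtomicBoolean used = new AtomicBoolean(false);

@Override
public Scorer scorer(LeafReaderContext context) throws IOException {
  if (used.compareAndSet(false, true)) {
    policy.onUse(getQuery()); // now called even if the first segment yields a null scorer
  }
  // ... rest of the existing caching logic ...
  return in.scorer(context);
}
{code}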



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6639) LRUQueryCache.CachingWrapperWeight not calling policy.onUse() if the first scorer is skipped

2015-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615386#comment-14615386
 ] 

ASF subversion and git services commented on LUCENE-6639:
-

Commit 1689470 from [~jpountz] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1689470 ]

LUCENE-6639: Make LRUQueryCache consider a query as used on the first time a 
Scorer is pulled.

> LRUQueryCache.CachingWrapperWeight not calling policy.onUse() if the first 
> scorer is skipped
> 
>
> Key: LUCENE-6639
> URL: https://issues.apache.org/jira/browse/LUCENE-6639
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 5.3
>Reporter: Terry Smith
>Priority: Minor
> Fix For: 5.3
>
> Attachments: LUCENE-6639.patch
>
>
> The method 
> {{org.apache.lucene.search.LRUQueryCache.CachingWrapperWeight.scorer(LeafReaderContext)}}
>  starts with
> {code}
> if (context.ord == 0) {
> policy.onUse(getQuery());
> }
> {code}
> which can result in a missed call for queries that return a null scorer for 
> the first segment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6639) LRUQueryCache.CachingWrapperWeight not calling policy.onUse() if the first scorer is skipped

2015-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615359#comment-14615359
 ] 

ASF subversion and git services commented on LUCENE-6639:
-

Commit 1689464 from [~jpountz] in branch 'dev/trunk'
[ https://svn.apache.org/r1689464 ]

LUCENE-6639: Make LRUQueryCache consider a query as used on the first time a 
Scorer is pulled.

> LRUQueryCache.CachingWrapperWeight not calling policy.onUse() if the first 
> scorer is skipped
> 
>
> Key: LUCENE-6639
> URL: https://issues.apache.org/jira/browse/LUCENE-6639
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 5.3
>Reporter: Terry Smith
>Priority: Minor
> Attachments: LUCENE-6639.patch
>
>
> The method 
> {{org.apache.lucene.search.LRUQueryCache.CachingWrapperWeight.scorer(LeafReaderContext)}}
>  starts with
> {code}
> if (context.ord == 0) {
> policy.onUse(getQuery());
> }
> {code}
> which can result in a missed call for queries that return a null scorer for 
> the first segment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 894 - Still Failing

2015-07-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/894/

2 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=13481, name=collection4, 
state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=13481, name=collection4, state=RUNNABLE, 
group=TGRP-CollectionsAPIDistributedZkTest]
Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:35004/zges/d: collection already exists: 
awholynewstresscollection_collection4_1
at __randomizedtesting.SeedInfo.seed([2ACE3E258699A390]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:560)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:376)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:328)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1086)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:856)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:799)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1572)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1593)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:887)


FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestReplicateAfterCoreReload

Error Message:
expected:<[{indexVersion=1436202503821,generation=2,filelist=[_e3.cfe, _e3.cfs, 
_e3.si, _e4.fdt, _e4.fdx, _e4.fnm, _e4.nvd, _e4.nvm, _e4.si, 
_e4_FSTOrd50_0.doc, _e4_FSTOrd50_0.tbk, _e4_FSTOrd50_0.tix, _e6.fdt, _e6.fdx, 
_e6.fnm, _e6.nvd, _e6.nvm, _e6.si, _e6_FSTOrd50_0.doc, _e6_FSTOrd50_0.tbk, 
_e6_FSTOrd50_0.tix, _eb.cfe, _eb.cfs, _eb.si, _ec.fdt, _ec.fdx, _ec.fnm, 
_ec.nvd, _ec.nvm, _ec.si, _ec_FSTOrd50_0.doc, _ec_FSTOrd50_0.tbk, 
_ec_FSTOrd50_0.tix, _ee.fdt, _ee.fdx, _ee.fnm, _ee.nvd, _ee.nvm, _ee.si, 
_ee_FSTOrd50_0.doc, _ee_FSTOrd50_0.tbk, _ee_FSTOrd50_0.tix, _ef.fdt, _ef.fdx, 
_ef.fnm, _ef.nvd, _ef.nvm, _ef.si, _ef_FSTOrd50_0.doc, _ef_FSTOrd50_0.tbk, 
_ef_FSTOrd50_0.tix, _eg.cfe, _eg.cfs, _eg.si, _eo.fdt, _eo.fdx, _eo.fnm, 
_eo.nvd, _eo.nvm, _eo.si, _eo_FSTOrd50_0.doc, _eo_FSTOrd50_0.tbk, 
_eo_FSTOrd50_0.tix, _ep.fdt, _ep.fdx, _ep.fnm, _ep.nvd, _ep.nvm, _ep.si, 
_ep_FSTOrd50_0.doc, _ep_FSTOrd50_0.tbk, _ep_FSTOrd50_0.tix, segments_2]}]> but 
was:<[{indexVersion=1436202503821,generation=2,filelist=[_e3.cfe, _e3.cfs, 
_e3.si, _e4.fdt, _e4.fdx, _e4.fnm, _e4.nvd, _e4.nvm, _e4.si, 
_e4_FSTOrd50_0.doc, _e4_FSTOrd50_0.tbk, _e4_FSTOrd50_0.tix, _e6.fdt, _e6.fdx, 
_e6.fnm, _e6.nvd, _e6.nvm, _e6.si, _e6_FSTOrd50_0.doc, _e6_FSTOrd50_0.tbk, 
_e6_FSTOrd50_0.tix, _eb.cfe, _eb.cfs, _eb.si, _ec.fdt, _ec.fdx, _ec.fnm, 
_ec.nvd, _ec.nvm, _ec.si, _ec_FSTOrd50_0.doc, _ec_FSTOrd50_0.tbk, 
_ec_FSTOrd50_0.tix, _ee.fdt, _ee.fdx, _ee.fnm, _ee.nvd, _ee.nvm, _ee.si, 
_ee_FSTOrd50_0.doc, _ee_FSTOrd50_0.tbk, _ee_FSTOrd50_0.tix, _ef.fdt, _ef.fdx, 
_ef.fnm, _ef.nvd, _ef.nvm, _ef.si, _ef_FSTOrd50_0.doc, _ef_FSTOrd50_0.tbk, 
_ef_FSTOrd50_0.tix, _eg.cfe, _eg.cfs, _eg.si, _eo.fdt, _eo.fdx, _eo.fnm, 
_eo.nvd, _eo.nvm, _eo.si, _eo_FSTOrd50_0.doc, _eo_FSTOrd50_0.tbk, 
_eo_FSTOrd50_0.tix, _ep.fdt, _ep.fdx, _ep.fnm, _ep.nvd, _ep.nvm, _ep.si, 
_ep_FSTOrd50_0.doc, _ep_FSTOrd50_0.tbk, _ep_FSTOrd50_0.tix, segments_2]}, 
{indexVersion=1436202503821,generation=3,filelist=[_ei.cfe, _ei.cfs, _ei.si, 
_eo.fdt, _eo.fdx, _eo.fnm, _eo.nvd, _eo.nvm, _eo.si, _eo_FSTOrd50_0.doc, 
_eo_FSTOrd50_0.tbk, _eo_FSTOrd50_0.tix, _ep.fdt, _ep.fdx, _ep.fnm, _ep.nvd, 
_ep.nvm, _ep.si, _ep_FSTOrd50_0.doc, _ep_FSTOrd50_0.tbk, _ep_FSTOrd50_0.tix, 
segments_3]}]>

Stack Trace:
java.lang.AssertionError: 
expected:<[{indexVersion=1436202503821,generation=2,filelist=[_e3.cfe, _e3.cfs, 
_e3.si, _e4.fdt, _e4.fdx, _e4.fnm, _e4.nvd, _e4.nvm, _e4.si, 
_e4_FSTOrd50_0.doc, _e4_FSTOrd50_0.tbk, _e4_FSTOrd50_0.tix, _e6.fdt, _e6.fdx, 
_e6.fnm, _e6.nvd, _e6.nvm, _e6.si, _e6_FSTOrd50_0.doc, _e6_FSTOrd50_0.tbk, 
_e6_FSTOrd50_0.tix, _eb.cfe, _eb.cfs, _eb.si, _ec.fdt, _ec.fdx, _ec.fnm, 
_ec.nvd, _ec.nvm, _ec.si, _ec_FSTOrd50_0.doc, _ec_FSTOrd50_0.tbk, 
_ec_FSTOrd50_0.tix, _ee.fdt, _ee.fdx, _ee.fnm, _ee.nvd, _ee.nvm, _ee.si, 
_ee_FSTOrd50_0.doc, _ee_FSTOrd50_0.tbk, _ee_FSTOrd50_0

[jira] [Resolved] (LUCENE-6649) Remove dependency of lucene/join on oal.search.Filter

2015-07-06 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-6649.
--
   Resolution: Fixed
Fix Version/s: 5.3

> Remove dependency of lucene/join on oal.search.Filter
> -
>
> Key: LUCENE-6649
> URL: https://issues.apache.org/jira/browse/LUCENE-6649
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Assignee: Adrien Grand
> Fix For: 5.3
>
> Attachments: LUCENE-6649.patch, LUCENE-6649.patch
>
>
> Similarly to other modules, lucene/join should not use Filter anymore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6649) Remove dependency of lucene/join on oal.search.Filter

2015-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615337#comment-14615337
 ] 

ASF subversion and git services commented on LUCENE-6649:
-

Commit 1689462 from [~jpountz] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1689462 ]

LUCENE-6649: Remove dependency of lucene/join on Filter.

> Remove dependency of lucene/join on oal.search.Filter
> -
>
> Key: LUCENE-6649
> URL: https://issues.apache.org/jira/browse/LUCENE-6649
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Assignee: Adrien Grand
> Fix For: 5.3
>
> Attachments: LUCENE-6649.patch, LUCENE-6649.patch
>
>
> Similarly to other modules, lucene/join should not use Filter anymore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7143) MoreLikeThis Query Parser does not handle multiple field names

2015-07-06 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615313#comment-14615313
 ] 

Anshum Gupta commented on SOLR-7143:


Should be in 5.3. I'm pretty close to committing this. Just wanted to run a few 
tests before I do that.

> MoreLikeThis Query Parser does not handle multiple field names
> --
>
> Key: SOLR-7143
> URL: https://issues.apache.org/jira/browse/SOLR-7143
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers
>Affects Versions: 5.0
>Reporter: Jens Wille
>Assignee: Anshum Gupta
> Attachments: SOLR-7143.patch, SOLR-7143.patch, SOLR-7143.patch, 
> SOLR-7143.patch, SOLR-7143.patch
>
>
> The newly introduced MoreLikeThis Query Parser (SOLR-6248) does not return 
> any results when supplied with multiple fields in the {{qf}} parameter.
> To reproduce within the techproducts example, compare:
> {code}
> curl 
> 'http://localhost:8983/solr/techproducts/select?q=%7B!mlt+qf=name%7DMA147LL/A'
> curl 
> 'http://localhost:8983/solr/techproducts/select?q=%7B!mlt+qf=features%7DMA147LL/A'
> curl 
> 'http://localhost:8983/solr/techproducts/select?q=%7B!mlt+qf=name,features%7DMA147LL/A'
> {code}
> The first two queries return 8 and 5 results, respectively. The third query 
> doesn't return any results (not even the matched document).
> In contrast, the MoreLikeThis Handler works as expected (accounting for the 
> default {{mintf}} and {{mindf}} values in SimpleMLTQParser):
> {code}
> curl 
> 'http://localhost:8983/solr/techproducts/mlt?q=id:MA147LL/A&mlt.fl=name&mlt.mintf=1&mlt.mindf=1'
> curl 
> 'http://localhost:8983/solr/techproducts/mlt?q=id:MA147LL/A&mlt.fl=features&mlt.mintf=1&mlt.mindf=1'
> curl 
> 'http://localhost:8983/solr/techproducts/mlt?q=id:MA147LL/A&mlt.fl=name,features&mlt.mintf=1&mlt.mindf=1'
> {code}
> After adding the following line to 
> {{example/techproducts/solr/techproducts/conf/solrconfig.xml}}:
> {code:language=XML}
> <requestHandler name="/mlt" class="solr.MoreLikeThisHandler" />
> {code}
> The first two queries return 7 and 4 results, respectively (excluding the 
> matched document). The third query returns 7 results, as one would expect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request: Fix misleading reference of Uninvert fie...

2015-07-06 Thread grossws
GitHub user grossws opened a pull request:

https://github.com/apache/lucene-solr/pull/183

Fix misleading reference of Uninvert field

The `docsWithField` referenced on L276 without `this` in 
`FieldCacheImpl.Uninvert#uninvert(...)` is an object field, but several lines below a 
local variable with the same name is defined and the same field is then accessed via 
`this`. It's quite misleading when reading the code.
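
For readers skimming the thread, a minimal self-contained illustration of the shadowing pattern being described (this is not the actual FieldCacheImpl code; apart from the `docsWithField` name, everything here is made up):

{code}
import java.util.BitSet;

class UninvertSketch {
    BitSet docsWithField;                          // instance field

    void uninvert(int maxDoc) {
        if (docsWithField != null) {               // resolves to the field, written without 'this.'
            docsWithField.clear();
        }
        BitSet docsWithField = new BitSet(maxDoc); // a local declared later shadows the field
        docsWithField.set(0);                      // refers to the local from here on
        this.docsWithField = docsWithField;        // the field is now only reachable via 'this.'
    }
}
{code}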

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/grossws/lucene-solr patch-1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/183.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #183


commit d4e397092fd444268bc50b063384013fa22e27c0
Author: grossws 
Date:   2015-07-06T16:58:01Z

Fix misleading reference of Uninvert field

The `docsWithField` referenced on L276 without `this` in 
`FieldCacheImpl.Uninvert#uninvert(...)` is an object field, but several lines below a 
local variable with the same name is defined and the same field is then accessed via 
`this`. It's quite misleading when reading the code.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6563) MockFileSystemTestCase.testURI should be improved to handle cases where OS/JVM cannot create non-ASCII filenames

2015-07-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615276#comment-14615276
 ] 

Uwe Schindler commented on LUCENE-6563:
---

I like this test more. Maybe [~rcmuir] can take a look, too. The assumeNoException 
is fine, but I think with that we can remove the assume for Chinese - because 
it's subsumed by the assumeNoException!?

> MockFileSystemTestCase.testURI should be improved to handle cases where 
> OS/JVM cannot create non-ASCII filenames
> 
>
> Key: LUCENE-6563
> URL: https://issues.apache.org/jira/browse/LUCENE-6563
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Christine Poerschke
>Assignee: Dawid Weiss
>Priority: Minor
>
> {{ant test -Dtestcase=TestVerboseFS -Dtests.method=testURI 
> -Dtests.file.encoding=UTF-8}} fails (for example) with 'Oracle Corporation 
> 1.8.0_45 (64-bit)' when the default {{sun.jnu.encoding}} system property is 
> (for example) {{ANSI_X3.4-1968}}
> [details to follow]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7759) DebugComponent's explain should be implemented as a distributed query

2015-07-06 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-7759:
---

 Summary: DebugComponent's explain should be implemented as a 
distributed query
 Key: SOLR-7759
 URL: https://issues.apache.org/jira/browse/SOLR-7759
 Project: Solr
  Issue Type: Bug
Reporter: Varun Thacker


Currently when we use debugQuery to see the explanation of the matched 
documents, the query fired to get the statistics for the matched documents is 
not a distributed query.

This is a problem when using distributed idf. The actual documents are being 
scored using the global stats and not the per-shard stats, but the explain will 
show us the score computed from the stats of the shard to which the 
document belongs.

We should try to implement the explain query as a distributed request so that 
the scores match the actual document scores.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7611) TestSearcherReuse failure

2015-07-06 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615268#comment-14615268
 ] 

Hoss Man commented on SOLR-7611:


For context, the original issue where this test was added (and the feature it 
was trying to test) is SOLR-5783.

> TestSearcherReuse failure
> -
>
> Key: SOLR-7611
> URL: https://issues.apache.org/jira/browse/SOLR-7611
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.2
>Reporter: Steve Rowe
> Attachments: SOLR-7611_test.patch, typescript
>
>
> {noformat}
>[junit4] FAILURE 0.94s | TestSearcherReuse.test <<<
>[junit4]> Throwable #1: java.lang.AssertionError: expected 
> same: main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(5.2.0):C3)
>  Uninverting(_2(5.2.0):c2)))}> was not: main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(5.2.0):C3)
>  Uninverting(_2(5.2.0):c2)))}>
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([F1A11DF972B907D6:79F52223DC456A2E]:0)
>[junit4]>  at 
> org.apache.solr.search.TestSearcherReuse.assertSearcherHasNotChanged(TestSearcherReuse.java:247)
>[junit4]>  at 
> org.apache.solr.search.TestSearcherReuse.test(TestSearcherReuse.java:104)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Reproduces for me on the 5.2 release branch with the following - note that 
> both {{-Dtests.multiplier=2}} and {{-Dtests.nightly=true}} are required to 
> reproduce:
> {noformat}
> ant test  -Dtestcase=TestSearcherReuse -Dtests.seed=F1A11DF972B907D6 
> -Dtests.multiplier=2 -Dtests.nightly=true
> {noformat}
> Full log:
> {noformat}
>[junit4]  says hallo! Master seed: F1A11DF972B907D6
>[junit4] Executing 1 suite with 1 JVM.
>[junit4] 
>[junit4] Started J0 PID(776@smb.local).
>[junit4] Suite: org.apache.solr.search.TestSearcherReuse
>[junit4]   2> log4j:WARN No such property [conversionPattern] in 
> org.apache.solr.util.SolrLogLayout.
>[junit4]   2> Creating dataDir: 
> /Users/sarowe/svn/lucene/dev/branches/lucene_solr_5_2/solr/build/solr-core/test/J0/temp/solr.search.TestSearcherReuse
>  F1A11DF972B907D6-002/init-core-data-001
>[junit4]   2> 889 T11 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl 
> (false) and clientAuth (false)
>[junit4]   2> 959 T11 oas.SolrTestCaseJ4.initCore initCore
>[junit4]   2> 1093 T11 oasc.SolrResourceLoader. new 
> SolrResourceLoader for directory: 
> '/Users/sarowe/svn/lucene/dev/branches/lucene_solr_5_2/solr/build/solr-core/test/J0/temp/solr.search.TestSearcherReuse
>  F1A11DF972B907D6-002/tempDir-001/collection1/'
>[junit4]   2> 1390 T11 oasc.SolrConfig.refreshRequestParams current 
> version of requestparams : -1
>[junit4]   2> 1449 T11 oasc.SolrConfig. Using Lucene MatchVersion: 
> 5.2.0
>[junit4]   2> 1551 T11 oasc.SolrConfig. Loaded SolrConfig: 
> solrconfig-managed-schema.xml
>[junit4]   2> 1563 T11 oass.ManagedIndexSchemaFactory.readSchemaLocally 
> The schema is configured as managed, but managed schema resource 
> managed-schema not found - loading non-managed schema 
> schema-id-and-version-fields-only.xml instead
>[junit4]   2> 1580 T11 oass.IndexSchema.readSchema Reading Solr Schema 
> from 
> /Users/sarowe/svn/lucene/dev/branches/lucene_solr_5_2/solr/build/solr-core/test/J0/temp/solr.search.TestSearcherReuse
>  
> F1A11DF972B907D6-002/tempDir-001/collection1/conf/schema-id-and-version-fields-only.xml
>[junit4]   2> 1594 T11 oass.IndexSchema.readSchema [null] Schema 
> name=id-and-version-fields-only
>[junit4]   2> 1676 T11 oass.IndexSchema.readSchema unique key field: id
>[junit4]   2> 1706 T11 oass.ManagedIndexSchema.persistManagedSchema 
> Upgraded to managed schema at 
> /Users/sarowe/svn/lucene/dev/branches/lucene_solr_5_2/solr/build/solr-core/test/J0/temp/solr.search.TestSearcherReuse
>  F1A11DF972B907D6-002/tempDir-001/collection1/conf/managed-schema
>[junit4]   2> 1709 T11 
> oass.ManagedIndexSchemaFactory.upgradeToManagedSchema After upgrading to 
> managed schema, renamed the non-managed schema 
> /Users/sarowe/svn/lucene/dev/branches/lucene_solr_5_2/solr/build/solr-core/test/J0/temp/solr.search.TestSearcherReuse
>  
> F1A11DF972B907D6-002/tempDir-001/collection1/conf/schema-id-and-version-fields-only.xml
>  to 
> /Users/sarowe/svn/lucene/dev/branches/lucene_solr_5_2/solr/build/solr-core/test/J0/temp/solr.search.TestSearcherReuse
>  
> F1A11DF972B907D6-002/tempDir-001/collection1/conf/schema-id-and-version-fields-only.xml.bak
>[junit4]   2> 1714 T11 oasc.SolrResourceLoader.locateSolrHome JNDI not 
> configured for solr (NoInitialContextEx)
>[junit4]   2> 1715 T11 oasc.SolrResourceLoader.locateSolrHome using system 
> property solr.solr.home: 
> /Users/sarowe/svn/lucene/dev/branches/lucene_solr_5_2/solr/

[jira] [Commented] (SOLR-7611) TestSearcherReuse failure

2015-07-06 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615266#comment-14615266
 ] 

Hoss Man commented on SOLR-7611:


Reply I sent to a jenkins fail earlier today, posting here as well for 
permanent record...

{quote}
FWIW: Steve looked in to this a bit ago and filed SOLR-7611...

https://issues.apache.org/jira/browse/SOLR-7611

...my impression at the time was that LUCENE-6505 totally invalidated the 
entire premise of the test, but i didn't spend that much time looking into 
it.  But then steve said he was able to reproduce some failures even after 
he rolled back LUCENE-6505 -- which left me more confused.

I honestly have no idea what's going on and haven't really had time to 
think about it any more.

I suspect that there may be 2 unrelated problems here that exhibit the 
same symptoms...

1) something that's been broken a while that causes some seeds to fail.

2) mike's change in LUCENE-6505 which (seems to) eliminate the point of 
the feature being tested here and as a result changed the test in a way 
that may be making #1 happen more often (ie: with more seeds)
{quote}

> TestSearcherReuse failure
> -
>
> Key: SOLR-7611
> URL: https://issues.apache.org/jira/browse/SOLR-7611
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.2
>Reporter: Steve Rowe
> Attachments: SOLR-7611_test.patch, typescript
>
>
> {noformat}
>[junit4] FAILURE 0.94s | TestSearcherReuse.test <<<
>[junit4]> Throwable #1: java.lang.AssertionError: expected 
> same: main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(5.2.0):C3)
>  Uninverting(_2(5.2.0):c2)))}> was not: main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(5.2.0):C3)
>  Uninverting(_2(5.2.0):c2)))}>
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([F1A11DF972B907D6:79F52223DC456A2E]:0)
>[junit4]>  at 
> org.apache.solr.search.TestSearcherReuse.assertSearcherHasNotChanged(TestSearcherReuse.java:247)
>[junit4]>  at 
> org.apache.solr.search.TestSearcherReuse.test(TestSearcherReuse.java:104)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Reproduces for me on the 5.2 release branch with the following - note that 
> both {{-Dtests.multiplier=2}} and {{-Dtests.nightly=true}} are required to 
> reproduce:
> {noformat}
> ant test  -Dtestcase=TestSearcherReuse -Dtests.seed=F1A11DF972B907D6 
> -Dtests.multiplier=2 -Dtests.nightly=true
> {noformat}
> Full log:
> {noformat}
>[junit4]  says hallo! Master seed: F1A11DF972B907D6
>[junit4] Executing 1 suite with 1 JVM.
>[junit4] 
>[junit4] Started J0 PID(776@smb.local).
>[junit4] Suite: org.apache.solr.search.TestSearcherReuse
>[junit4]   2> log4j:WARN No such property [conversionPattern] in 
> org.apache.solr.util.SolrLogLayout.
>[junit4]   2> Creating dataDir: 
> /Users/sarowe/svn/lucene/dev/branches/lucene_solr_5_2/solr/build/solr-core/test/J0/temp/solr.search.TestSearcherReuse
>  F1A11DF972B907D6-002/init-core-data-001
>[junit4]   2> 889 T11 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl 
> (false) and clientAuth (false)
>[junit4]   2> 959 T11 oas.SolrTestCaseJ4.initCore initCore
>[junit4]   2> 1093 T11 oasc.SolrResourceLoader. new 
> SolrResourceLoader for directory: 
> '/Users/sarowe/svn/lucene/dev/branches/lucene_solr_5_2/solr/build/solr-core/test/J0/temp/solr.search.TestSearcherReuse
>  F1A11DF972B907D6-002/tempDir-001/collection1/'
>[junit4]   2> 1390 T11 oasc.SolrConfig.refreshRequestParams current 
> version of requestparams : -1
>[junit4]   2> 1449 T11 oasc.SolrConfig. Using Lucene MatchVersion: 
> 5.2.0
>[junit4]   2> 1551 T11 oasc.SolrConfig. Loaded SolrConfig: 
> solrconfig-managed-schema.xml
>[junit4]   2> 1563 T11 oass.ManagedIndexSchemaFactory.readSchemaLocally 
> The schema is configured as managed, but managed schema resource 
> managed-schema not found - loading non-managed schema 
> schema-id-and-version-fields-only.xml instead
>[junit4]   2> 1580 T11 oass.IndexSchema.readSchema Reading Solr Schema 
> from 
> /Users/sarowe/svn/lucene/dev/branches/lucene_solr_5_2/solr/build/solr-core/test/J0/temp/solr.search.TestSearcherReuse
>  
> F1A11DF972B907D6-002/tempDir-001/collection1/conf/schema-id-and-version-fields-only.xml
>[junit4]   2> 1594 T11 oass.IndexSchema.readSchema [null] Schema 
> name=id-and-version-fields-only
>[junit4]   2> 1676 T11 oass.IndexSchema.readSchema unique key field: id
>[junit4]   2> 1706 T11 oass.ManagedIndexSchema.persistManagedSchema 
> Upgraded to managed schema at 
> /Users/sarowe/svn/lucene/dev/branches/lucene_solr_5_2/solr/build/solr-core/test/J0/temp/solr.search.TestSearcherReuse
>  F1A11DF972B907D6-002/tempDir-001/collection1/conf/managed-schema
>[junit4]   2

Re: [JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.8.0) - Build # 2431 - Failure!

2015-07-06 Thread Chris Hostetter

FWIW: Steve looked in to this a bit ago and filed SOLR-7611...

https://issues.apache.org/jira/browse/SOLR-7611

...my impression at the time was that LUCENE-6505 totally invalidated the 
entire premise of the test, but i didn't spend that much time looking into 
it.  But then Steve said he was able to reproduce some failures even after 
he rolled back LUCENE-6505 -- which left me more confused.

I honestly have no idea what's going on and haven't really had time to 
think about it any more.

I suspect that there may be 2 unrelated problems here that exhibit the 
same symptoms...


1) something that's been broken a while that causes some seeds to fail.

2) mike's change in LUCENE-6505 which (seems to) eliminate the point of 
the feature being tested here and as a result changed the test in a way 
that may be making #1 happen more often (ie: with more seeds)





: Date: Sat, 4 Jul 2015 17:50:30 -0400
: From: Michael McCandless 
: Reply-To: dev@lucene.apache.org
: To: Lucene/Solr dev 
: Subject: Re: [JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.8.0) - Build # 2431
: - Failure!
: 
: Hmm this is probably from LUCENE-6505 ... I'll dig.
: 
: Mike McCandless
: 
: http://blog.mikemccandless.com
: 
: 
: On Sat, Jul 4, 2015 at 7:42 AM, Policeman Jenkins Server
:  wrote:
: > Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2431/
: > Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC
: >
: > 1 tests failed.
: > FAILED:  org.apache.solr.search.TestSearcherReuse.test
: >
: > Error Message:
: > expected same:
 was not:
: >
: > Stack Trace:
: > java.lang.AssertionError: expected same:
 was not:
: > at 
__randomizedtesting.SeedInfo.seed([7F61C8F34E5F031B:F735F729E0A36EE3]:0)
: > at org.junit.Assert.fail(Assert.java:93)
: > at org.junit.Assert.failNotSame(Assert.java:641)
: > at org.junit.Assert.assertSame(Assert.java:580)
: > at org.junit.Assert.assertSame(Assert.java:593)
: > at 
org.apache.solr.search.TestSearcherReuse.assertSearcherHasNotChanged(TestSearcherReuse.java:247)
: > at 
org.apache.solr.search.TestSearcherReuse.test(TestSearcherReuse.java:117)
: > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
: > at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
: > at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
: > at java.lang.reflect.Method.invoke(Method.java:497)
: > at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
: > at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
: > at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
: > at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
: > at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
: > at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
: > at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
: > at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
: > at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
: > at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
: > at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
: > at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
: > at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
: > at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
: > at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
: > at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
: > at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
: > at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
: > at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
: > at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
: > at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
: > at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
: > at 
org.apache.lucene.util.TestRuleStoreCla

[jira] [Commented] (SOLR-7755) An API to edit the Basic Auth security params

2015-07-06 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615243#comment-14615243
 ] 

Noble Paul commented on SOLR-7755:
--

bq.Sure not, but this kind of an interface should only be exposed to an admin, 
not a regular "user".
An admin is a human being. I mean it should be exposed only to a well-tested 
program.

bq.In that case, do you propose that the system assumed a default/preconfigured 
admin user principal?

NO. The system will always start with an empty {{/security.json}}. In that 
case no security is enabled. We will provide users with a standard, tested startup 
{{security.json}} for each scheme. That will contain a user and a role.

bq.But in that case, most datastores (MySQL, Oracle comes to mind) have their 
own built-in user management

YES. Solr will have the ability to manage users if you use the 
BasicAuth/ZKBasedAuthc pair. If you want to use other plugins, it will be 
up to the plugin to decide what is editable and what is not.


> An API to edit the Basic Auth security params
> -
>
> Key: SOLR-7755
> URL: https://issues.apache.org/jira/browse/SOLR-7755
> Project: Solr
>  Issue Type: Sub-task
>  Components: security
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> example
> {code}
> curl http://localhost:8983/solr/admin/authorization -H 
> 'Content-type:application/json' -d '{
> "add-user" : {"name" : "tom", 
>  "role": ["admin","dev"]
>  },
> "create-permission" :{"name":"mycoll-update",
>   "before" :"some-other-permission",
>   "path":"/update/*"
>   "role":["dev","admin"]
>   }
> }'
> {code}
> Please note that the set of parameters required for a basic ZK based impl 
> will be completely different from that of a Kerberos implementation. However 
> the framework would remain the same. The end point will remain the same, 
> though



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-7755) An API to edit the Basic Auth security params

2015-07-06 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615233#comment-14615233
 ] 

Ishan Chattopadhyaya edited comment on SOLR-7755 at 7/6/15 4:01 PM:


bq. I don't think exposing ZK to users is a good/safe practice
Sure not, but this kind of an interface should only be exposed to an admin, not 
a regular "user".

bq. That is pretty simple , You start with a standard no-edit permission 
security.json. it will be a part of this ticket . Which gives the admin user 
the privilege to edit the security parameters
In that case, do you propose that the system assume a default/preconfigured 
admin user principal?


bq. {noformat}
>>Also, authc/authz plugins in an already started up Solr cluster can add 
>> watches to the /security.json in 
>> ZK to monitor changes made through such a command line tool
> NO. We want the the authc/authz plugins to just deal with security instead of 
> screwing up/editing ZK nodes
{noformat}
I meant that a plugin can just add a watch to observe changed values and not 
actually change anything in ZK. IoW, no plugin should be able to change ZK, 
but if the admin changes something from the commandline tool, these plugins can 
pick things up from the changes in security.json. 

bq. Isn't the same way it is done in all data stores? They give admin 
privileges to to the admin and he can do further edits
But in that case, most datastores (MySQL, Oracle comes to mind) have their own 
built-in user management. In case of Solr, most likely the user principals 
would already be configured using LDAP or kerberos or some external system (in 
the special case of a particular plugin, they can be in ZK too). Each plugin 
would support different operations. Instead of trying to cater to them all in a 
unified endpoint/framework, isn't it cleaner to ask the admin to edit 
/security.json (directly or using any commandline tool)? That way, the plugins 
wouldn't need to hook themselves into this API endpoint and try to parse out whatever 
is thrown at them, and would instead just need to know how to read the config section 
passed to them through the /security.json. Wdyt?


was (Author: ichattopadhyaya):
bq. I don't think exposing ZK to users is a good/safe practice
Sure not, but this kind of an interface should only be exposed to an admin, not 
a regular "user".

bq. That is pretty simple , You start with a standard no-edit permission 
security.json. it will be a part of this ticket . Which gives the admin user 
the privilege to edit the security parameters
In that case, do you propose that the system assumed a default/preconfigured 
admin user principal?

{noformat}
>>Also, authc/authz plugins in an already started up Solr cluster can add 
>> watches to the /security.json in 
>> ZK to monitor changes made through such a command line tool
> NO. We want the the authc/authz plugins to just deal with security instead of 
> screwing up/editing ZK nodes
{noformat}
I meant that a plugin can just add a watch to observe changed values and not 
actually changing anything in ZK. IoW, no plugin should be able to change ZK, 
but if the admin changes something from the commandline tool, these plugins can 
pick things up from the changes in security.json. 

bq. Isn't the same way it is done in all data stores? They give admin 
privileges to to the admin and he can do further edits
But in that case, most datastores (MySQL, Oracle comes to mind) have their own 
built-in user management. In case of Solr, most likely the user principals 
would already be configured using LDAP or kerberos or some external system (in 
the special case of a particular plugin, they can be in ZK too). Each plugin 
would support different operations. Instead of trying to cater to them all in a 
unified endpoint/framework, isn't it cleaner to ask the admin to edit 
/security.json (directly or using any commandline tool)? That way, the plugins 
wouldn't need to hook itself into this API endpoint trying to parse out things 
thrown at it, and instead just know how to the config section passed into it 
through the /security.json. Wdyt?

> An API to edit the Basic Auth security params
> -
>
> Key: SOLR-7755
> URL: https://issues.apache.org/jira/browse/SOLR-7755
> Project: Solr
>  Issue Type: Sub-task
>  Components: security
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> example
> {code}
> curl http://localhost:8983/solr/admin/authorization -H 
> 'Content-type:application/json' -d '{
> "add-user" : {"name" : "tom", 
>  "role": ["admin","dev"]
>  },
> "create-permission" :{"name":"mycoll-update",
>   "before" :"some-other-permission",
>   "path":"/update/*"
>   "role":["dev","admin"]
> 

[jira] [Commented] (SOLR-7755) An API to edit the Basic Auth security params

2015-07-06 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615233#comment-14615233
 ] 

Ishan Chattopadhyaya commented on SOLR-7755:


bq. I don't think exposing ZK to users is a good/safe practice
Sure not, but this kind of an interface should only be exposed to an admin, not 
a regular "user".

bq. That is pretty simple , You start with a standard no-edit permission 
security.json. it will be a part of this ticket . Which gives the admin user 
the privilege to edit the security parameters
In that case, do you propose that the system assume a default/preconfigured 
admin user principal?

{noformat}
>>Also, authc/authz plugins in an already started up Solr cluster can add 
>> watches to the /security.json in 
>> ZK to monitor changes made through such a command line tool
> NO. We want the the authc/authz plugins to just deal with security instead of 
> screwing up/editing ZK nodes
{noformat}
I meant that a plugin can just add a watch to observe changed values and not 
actually change anything in ZK. IoW, no plugin should be able to change ZK, 
but if the admin changes something from the commandline tool, these plugins can 
pick things up from the changes in security.json. 

bq. Isn't the same way it is done in all data stores? They give admin 
privileges to to the admin and he can do further edits
But in that case, most datastores (MySQL, Oracle comes to mind) have their own 
built-in user management. In case of Solr, most likely the user principals 
would already be configured using LDAP or kerberos or some external system (in 
the special case of a particular plugin, they can be in ZK too). Each plugin 
would support different operations. Instead of trying to cater to them all in a 
unified endpoint/framework, isn't it cleaner to ask the admin to edit 
/security.json (directly or using any commandline tool)? That way, the plugins 
wouldn't need to hook themselves into this API endpoint and try to parse out whatever 
is thrown at them, and would instead just need to know how to read the config section 
passed to them through the /security.json. Wdyt?

> An API to edit the Basic Auth security params
> -
>
> Key: SOLR-7755
> URL: https://issues.apache.org/jira/browse/SOLR-7755
> Project: Solr
>  Issue Type: Sub-task
>  Components: security
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> example
> {code}
> curl http://localhost:8983/solr/admin/authorization -H 
> 'Content-type:application/json' -d '{
> "add-user" : {"name" : "tom", 
>  "role": ["admin","dev"]
>  },
> "create-permission" :{"name":"mycoll-update",
>   "before" :"some-other-permission",
>   "path":"/update/*"
>   "role":["dev","admin"]
>   }
> }'
> {code}
> Please note that the set of parameters required for a basic ZK based impl 
> will be completely different from that of a Kerberos implementation. However 
> the framework would remain the same. The end point will remain the same, 
> though



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6639) LRUQueryCache.CachingWrapperWeight not calling policy.onUse() if the first scorer is skipped

2015-07-06 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615207#comment-14615207
 ] 

Adrien Grand commented on LUCENE-6639:
--

Thanks for the feedback Terry, I'll commit shortly then!

> LRUQueryCache.CachingWrapperWeight not calling policy.onUse() if the first 
> scorer is skipped
> 
>
> Key: LUCENE-6639
> URL: https://issues.apache.org/jira/browse/LUCENE-6639
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 5.3
>Reporter: Terry Smith
>Priority: Minor
> Attachments: LUCENE-6639.patch
>
>
> The method 
> {{org.apache.lucene.search.LRUQueryCache.CachingWrapperWeight.scorer(LeafReaderContext)}}
>  starts with
> {code}
> if (context.ord == 0) {
> policy.onUse(getQuery());
> }
> {code}
> which can result in a missed call for queries that return a null scorer for 
> the first segment.
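
A rough sketch of one possible fix, offered only as an illustration (it may not match the committed patch): decouple the {{onUse()}} call from segment ord 0 and tie it to the first actual {{scorer()}} call instead, e.g. with a once-only flag. The field and surrounding method below are assumed, not copied from the real LRUQueryCache:

{code}
// inside CachingWrapperWeight (sketch only)
private final AtomicBoolean used = new AtomicBoolean(false); // java.util.concurrent.atomic.AtomicBoolean

@Override
public Scorer scorer(LeafReaderContext context) throws IOException {
  if (used.compareAndSet(false, true)) {
    policy.onUse(getQuery()); // fires on the first scorer() call, whatever the segment ord
  }
  // ... existing caching logic unchanged ...
  return in.scorer(context);
}
{code}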



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6649) Remove dependency of lucene/join on oal.search.Filter

2015-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615201#comment-14615201
 ] 

ASF subversion and git services commented on LUCENE-6649:
-

Commit 1689432 from [~jpountz] in branch 'dev/trunk'
[ https://svn.apache.org/r1689432 ]

LUCENE-6649: Remove dependency of lucene/join on Filter.

> Remove dependency of lucene/join on oal.search.Filter
> -
>
> Key: LUCENE-6649
> URL: https://issues.apache.org/jira/browse/LUCENE-6649
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Assignee: Adrien Grand
> Attachments: LUCENE-6649.patch, LUCENE-6649.patch
>
>
> Similarly to other modules, lucene/join should not use Filter anymore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (LUCENE-6660) Assertion fails for ToParentBlockJoinQuery$BlockJoinScorer.nextDoc

2015-07-06 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand closed LUCENE-6660.

Resolution: Invalid

Looking at the failed assertion, it suggests that you have orphan child 
documents in your index. Children and parents are supposed to be indexed in a 
contiguous block with the parent at the end, but ToParentBlockJoinQuery found a 
matching child document for which none of the following documents is a parent 
document.

Maybe you indexed some blocks of documents without a parent, or deleted a 
parent document without deleting all its children at the same time?
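
For reference, a minimal example of how a parent/child block is normally indexed in Solr, so that the enclosing parent always ends the block; the field names here are made up, and the parent filter would then be something like {{content_type:parentDocument}}:

{code:language=XML}
<add>
  <doc>
    <field name="id">parent-1</field>
    <field name="content_type">parentDocument</field>
    <!-- nested children are indexed before the enclosing parent,
         so the parent closes the block -->
    <doc>
      <field name="id">child-1</field>
      <field name="content_type">childDocument</field>
    </doc>
  </doc>
</add>
{code}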

> Assertion fails for ToParentBlockJoinQuery$BlockJoinScorer.nextDoc
> --
>
> Key: LUCENE-6660
> URL: https://issues.apache.org/jira/browse/LUCENE-6660
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 5.2.1
> Environment: Running Solr 5.2.1 on Windows x64
> java version "1.7.0_51"
> Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
> Java HotSpot(TM) Client VM (build 24.51-b03, mixed mode, sharing)
>Reporter: Christian Danninger
>
> After I enable assertion with "-ea:org.apache..." I got the stack trace 
> below. I checked that the parent filter only match parent documents and the 
> child filter only match child documents. Field "id" is unique.
> 16:55:06,269 ERROR [org.apache.solr.servlet.SolrDispatchFilter] 
> (http-127.0.0.1/127.0.0.1:8080-1) null:java.lang.RuntimeException: 
> java.lang.AssertionError
>   at org.apache.solr.servlet.HttpSolrCall.sendError(HttpSolrCall.java:593)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:465)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:246)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214)
>   at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:230)
>   at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:149)
>   at 
> org.jboss.as.web.security.SecurityContextAssociationValve.invoke(SecurityContextAssociationValve.java:169)
>   at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:145)
>   at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:97)
>   at 
> org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:559)
>   at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:102)
>   at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:336)
>   at 
> org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:856)
>   at 
> org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:653)
>   at 
> org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:920)
>   at java.lang.Thread.run(Thread.java:744)
> Caused by: java.lang.AssertionError
>   at 
> org.apache.lucene.search.join.ToParentBlockJoinQuery$BlockJoinScorer.nextDoc(ToParentBlockJoinQuery.java:278)
>   at 
> org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:204)
>   at 
> org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:176)
>   at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:35)
>   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:771)
>   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:485)
>   at 
> org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:202)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1666)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1485)
>   at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:561)
>   at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:518)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:255)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450)
>   ... 16 more
> Without assertions enabled:
> 17:21:39,008 ERROR [org.apache.solr.servlet.SolrDispatchFilter] 
> (http-12

[jira] [Created] (SOLR-7758) cURL-like multiple JSON update ignore nested data structure

2015-07-06 Thread Sergio Schena (JIRA)
Sergio Schena created SOLR-7758:
---

 Summary: cURL-like multiple JSON update ignore nested data 
structure 
 Key: SOLR-7758
 URL: https://issues.apache.org/jira/browse/SOLR-7758
 Project: Solr
  Issue Type: Bug
  Components: Data-driven Schema
Affects Versions: 5.2.1
Reporter: Sergio Schena


I'm trying to upload the following documents to my collection
[
    {
        "id": "1",
        "title": "Let's try Solr1",
        "name": {
            "first": "Sergio",
            "last": "Schena"
        }
    },
    {
        "id": "2",
        "title": "Let's try Solr 2",
        "name": {
            "first": "Sergio",
            "last": "Schena"
        }
    }
]
using the /solr/collection_name/update API. The data are uploaded successfully, 
but the fields name.first and name.last are not stored and I cannot retrieve 
them when I search (over my whole collection).

I checked the extracted schema and the missing fields are present!

In addition, if I upload only one document using /update/json/docs, the 
fields are stored in the collection.

I think that I discovered a little bug in the multiple upload with nested data 
types.

I didn't try with an explicit schema definition, or with the XML data format 
instead of the JSON one.
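
For anyone trying to reproduce this, the upload was presumably done with something along these lines (the collection name and file name are placeholders):

{code}
curl 'http://localhost:8983/solr/collection_name/update?commit=true' \
  -H 'Content-type:application/json' \
  --data-binary @docs.json
{code}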



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7757) Create a framework to edit/reload security params

2015-07-06 Thread Noble Paul (JIRA)
Noble Paul created SOLR-7757:


 Summary: Create a framework to edit/reload security params
 Key: SOLR-7757
 URL: https://issues.apache.org/jira/browse/SOLR-7757
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul


We should have a standard mechanism which security plugins can use to 
edit/reload their configuration.
This will involve Solr watching {{/security.json}} and giving callbacks to 
the plugins. It will also create standard endpoints for REST-like APIs for each 
plugin. Each plugin will be able to define the payload, verify it, modify the 
config, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7755) An API to edit the Basic Auth security params

2015-07-06 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-7755:
-
Summary: An API to edit the Basic Auth security params  (was: An API to 
edit the security params)

> An API to edit the Basic Auth security params
> -
>
> Key: SOLR-7755
> URL: https://issues.apache.org/jira/browse/SOLR-7755
> Project: Solr
>  Issue Type: Sub-task
>  Components: security
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> example
> {code}
> curl http://localhost:8983/solr/admin/authorization -H 
> 'Content-type:application/json' -d '{
> "add-user" : {"name" : "tom", 
>  "role": ["admin","dev"]
>  },
> "create-permission" :{"name":"mycoll-update",
>   "before" :"some-other-permission",
>   "path":"/update/*"
>   "role":["dev","admin"]
>   }
> }'
> {code}
> Please note that the set of parameters required for a basic ZK based impl 
> will be completely different from that of a Kerberos implementation. However 
> the framework would remain the same. The end point will remain the same, 
> though



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7756) NPE in ExactStatsCache when a term doesn't exist on a shard

2015-07-06 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-7756:

Attachment: SOLR-7756.patch

> NPE in ExactStatsCache when a term doesn't exist on a shard
> ---
>
> Key: SOLR-7756
> URL: https://issues.apache.org/jira/browse/SOLR-7756
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
> Attachments: SOLR-7756.patch
>
>
> If a term doesn't exist on a shard {{ExactStatsCache#getPerShardTermStats}} 
> throws an NullPointerException. 
> Attaching a test and a patch shortly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-6658) IndexUpgrader doesn't upgrade an index if it has zero segments

2015-07-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615175#comment-14615175
 ] 

Uwe Schindler edited comment on LUCENE-6658 at 7/6/15 3:29 PM:
---

[~trejkaz]: This will not be backported to 3.6 (and a release of 4.10.5 is also 
very unlikely). If you want to "ensure" that the index was upgraded, I'd suggest 
using the UpgradeIndexMergePolicy directly and then more or less copying the code 
from IndexUpgrader (so it opens IndexWriter, sets the special merge policy, 
forceMerge(1), setCommitUserData(getCommitUserData()), and finally commit()). 
Alternatively patch 3.6.x.


was (Author: thetaphi):
[~trejkaz]: This will not be backported to 3.6 (and also 4.10.5 is very 
unlikely). If you want to "ensure" than index was upgraded, I'd suggest to use 
the UpgradeIndexMergePolicy directly and then more or less copy the code from 
IndexUpgrader (so it opens IndexWriter, sets the special merge policy, 
forceMerge(1), setCommitUserData(getCommitUserData()), and finally commit()). 
Alternatively patch 3.6.x.

> IndexUpgrader doesn't upgrade an index if it has zero segments
> --
>
> Key: LUCENE-6658
> URL: https://issues.apache.org/jira/browse/LUCENE-6658
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.10.4, 5.2.1
>Reporter: Trejkaz
>Assignee: Uwe Schindler
> Fix For: 4.10.5, 5.3, Trunk
>
> Attachments: LUCENE-6658.patch, LUCENE-6658.patch, LUCENE-6658.patch, 
> LUCENE-6658.patch, empty.4.10.4.zip
>
>
> IndexUpgrader uses merges to do its job. Therefore, if you use it to upgrade 
> an index with no segments, it will do nothing - it won't even update the 
> version numbers in the segments file, meaning that later versions of Lucene 
> will fail to open the index, despite the fact that you "upgraded" it.
> The suggested workaround when this was raised on the mailing list in January 
> seems to be to use filesystem magic to look at the files, figure out whether 
> there are any segments, and write a new empty index if there are none.
> This sounds easy, but there are probably traps. For instance, there might be 
> files in the directory which don't really belong to the index. Earlier 
> versions of Lucene used to have a FilenameFilter which was usable to 
> distinguish one from the other, but that seems to have disappeared, making it 
> less obvious how to do this.
> This issue is presumed to exist in 3.x as well, I just haven't encountered it 
> yet because the only empty indices I have hit have been later versions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6658) IndexUpgrader doesn't upgrade an index if it has zero segments

2015-07-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615175#comment-14615175
 ] 

Uwe Schindler commented on LUCENE-6658:
---

[~trejkaz]: This will not be backported to 3.6 (and a 4.10.5 release is also very 
unlikely). If you want to "ensure" that the index was upgraded, I'd suggest using 
the UpgradeIndexMergePolicy directly and then more or less copying the code from 
IndexUpgrader (so it opens IndexWriter, sets the special merge policy, 
forceMerge(1), setCommitUserData(getCommitUserData()), and finally commit()). 
Alternatively patch 3.6.x.
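
A rough sketch of that suggestion, assuming Lucene 4.10.x APIs (the commit-user-data calls are named differently in 3.6.x, so treat this as an outline to adapt rather than a drop-in tool):

{code}
import java.io.File;
import java.util.Map;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.UpgradeIndexMergePolicy;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class ManualUpgradeSketch {
  public static void main(String[] args) throws Exception {
    try (Directory dir = FSDirectory.open(new File(args[0]))) {
      IndexWriterConfig iwc = new IndexWriterConfig(Version.LUCENE_4_10_4, null);
      iwc.setMergePolicy(new UpgradeIndexMergePolicy(iwc.getMergePolicy()));
      try (IndexWriter writer = new IndexWriter(dir, iwc)) {
        Map<String, String> userData = writer.getCommitData(); // preserve existing commit user data
        writer.forceMerge(1);                                  // runs the upgrade merges
        writer.setCommitData(userData);
        writer.commit();
      }
    }
  }
}
{code}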

> IndexUpgrader doesn't upgrade an index if it has zero segments
> --
>
> Key: LUCENE-6658
> URL: https://issues.apache.org/jira/browse/LUCENE-6658
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.10.4, 5.2.1
>Reporter: Trejkaz
>Assignee: Uwe Schindler
> Fix For: 4.10.5, 5.3, Trunk
>
> Attachments: LUCENE-6658.patch, LUCENE-6658.patch, LUCENE-6658.patch, 
> LUCENE-6658.patch, empty.4.10.4.zip
>
>
> IndexUpgrader uses merges to do its job. Therefore, if you use it to upgrade 
> an index with no segments, it will do nothing - it won't even update the 
> version numbers in the segments file, meaning that later versions of Lucene 
> will fail to open the index, despite the fact that you "upgraded" it.
> The suggested workaround when this was raised on the mailing list in January 
> seems to be to use filesystem magic to look at the files, figure out whether 
> there are any segments, and write a new empty index if there are none.
> This sounds easy, but there are probably traps. For instance, there might be 
> files in the directory which don't really belong to the index. Earlier 
> versions of Lucene used to have a FilenameFilter which was usable to 
> distinguish one from the other, but that seems to have disappeared, making it 
> less obvious how to do this.
> This issue is presumed to exist in 3.x as well, I just haven't encountered it 
> yet because the only empty indices I have hit have been later versions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7755) An API to edit the security params

2015-07-06 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615176#comment-14615176
 ] 

Noble Paul commented on SOLR-7755:
--

bq. Can't all this be a wrapper around the /security.json in ZK and made 
available as a command line tool similar to zkcli?
I don't think exposing ZK to users is a good/safe practice

bq.admin might want to plan and setup security parameters in a cluster even 
before starting Solr

That is pretty simple. You start with a standard no-edit-permission 
{{security.json}}; it will be a part of this ticket. That gives the admin 
user the privilege to edit the security parameters.

bq.Also, authc/authz plugins in an already started up Solr cluster can add 
watches to the /security.json in ZK to monitor changes made through such a 
command line tool

NO. We want the authc/authz plugins to just deal with security instead of 
screwing up/editing ZK nodes.

bq.that way, this API or "framework" wouldn't need to know what all to expect 
(i.e. "create-permission" or "add-user" or anything plugin specific).

The framework has no idea what {{create-permission}} is; it is the plugin's 
responsibility to interpret this stuff. Wait for the first patch to see how it 
is done.

bq.Another challenge, that comes to mind, with having an endpoint like this: 
how would we secure this endpoint itself?
Isn't that the same way it is done in all data stores? They give admin privileges 
to the admin and he can do further edits.

> An API to edit the security params
> --
>
> Key: SOLR-7755
> URL: https://issues.apache.org/jira/browse/SOLR-7755
> Project: Solr
>  Issue Type: Sub-task
>  Components: security
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> example
> {code}
> curl http://localhost:8983/solr/admin/authorization -H 
> 'Content-type:application/json' -d '{
> "add-user" : {"name" : "tom", 
>  "role": ["admin","dev"]
>  },
> "create-permission" :{"name":"mycoll-update",
>   "before" :"some-other-permission",
>   "path":"/update/*"
>   "role":["dev","admin"]
>   }
> }'
> {code}
> Please note that the set of parameters required for a basic ZK based impl 
> will be completely different from that of a Kerberos implementation. However 
> the framework would remain the same. The end point will remain the same, 
> though



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7756) NPE in ExactStatsCache when a term doesn't exist on a shard

2015-07-06 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-7756:
---

 Summary: NPE in ExactStatsCache when a term doesn't exist on a 
shard
 Key: SOLR-7756
 URL: https://issues.apache.org/jira/browse/SOLR-7756
 Project: Solr
  Issue Type: Bug
Reporter: Varun Thacker


If a term doesn't exist on a shard, {{ExactStatsCache#getPerShardTermStats}} 
throws a NullPointerException. 

Attaching a test and a patch shortly.
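
A minimal sketch of the kind of guard such a patch typically adds; the method shape and names below are assumptions for illustration, not the actual ExactStatsCache internals:

{code}
import java.util.Map;
import org.apache.lucene.search.TermStatistics;
import org.apache.lucene.util.BytesRef;

final class PerShardTermStatsSketch {
  static TermStatistics get(Map<String, TermStatistics> shardStats, String term) {
    TermStatistics stats = shardStats.get(term);
    if (stats == null) {
      // the term was never seen on this shard: return zeroed stats instead of dereferencing null
      return new TermStatistics(new BytesRef(term), 0, 0);
    }
    return stats;
  }
}
{code}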



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6658) IndexUpgrader doesn't upgrade an index if it has zero segments

2015-07-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615162#comment-14615162
 ] 

Uwe Schindler commented on LUCENE-6658:
---

I also backported this to 4.10.5 (it was a bit harder, because we had no Version in 
the SegmentInfos). To check that the commit was actually applied, I check the 
generation there.

> IndexUpgrader doesn't upgrade an index if it has zero segments
> --
>
> Key: LUCENE-6658
> URL: https://issues.apache.org/jira/browse/LUCENE-6658
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.10.4, 5.2.1
>Reporter: Trejkaz
>Assignee: Uwe Schindler
> Fix For: 4.10.5, 5.3, Trunk
>
> Attachments: LUCENE-6658.patch, LUCENE-6658.patch, LUCENE-6658.patch, 
> LUCENE-6658.patch, empty.4.10.4.zip
>
>
> IndexUpgrader uses merges to do its job. Therefore, if you use it to upgrade 
> an index with no segments, it will do nothing - it won't even update the 
> version numbers in the segments file, meaning that later versions of Lucene 
> will fail to open the index, despite the fact that you "upgraded" it.
> The suggested workaround when this was raised on the mailing list in January 
> seems to be to use filesystem magic to look at the files, figure out whether 
> there are any segments, and write a new empty index if there are none.
> This sounds easy, but there are probably traps. For instance, there might be 
> files in the directory which don't really belong to the index. Earlier 
> versions of Lucene used to have a FilenameFilter which was usable to 
> distinguish one from the other, but that seems to have disappeared, making it 
> less obvious how to do this.
> This issue is presumed to exist in 3.x as well, I just haven't encountered it 
> yet because the only empty indices I have hit have been later versions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6658) IndexUpgrader doesn't upgrade an index if it has zero segments

2015-07-06 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler resolved LUCENE-6658.
---
Resolution: Fixed

> IndexUpgrader doesn't upgrade an index if it has zero segments
> --
>
> Key: LUCENE-6658
> URL: https://issues.apache.org/jira/browse/LUCENE-6658
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.10.4, 5.2.1
>Reporter: Trejkaz
>Assignee: Uwe Schindler
> Fix For: 4.10.5, 5.3, Trunk
>
> Attachments: LUCENE-6658.patch, LUCENE-6658.patch, LUCENE-6658.patch, 
> LUCENE-6658.patch, empty.4.10.4.zip
>
>
> IndexUpgrader uses merges to do its job. Therefore, if you use it to upgrade 
> an index with no segments, it will do nothing - it won't even update the 
> version numbers in the segments file, meaning that later versions of Lucene 
> will fail to open the index, despite the fact that you "upgraded" it.
> The suggested workaround when this was raised on the mailing list in January 
> seems to be to use filesystem magic to look at the files, figure out whether 
> there are any segments, and write a new empty index if there are none.
> This sounds easy, but there are probably traps. For instance, there might be 
> files in the directory which don't really belong to the index. Earlier 
> versions of Lucene used to have a FilenameFilter which was usable to 
> distinguish one from the other, but that seems to have disappeared, making it 
> less obvious how to do this.
> This issue is presumed to exist in 3.x as well, I just haven't encountered it 
> yet because the only empty indices I have hit have been later versions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6660) Assertion fails for ToParentBlockJoinQuery$BlockJoinScorer.nextDoc

2015-07-06 Thread Christian Danninger (JIRA)
Christian Danninger created LUCENE-6660:
---

 Summary: Assertion fails for 
ToParentBlockJoinQuery$BlockJoinScorer.nextDoc
 Key: LUCENE-6660
 URL: https://issues.apache.org/jira/browse/LUCENE-6660
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Affects Versions: 5.2.1
 Environment: Running Solr 5.2.1 on Windows x64
java version "1.7.0_51"
Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
Java HotSpot(TM) Client VM (build 24.51-b03, mixed mode, sharing)
Reporter: Christian Danninger


After I enabled assertions with "-ea:org.apache..." I got the stack trace below. 
I checked that the parent filter only matches parent documents and the child 
filter only matches child documents. The field "id" is unique.

16:55:06,269 ERROR [org.apache.solr.servlet.SolrDispatchFilter] 
(http-127.0.0.1/127.0.0.1:8080-1) null:java.lang.RuntimeException: 
java.lang.AssertionError
at org.apache.solr.servlet.HttpSolrCall.sendError(HttpSolrCall.java:593)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:465)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:246)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214)
at 
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:230)
at 
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:149)
at 
org.jboss.as.web.security.SecurityContextAssociationValve.invoke(SecurityContextAssociationValve.java:169)
at 
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:145)
at 
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:97)
at 
org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:559)
at 
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:102)
at 
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:336)
at 
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:856)
at 
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:653)
at 
org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:920)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.lang.AssertionError
at 
org.apache.lucene.search.join.ToParentBlockJoinQuery$BlockJoinScorer.nextDoc(ToParentBlockJoinQuery.java:278)
at 
org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:204)
at 
org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:176)
at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:35)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:771)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:485)
at 
org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:202)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1666)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1485)
at 
org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:561)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:518)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:255)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450)
... 16 more

Without assertions enabled:
17:21:39,008 ERROR [org.apache.solr.servlet.SolrDispatchFilter] 
(http-127.0.0.1/127.0.0.1:8080-1) null:java.lang.IllegalStateException: child 
query must only match non-parent docs, but parent docID=2147483647 matched 
childScorer=class org.apache.lucene.search.ConjunctionScorer
at 
org.apache.lucene.search.join.ToParentBlockJoinQuery$BlockJoinScorer.nextDoc(ToParentBlockJoinQuery.java:334)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6658) IndexUpgrader doesn't upgrade an index if it has zero segments

2015-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615160#comment-14615160
 ] 

ASF subversion and git services commented on LUCENE-6658:
-

Commit 1689424 from [~thetaphi] in branch 'dev/branches/lucene_solr_4_10'
[ https://svn.apache.org/r1689424 ]

Merged revision(s) 1689411 from lucene/dev/branches/branch_5x:
LUCENE-6658: Fix IndexUpgrader to also upgrade indexes without any segments

> IndexUpgrader doesn't upgrade an index if it has zero segments
> --
>
> Key: LUCENE-6658
> URL: https://issues.apache.org/jira/browse/LUCENE-6658
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.10.4, 5.2.1
>Reporter: Trejkaz
>Assignee: Uwe Schindler
> Fix For: 4.10.5, 5.3, Trunk
>
> Attachments: LUCENE-6658.patch, LUCENE-6658.patch, LUCENE-6658.patch, 
> LUCENE-6658.patch, empty.4.10.4.zip
>
>
> IndexUpgrader uses merges to do its job. Therefore, if you use it to upgrade 
> an index with no segments, it will do nothing - it won't even update the 
> version numbers in the segments file, meaning that later versions of Lucene 
> will fail to open the index, despite the fact that you "upgraded" it.
> The suggested workaround when this was raised on the mailing list in January 
> seems to be to use filesystem magic to look at the files, figure out whether 
> there are any segments, and write a new empty index if there are none.
> This sounds easy, but there are probably traps. For instance, there might be 
> files in the directory which don't really belong to the index. Earlier 
> versions of Lucene used to have a FilenameFilter which was usable to 
> distinguish one from the other, but that seems to have disappeared, making it 
> less obvious how to do this.
> This issue is presumed to exist in 3.x as well, I just haven't encountered it 
> yet because the only empty indices I have hit have been later versions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7755) An API to edit the security params

2015-07-06 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615159#comment-14615159
 ] 

Ishan Chattopadhyaya commented on SOLR-7755:


Why does this need to be an endpoint in Solr? Can't all this be a wrapper 
around the /security.json in ZK and made available as a command line tool 
similar to zkcli?
The reason I think this shouldn't be an endpoint in Solr is that an admin might 
want to plan and set up security parameters in a cluster even before starting 
Solr. Also, authc/authz plugins in an already running Solr cluster can add 
watches to the /security.json in ZK to monitor changes made through such a 
command line tool. That way, this API or "framework" wouldn't need to know 
everything to expect (i.e. "create-permission" or "add-user" or anything 
plugin specific). 

Another challenge that comes to mind with having an endpoint like this: how 
would we secure the endpoint itself?

Thoughts, [~anshumg]?
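
To make the watch idea concrete, a minimal sketch (plain ZooKeeper client and a hypothetical class name; a real plugin would more likely go through SolrZkClient) of reacting to external edits of /security.json:

{code:java}
// Minimal sketch: a plugin watches /security.json so that changes made by an
// external command line tool are picked up. Not Solr code.
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class SecurityJsonWatcher implements Watcher {
  private final ZooKeeper zk;

  public SecurityJsonWatcher(ZooKeeper zk) {
    this.zk = zk;
  }

  /** Reads /security.json and (re-)arms the one-shot watch. */
  public byte[] readAndWatch() throws KeeperException, InterruptedException {
    return zk.getData("/security.json", this, null);
  }

  @Override
  public void process(WatchedEvent event) {
    if (event.getType() == Event.EventType.NodeDataChanged) {
      try {
        byte[] updated = readAndWatch(); // reload and re-register the watch
        // ... re-initialize the authc/authz plugin from 'updated' ...
      } catch (Exception e) {
        // log and retry; the watch must be re-armed eventually
      }
    }
  }
}
{code}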

> An API to edit the security params
> --
>
> Key: SOLR-7755
> URL: https://issues.apache.org/jira/browse/SOLR-7755
> Project: Solr
>  Issue Type: Sub-task
>  Components: security
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> example
> {code}
> curl http://localhost:8983/solr/admin/authorization -H 
> 'Content-type:application/json' -d '{
> "add-user" : {"name" : "tom", 
>  "role": ["admin","dev"]
>  },
> "create-permission" :{"name":"mycoll-update",
>   "before" :"some-other-permission",
>   "path":"/update/*"
>   "role":["dev","admin"]
>   }
> }'
> {code}
> Please note that the set of parameters required for a basic ZK based impl 
> will be completely different from that of a Kerberos implementation. However 
> the framework would remain the same. The end point will remain the same, 
> though



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6659) Remove IndexWriterConfig.get/setMaxThreadStates

2015-07-06 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-6659:
---
Attachment: LUCENE-6659.patch

Patch, I think it's ready ... I'm beasting tests now.

> Remove IndexWriterConfig.get/setMaxThreadStates
> ---
>
> Key: LUCENE-6659
> URL: https://issues.apache.org/jira/browse/LUCENE-6659
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 5.3, Trunk
>
> Attachments: LUCENE-6659.patch
>
>
> Ever since LUCENE-5644, IndexWriter will aggressively reuse its internal 
> thread states across threads, whenever one is free.
> I think this means we can safely remove the sneaky maxThreadStates limit 
> (default 8) that we have today: IW will only ever allocate as many thread 
> states as there are actual concurrent threads running through it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6659) Remove IndexWriterConfig.get/setMaxThreadStates

2015-07-06 Thread Michael McCandless (JIRA)
Michael McCandless created LUCENE-6659:
--

 Summary: Remove IndexWriterConfig.get/setMaxThreadStates
 Key: LUCENE-6659
 URL: https://issues.apache.org/jira/browse/LUCENE-6659
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 5.3, Trunk


Ever since LUCENE-5644, IndexWriter will aggressively reuse its internal thread 
states across threads, whenever one is free.

I think this means we can safely remove the sneaky maxThreadStates limit 
(default 8) that we have today: IW will only ever allocate as many thread 
states as there are actual concurrent threads running through it.
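
As a minimal sketch (placeholder path, field, and analyzer; written against the 5.x IndexWriterConfig API, not the patch itself) of what callers already do, with no thread-state sizing involved:

{code:java}
// Each indexing thread simply shares the IndexWriter; a thread state is
// allocated or reused per thread actually inside the writer at that moment.
import java.nio.file.Paths;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.FSDirectory;

public class ConcurrentIndexingSketch {
  public static void main(String[] args) throws Exception {
    IndexWriterConfig iwc = new IndexWriterConfig(new StandardAnalyzer());
    // Note: no setMaxThreadStates(...) call anywhere.
    try (IndexWriter writer = new IndexWriter(FSDirectory.open(Paths.get("/tmp/idx")), iwc)) {
      Runnable indexOneDoc = () -> {
        try {
          Document doc = new Document();
          doc.add(new StringField("id", Thread.currentThread().getName(), Field.Store.NO));
          writer.addDocument(doc);
        } catch (Exception e) {
          throw new RuntimeException(e);
        }
      };
      Thread t1 = new Thread(indexOneDoc);
      Thread t2 = new Thread(indexOneDoc);
      t1.start(); t2.start();
      t1.join(); t2.join();
      writer.commit();
    }
  }
}
{code}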



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7755) An API to edit the security params

2015-07-06 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-7755:
-
Description: 
example
{code}
curl http://localhost:8983/solr/admin/authorization -H 
'Content-type:application/json' -d '{
"add-user" : {"name" : "tom", 
 "role": ["admin","dev"]
 },
"create-permission" :{"name":"mycoll-update",
  "before" :"some-other-permission",
  "path":"/update/*"
  "role":["dev","admin"]
  }

}'
{code}

Please note that the set of parameters required for a basic ZK based impl will 
be completely different from that of a Kerberos implementation. However the 
framework would remain the same. The end point will remain the same, though

  was:
example
{code}
curl http://localhost:8983/solr/admin/authorization -H 
'Content-type:application/json' -d '{
"add-user" : {"name" : "tom", 
 "role": ["admin","dev"]
 },
"create-permission" :{"name":"mycoll-update",
  "before" :"some-other-permission",
  "path":"/update/*"
  "role":["dev","admin"]
  }

}'
{code}


> An API to edit the security params
> --
>
> Key: SOLR-7755
> URL: https://issues.apache.org/jira/browse/SOLR-7755
> Project: Solr
>  Issue Type: Sub-task
>  Components: security
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> example
> {code}
> curl http://localhost:8983/solr/admin/authorization -H 
> 'Content-type:application/json' -d '{
> "add-user" : {"name" : "tom", 
>  "role": ["admin","dev"]
>  },
> "create-permission" :{"name":"mycoll-update",
>   "before" :"some-other-permission",
>   "path":"/update/*"
>   "role":["dev","admin"]
>   }
> }'
> {code}
> Please note that the set of parameters required for a basic ZK based impl 
> will be completely different from that of a Kerberos implementation. However 
> the framework would remain the same. The end point will remain the same, 
> though



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6658) IndexUpgrader doesn't upgrade an index if it has zero segments

2015-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615123#comment-14615123
 ] 

ASF subversion and git services commented on LUCENE-6658:
-

Commit 1689420 from [~thetaphi] in branch 'dev/trunk'
[ https://svn.apache.org/r1689420 ]

Merged revision(s) 1689411 from lucene/dev/branches/branch_5x:
LUCENE-6658: Fix IndexUpgrader to also upgrade indexes without any segments

> IndexUpgrader doesn't upgrade an index if it has zero segments
> --
>
> Key: LUCENE-6658
> URL: https://issues.apache.org/jira/browse/LUCENE-6658
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.10.4, 5.2.1
>Reporter: Trejkaz
>Assignee: Uwe Schindler
> Fix For: 4.10.5, 5.3, Trunk
>
> Attachments: LUCENE-6658.patch, LUCENE-6658.patch, LUCENE-6658.patch, 
> LUCENE-6658.patch, empty.4.10.4.zip
>
>
> IndexUpgrader uses merges to do its job. Therefore, if you use it to upgrade 
> an index with no segments, it will do nothing - it won't even update the 
> version numbers in the segments file, meaning that later versions of Lucene 
> will fail to open the index, despite the fact that you "upgraded" it.
> The suggested workaround when this was raised on the mailing list in January 
> seems to be to use filesystem magic to look at the files, figure out whether 
> there are any segments, and write a new empty index if there are none.
> This sounds easy, but there are probably traps. For instance, there might be 
> files in the directory which don't really belong to the index. Earlier 
> versions of Lucene used to have a FilenameFilter which was usable to 
> distinguish one from the other, but that seems to have disappeared, making it 
> less obvious how to do this.
> This issue is presumed to exist in 3.x as well, I just haven't encountered it 
> yet because the only empty indices I have hit have been later versions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7755) An API to edit the security params

2015-07-06 Thread Noble Paul (JIRA)
Noble Paul created SOLR-7755:


 Summary: An API to edit the security params
 Key: SOLR-7755
 URL: https://issues.apache.org/jira/browse/SOLR-7755
 Project: Solr
  Issue Type: Sub-task
  Components: security
Reporter: Noble Paul
Assignee: Noble Paul


example
{code}
curl http://localhost:8983/solr/admin/authorization -H 
'Content-type:application/json' -d '{
"add-user" : {"name" : "tom", 
 "role": ["admin","dev"]
 },
"create-permission" :{"name":"mycoll-update",
  "before" :"some-other-permission",
  "path":"/update/*"
  "role":["dev","admin"]
  }

}'
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [CI] Lucene 5x Linux 64 Test Only - Build # 54457 - Failure!

2015-07-06 Thread david.w.smi...@gmail.com
Ok, later tonight. I'm on vacation today.

On Mon, Jul 6, 2015 at 9:47 AM Michael McCandless 
wrote:

> Hmm this could be LUCENE-6629?  Can you add a comment on the issue,
> copying the build failure stack trace, etc.?  Maybe it helps us get to the
> root cause of these weird hangs...
>
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
> On Sun, Jul 5, 2015 at 10:04 PM, david.w.smi...@gmail.com <
> david.w.smi...@gmail.com> wrote:
>
>> I’m not sure what to make of this. The whole suite timed out after 2
>> hours. The seed doesn’t reproduce, at least when I ran just this spatial
>> test via IntelliJ. It’d be nice if I could have CI re-run the same build
>> with the same chosen random seed, JVM args, etc.
>>
>> On Sun, Jul 5, 2015 at 9:36 AM  wrote:
>>
>>>   *BUILD FAILURE*
>>> Build URL: http://build-eu-00.elastic.co/job/lucene_linux_java8_64_test_only/54457/
>>> Project: lucene_linux_java8_64_test_only
>>> Randomization: JDKEA9,network,heap[571m],-server +UseSerialGC +UseCompressedOops +AggressiveOpts,assert off,sec manager on
>>> Date of build: Sun, 05 Jul 2015 13:23:16 +0200
>>> Build duration: 2 hr 8 min
>>>  *CHANGES* No Changes
>>>  *BUILD ARTIFACTS*
>>>    - checkout/lucene/build/spatial/test/temp/junit4-J0-20150705_133143_170.events
>>>    - checkout/lucene/build/spatial/test/temp/junit4-J1-20150705_133143_170.events
>>>    - checkout/lucene/build/spatial/test/temp/junit4-J2-20150705_133143_170.events
>>>    - checkout/lucene/build/spatial/test/temp/junit4-J3-20150705_133143_170.events
>>>    - checkout/lucene/build/spatial/test/temp/junit4-J4-20150705_133143_170.events
>>>    - checkout/lucene/build/spatial/test/temp/junit4-J5-20150705_133143_171.events
>>>    - checkout/lucene/build/spatial/test/temp/junit4-J6-20150705_133143_171.events
>>>    - checkout/lucene/build/spatial/test/temp/junit4-J7-20150705_133143_171.events
>>>
>>>  *FAILED JUNIT TESTS*
>>> Name: junit.framework, Failed: 1 test(s), Passed: 0 test(s), Skipped: 0 test(s), Total: 1 test(s)
>>>    - Failed: junit.framework.TestSuite.org.apache.lucene.spatial.bbox.TestBBoxStrategy
>>> Name: org.apache.lucene.spatial.bbox, Failed: 1 test(s), Passed: 1 test(s), Skipped: 0 test(s), Total: 2 test(s)
>>>    - Failed: org.apache.lucene.spatial.bbox.TestBBoxStrategy.testCitiesIntersectsBBox
>>>  *CONSOLE OUTPUT* [...truncated 11575 lines...]
>>> [junit4] JVM J0: 0.83 .. 9.63 = 8.80s
>>> [junit4] JVM J1: 1.07 .. 11.36 = 10.28s
>>> [junit4] JVM J2: 0.91 .. 7224.92 = 7224.01s
>>> [junit4] JVM J3: 1.08 .. 11.85 = 10.77s
>>> [junit4] JVM J4: 0.90 .. 8.82 = 7.93s
>>> [junit4] JVM J5: 0.87 .. 9.04 = 8.18s
>>> [junit4] JVM J6: 1.09 .. 11.50 = 10.41s
>>> [junit4] JVM J7: 0.91 .. 10.98 = 10.07s
>>> [junit4] Execution time total: 2 hours 24 seconds
>>> [junit4] Tests summary: 30 suites, 232 tests, 1 suite-level error, 1 error, 2 ignored (2 assumptions)
>>> BUILD FAILED /home/jenkins/workspace/lucene_linux_java8_64_test_only/checkout/lucene/build.xml:467:
>>> The following error occurred while executing this line: /home/jenkins/workspace/lucene_linux_java8_64_test_only/checkout/lucene/common-build.xml:2240:
>>> The following error occurred while executing this line: /home/jenkins/workspace/lucene_linux_java8_64_test_only/checkout/lucene/module-build.xml:58:
>>> The following error occurred while executing this line: /home/jenkins/workspace/lucene_linux_java8_64_test_only/checkout/lucene/common-build.xml:1444:
>>> The following erro

[jira] [Commented] (LUCENE-6658) IndexUpgrader doesn't upgrade an index if it has zero segments

2015-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615091#comment-14615091
 ] 

ASF subversion and git services commented on LUCENE-6658:
-

Commit 1689411 from [~thetaphi] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1689411 ]

LUCENE-6658: Fix IndexUpgrader to also upgrade indexes without any segments

> IndexUpgrader doesn't upgrade an index if it has zero segments
> --
>
> Key: LUCENE-6658
> URL: https://issues.apache.org/jira/browse/LUCENE-6658
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.10.4, 5.2.1
>Reporter: Trejkaz
>Assignee: Uwe Schindler
> Fix For: 4.10.5, 5.3, Trunk
>
> Attachments: LUCENE-6658.patch, LUCENE-6658.patch, LUCENE-6658.patch, 
> LUCENE-6658.patch, empty.4.10.4.zip
>
>
> IndexUpgrader uses merges to do its job. Therefore, if you use it to upgrade 
> an index with no segments, it will do nothing - it won't even update the 
> version numbers in the segments file, meaning that later versions of Lucene 
> will fail to open the index, despite the fact that you "upgraded" it.
> The suggested workaround when this was raised on the mailing list in January 
> seems to be to use filesystem magic to look at the files, figure out whether 
> there are any segments, and write a new empty index if there are none.
> This sounds easy, but there are probably traps. For instance, there might be 
> files in the directory which don't really belong to the index. Earlier 
> versions of Lucene used to have a FilenameFilter which was usable to 
> distinguish one from the other, but that seems to have disappeared, making it 
> less obvious how to do this.
> This issue is presumed to exist in 3.x as well, I just haven't encountered it 
> yet because the only empty indices I have hit have been later versions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6639) LRUQueryCache.CachingWrapperWeight not calling policy.onUse() if the first scorer is skipped

2015-07-06 Thread Terry Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615087#comment-14615087
 ] 

Terry Smith commented on LUCENE-6639:
-

Ah, I didn't realize the highlighters were creating the weights to extract the 
terms; that makes sense.

I like the idea of just calling onUse() the first time scorer() is called; that 
ought to be more robust and is very easy to understand.
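
A standalone sketch of that idea (illustrative only, not the committed change):

{code:java}
// Sketch of "notify the policy on first real use": the callback fires the
// first time scorer-like work happens, regardless of which segment that turns
// out to be, instead of keying off context.ord == 0.
import java.util.concurrent.atomic.AtomicBoolean;

public class FirstUseNotifier {
  private final AtomicBoolean used = new AtomicBoolean(false);
  private final Runnable onFirstUse;

  public FirstUseNotifier(Runnable onFirstUse) {
    this.onFirstUse = onFirstUse;
  }

  /** Call at the top of every scorer(...) invocation. */
  public void maybeNotify() {
    if (used.compareAndSet(false, true)) {
      onFirstUse.run(); // e.g. policy.onUse(getQuery()) inside CachingWrapperWeight
    }
  }

  public static void main(String[] args) {
    FirstUseNotifier n = new FirstUseNotifier(() -> System.out.println("onUse called exactly once"));
    n.maybeNotify(); // fires
    n.maybeNotify(); // no-op
  }
}
{code}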


> LRUQueryCache.CachingWrapperWeight not calling policy.onUse() if the first 
> scorer is skipped
> 
>
> Key: LUCENE-6639
> URL: https://issues.apache.org/jira/browse/LUCENE-6639
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 5.3
>Reporter: Terry Smith
>Priority: Minor
> Attachments: LUCENE-6639.patch
>
>
> The method 
> {{org.apache.lucene.search.LRUQueryCache.CachingWrapperWeight.scorer(LeafReaderContext)}}
>  starts with
> {code}
> if (context.ord == 0) {
> policy.onUse(getQuery());
> }
> {code}
> which can result in a missed call for queries that return a null scorer for 
> the first segment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6658) IndexUpgrader doesn't upgrade an index if it has zero segments

2015-07-06 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-6658:
--
Fix Version/s: Trunk
   5.3
   4.10.5

> IndexUpgrader doesn't upgrade an index if it has zero segments
> --
>
> Key: LUCENE-6658
> URL: https://issues.apache.org/jira/browse/LUCENE-6658
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.10.4, 5.2.1
>Reporter: Trejkaz
>Assignee: Uwe Schindler
> Fix For: 4.10.5, 5.3, Trunk
>
> Attachments: LUCENE-6658.patch, LUCENE-6658.patch, LUCENE-6658.patch, 
> LUCENE-6658.patch, empty.4.10.4.zip
>
>
> IndexUpgrader uses merges to do its job. Therefore, if you use it to upgrade 
> an index with no segments, it will do nothing - it won't even update the 
> version numbers in the segments file, meaning that later versions of Lucene 
> will fail to open the index, despite the fact that you "upgraded" it.
> The suggested workaround when this was raised on the mailing list in January 
> seems to be to use filesystem magic to look at the files, figure out whether 
> there are any segments, and write a new empty index if there are none.
> This sounds easy, but there are probably traps. For instance, there might be 
> files in the directory which don't really belong to the index. Earlier 
> versions of Lucene used to have a FilenameFilter which was usable to 
> distinguish one from the other, but that seems to have disappeared, making it 
> less obvious how to do this.
> This issue is presumed to exist in 3.x as well, I just haven't encountered it 
> yet because the only empty indices I have hit have been later versions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6563) MockFileSystemTestCase.testURI should be improved to handle cases where OS/JVM cannot create non-ASCII filenames

2015-07-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615065#comment-14615065
 ] 

ASF GitHub Bot commented on LUCENE-6563:


GitHub user cpoerschke opened a pull request:

https://github.com/apache/lucene-solr/pull/182

LUCENE-6563: tweak MockFileSystemTestCase.testURI assumptions

for https://issues.apache.org/jira/i#browse/LUCENE-6563

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bloomberg/lucene-solr trunk-lucene-6563

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/182.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #182


commit 6d792bba409fe080efad30c1e03f060f3c66a039
Author: Christine Poerschke 
Date:   2015-07-06T13:34:12Z

LUCENE-6563: tweak MockFileSystemTestCase.testURI assumptions

* testURI itself now is only for plain ASCII file name
* chinese file name now is in testURIchinese
* also added a testURIumlaute file name case
* implTestURI factored out to hold the test logic itself (if resolve fails 
for non-ASCII file names then the toUri part of the test is skipped)
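
A rough sketch of that resolve-or-skip pattern (hypothetical class, directory, and file names, using plain JUnit; the real change lives in MockFileSystemTestCase):

{code:java}
// If the OS/JVM encoding cannot represent the file name, skip instead of fail;
// otherwise assert the Path <-> URI round-trip.
import java.nio.file.InvalidPathException;
import java.nio.file.Path;
import java.nio.file.Paths;
import org.junit.Assert;
import org.junit.Assume;
import org.junit.Test;

public class UriRoundTripSketch {
  private void implTestURI(String fileName) {
    Path dir = Paths.get(System.getProperty("java.io.tmpdir"));
    Path path;
    try {
      path = dir.resolve(fileName); // may fail when sun.jnu.encoding cannot represent the name
    } catch (InvalidPathException e) {
      Assume.assumeNoException(e); // skip the toUri part on limited encodings
      return;
    }
    Assert.assertEquals(path, Paths.get(path.toUri())); // URI round-trip
  }

  @Test public void testURI()        { implTestURI("file"); }
  @Test public void testURIchinese() { implTestURI("中国"); }
  @Test public void testURIumlaute() { implTestURI("umläute"); }
}
{code}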




> MockFileSystemTestCase.testURI should be improved to handle cases where 
> OS/JVM cannot create non-ASCII filenames
> 
>
> Key: LUCENE-6563
> URL: https://issues.apache.org/jira/browse/LUCENE-6563
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Christine Poerschke
>Assignee: Dawid Weiss
>Priority: Minor
>
> {{ant test -Dtestcase=TestVerboseFS -Dtests.method=testURI 
> -Dtests.file.encoding=UTF-8}} fails (for example) with 'Oracle Corporation 
> 1.8.0_45 (64-bit)' when the default {{sun.jnu.encoding}} system property is 
> (for example) {{ANSI_X3.4-1968}}
> [details to follow]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request: LUCENE-6563: tweak MockFileSystemTestCas...

2015-07-06 Thread cpoerschke
GitHub user cpoerschke opened a pull request:

https://github.com/apache/lucene-solr/pull/182

LUCENE-6563: tweak MockFileSystemTestCase.testURI assumptions

for https://issues.apache.org/jira/i#browse/LUCENE-6563

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bloomberg/lucene-solr trunk-lucene-6563

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/182.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #182


commit 6d792bba409fe080efad30c1e03f060f3c66a039
Author: Christine Poerschke 
Date:   2015-07-06T13:34:12Z

LUCENE-6563: tweak MockFileSystemTestCase.testURI assumptions

* testURI itself now is only for plain ASCII file name
* chinese file name now is in testURIchinese
* also added a testURIumlaute file name case
* implTestURI factored out to hold the test logic itself (if resolve fails 
for non-ASCII file names then the toUri part of the test is skipped)




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6658) IndexUpgrader doesn't upgrade an index if it has zero segments

2015-07-06 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615059#comment-14615059
 ] 

Michael McCandless commented on LUCENE-6658:


bq. I changed the fake userdata call to:

+1, thanks [~thetaphi]

> IndexUpgrader doesn't upgrade an index if it has zero segments
> --
>
> Key: LUCENE-6658
> URL: https://issues.apache.org/jira/browse/LUCENE-6658
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.10.4, 5.2.1
>Reporter: Trejkaz
>Assignee: Uwe Schindler
> Attachments: LUCENE-6658.patch, LUCENE-6658.patch, LUCENE-6658.patch, 
> LUCENE-6658.patch, empty.4.10.4.zip
>
>
> IndexUpgrader uses merges to do its job. Therefore, if you use it to upgrade 
> an index with no segments, it will do nothing - it won't even update the 
> version numbers in the segments file, meaning that later versions of Lucene 
> will fail to open the index, despite the fact that you "upgraded" it.
> The suggested workaround when this was raised on the mailing list in January 
> seems to be to use filesystem magic to look at the files, figure out whether 
> there are any segments, and write a new empty index if there are none.
> This sounds easy, but there are probably traps. For instance, there might be 
> files in the directory which don't really belong to the index. Earlier 
> versions of Lucene used to have a FilenameFilter which was usable to 
> distinguish one from the other, but that seems to have disappeared, making it 
> less obvious how to do this.
> This issue is presumed to exist in 3.x as well, I just haven't encountered it 
> yet because the only empty indices I have hit have been later versions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6658) IndexUpgrader doesn't upgrade an index if it has zero segments

2015-07-06 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615058#comment-14615058
 ] 

Michael McCandless commented on LUCENE-6658:


bq. Or is commit data preserved while opening IndexWriter and reused when 
committing to a new commit point? If this is the case, I can use get/set maybe?

Yeah, it will be preserved, carried over from the commit point that IW had 
opened (the latest commit point in this case).

+1 to use get/set.

> IndexUpgrader doesn't upgrade an index if it has zero segments
> --
>
> Key: LUCENE-6658
> URL: https://issues.apache.org/jira/browse/LUCENE-6658
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.10.4, 5.2.1
>Reporter: Trejkaz
>Assignee: Uwe Schindler
> Attachments: LUCENE-6658.patch, LUCENE-6658.patch, LUCENE-6658.patch, 
> LUCENE-6658.patch, empty.4.10.4.zip
>
>
> IndexUpgrader uses merges to do its job. Therefore, if you use it to upgrade 
> an index with no segments, it will do nothing - it won't even update the 
> version numbers in the segments file, meaning that later versions of Lucene 
> will fail to open the index, despite the fact that you "upgraded" it.
> The suggested workaround when this was raised on the mailing list in January 
> seems to be to use filesystem magic to look at the files, figure out whether 
> there are any segments, and write a new empty index if there are none.
> This sounds easy, but there are probably traps. For instance, there might be 
> files in the directory which don't really belong to the index. Earlier 
> versions of Lucene used to have a FilenameFilter which was usable to 
> distinguish one from the other, but that seems to have disappeared, making it 
> less obvious how to do this.
> This issue is presumed to exist in 3.x as well, I just haven't encountered it 
> yet because the only empty indices I have hit have been later versions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6658) IndexUpgrader doesn't upgrade an index if it has zero segments

2015-07-06 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-6658:
--
Attachment: LUCENE-6658.patch

I changed the fake userdata call to:
{code:java}
w.setCommitData(w.getCommitData()); // fake change to enforce a commit (e.g. if 
index has no segments)
{code}
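
For illustration, a minimal standalone sketch (placeholder path and analyzer, 5.x-style IndexWriterConfig; IndexUpgrader itself additionally configures an UpgradeIndexMergePolicy) of what that call achieves:

{code:java}
// Re-setting the existing commit data preserves the user data but marks the
// writer as changed, so commit() rewrites segments_N even when there are zero
// segments to merge.
import java.nio.file.Paths;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class ForceCommitOnEmptyIndex {
  public static void main(String[] args) throws Exception {
    try (Directory dir = FSDirectory.open(Paths.get("/path/to/old/index"));
         IndexWriter w = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
      w.setCommitData(w.getCommitData()); // "fake" change, user data is carried over
      w.commit();
    }
  }
}
{code}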

> IndexUpgrader doesn't upgrade an index if it has zero segments
> --
>
> Key: LUCENE-6658
> URL: https://issues.apache.org/jira/browse/LUCENE-6658
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.10.4, 5.2.1
>Reporter: Trejkaz
>Assignee: Uwe Schindler
> Attachments: LUCENE-6658.patch, LUCENE-6658.patch, LUCENE-6658.patch, 
> LUCENE-6658.patch, empty.4.10.4.zip
>
>
> IndexUpgrader uses merges to do its job. Therefore, if you use it to upgrade 
> an index with no segments, it will do nothing - it won't even update the 
> version numbers in the segments file, meaning that later versions of Lucene 
> will fail to open the index, despite the fact that you "upgraded" it.
> The suggested workaround when this was raised on the mailing list in January 
> seems to be to use filesystem magic to look at the files, figure out whether 
> there are any segments, and write a new empty index if there are none.
> This sounds easy, but there are probably traps. For instance, there might be 
> files in the directory which don't really belong to the index. Earlier 
> versions of Lucene used to have a FilenameFilter which was usable to 
> distinguish one from the other, but that seems to have disappeared, making it 
> less obvious how to do this.
> This issue is presumed to exist in 3.x as well, I just haven't encountered it 
> yet because the only empty indices I have hit have been later versions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6365) Optimized iteration of finite strings

2015-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615053#comment-14615053
 ] 

ASF subversion and git services commented on LUCENE-6365:
-

Commit 1689405 from [~mikemccand] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1689405 ]

LUCENE-6365: fix test to not add duplicate strings

> Optimized iteration of finite strings
> -
>
> Key: LUCENE-6365
> URL: https://issues.apache.org/jira/browse/LUCENE-6365
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/other
>Affects Versions: 5.0
>Reporter: Markus Heiden
>Priority: Minor
>  Labels: patch, performance
> Fix For: 5.3, Trunk
>
> Attachments: FiniteStrings_noreuse.patch, FiniteStrings_reuse.patch, 
> LUCENE-6365.patch
>
>
> Replaced Operations.getFiniteStrings() by an optimized FiniteStringIterator.
> Benefits:
> Avoid huge hash set of finite strings.
> Avoid massive object/array creation during processing.
> "Downside":
> Iteration order changed, so when iterating with a limit, the result may 
> differ slightly. Old: emit current node, if accept / recurse. New: recurse / 
> emit current node, if accept.
> The old method Operations.getFiniteStrings() still exists, because it eases 
> the tests. It is now implemented by use of the new FiniteStringIterator.
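
For reference, a minimal sketch of enumerating finite strings through the retained Operations.getFiniteStrings entry point (class and string literals are illustrative; results are the same either way, only the iteration order may differ):

{code:java}
// Enumerate the finite strings of a small acyclic automaton.
import java.util.Arrays;
import java.util.Set;
import org.apache.lucene.util.IntsRef;
import org.apache.lucene.util.UnicodeUtil;
import org.apache.lucene.util.automaton.Automata;
import org.apache.lucene.util.automaton.Automaton;
import org.apache.lucene.util.automaton.Operations;

public class FiniteStringsSketch {
  public static void main(String[] args) {
    Automaton a = Operations.union(Arrays.asList(
        Automata.makeString("foo"), Automata.makeString("bar")));
    a = Operations.determinize(a, Operations.DEFAULT_MAX_DETERMINIZED_STATES);
    Set<IntsRef> strings = Operations.getFiniteStrings(a, -1); // -1 = no limit
    for (IntsRef ref : strings) {
      // for string automata the ints are Unicode code points
      System.out.println(UnicodeUtil.newString(ref.ints, ref.offset, ref.length));
    }
  }
}
{code}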



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


