[jira] [Commented] (SOLR-9408) Add solr commit data in TreeMergeRecordWriter

2016-09-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15486304#comment-15486304
 ] 

ASF subversion and git services commented on SOLR-9408:
---

Commit 2335cf7cd52323c02041f28ebdbf7f8c5bb5bb4e in lucene-solr's branch 
refs/heads/branch_6_2 from [~varunthacker]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2335cf7 ]

SOLR-9408: Fix TreeMergeOutputFormat to add timestamp metadata to commits


> Add solr commit data in TreeMergeRecordWriter
> -
>
> Key: SOLR-9408
> URL: https://issues.apache.org/jira/browse/SOLR-9408
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - MapReduce
>Reporter: Jessica Cheng Mallet
>Assignee: Varun Thacker
>  Labels: mapreduce, solrcloud
> Fix For: 6.2.1, master (7.0)
>
> Attachments: SOLR-9408.patch, SOLR-9408.patch, SOLR-9408.patch
>
>
> The Lucene index produced by TreeMergeRecordWriter when the segments are 
> merged doesn't contain Solr's commit data, specifically commitTimeMsec.
> This means that when this index is subsequently loaded into SolrCloud and 
> the index stays unchanged so that no newer commits occur, ADDREPLICA will appear 
> to succeed but will not actually do any full sync due to SOLR-9369, resulting 
> in an empty index being added as a replica.
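
For context, the missing metadata here is ordinary Lucene "commit user data". The sketch below is illustrative only: it uses the plain Lucene 6.x IndexWriter API (setCommitData) rather than Solr's actual TreeMergeOutputFormat code, the class name is made up, and the "commitTimeMsec" key is taken from the description above rather than from Solr's source.

import java.util.HashMap;
import java.util.Map;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;

public class CommitUserDataSketch {

    public static void main(String[] args) throws Exception {
        Directory dir = new RAMDirectory();
        IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));

        // Attach user data to the next commit point; a commit without this kind
        // of timestamp is what the merged index produced by the map-reduce job
        // was missing.
        Map<String, String> userData = new HashMap<>();
        userData.put("commitTimeMsec", String.valueOf(System.currentTimeMillis()));
        writer.setCommitData(userData);

        writer.commit(); // this commit now carries the timestamp metadata
        writer.close();
        dir.close();
    }
}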






[jira] [Resolved] (SOLR-9408) Add solr commit data in TreeMergeRecordWriter

2016-09-12 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker resolved SOLR-9408.
-
   Resolution: Fixed
Fix Version/s: (was: 6.3)
   6.2.1

Thanks Jessica for the patch and Shalin for the review!

> Add solr commit data in TreeMergeRecordWriter
> -
>
> Key: SOLR-9408
> URL: https://issues.apache.org/jira/browse/SOLR-9408
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - MapReduce
>Reporter: Jessica Cheng Mallet
>Assignee: Varun Thacker
>  Labels: mapreduce, solrcloud
> Fix For: 6.2.1, master (7.0)
>
> Attachments: SOLR-9408.patch, SOLR-9408.patch, SOLR-9408.patch
>
>
> The Lucene index produced by TreeMergeRecordWriter when the segments are 
> merged doesn't contain Solr's commit data, specifically commitTimeMsec.
> This means that when this index is subsequently loaded into SolrCloud and 
> the index stays unchanged so that no newer commits occur, ADDREPLICA will appear 
> to succeed but will not actually do any full sync due to SOLR-9369, resulting 
> in an empty index being added as a replica.






[jira] [Commented] (SOLR-9408) Add solr commit data in TreeMergeRecordWriter

2016-09-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15486299#comment-15486299
 ] 

ASF subversion and git services commented on SOLR-9408:
---

Commit 08453fb7f000342352c6c08dcdf83cdbda1694c6 in lucene-solr's branch 
refs/heads/branch_6x from [~varunthacker]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=08453fb ]

SOLR-9408: Fix TreeMergeOutputFormat to add timestamp metadata to commits


> Add solr commit data in TreeMergeRecordWriter
> -
>
> Key: SOLR-9408
> URL: https://issues.apache.org/jira/browse/SOLR-9408
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - MapReduce
>Reporter: Jessica Cheng Mallet
>Assignee: Varun Thacker
>  Labels: mapreduce, solrcloud
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9408.patch, SOLR-9408.patch, SOLR-9408.patch
>
>
> The Lucene index produced by TreeMergeRecordWriter when the segments are 
> merged doesn't contain Solr's commit data, specifically commitTimeMsec.
> This means that when this index is subsequently loaded into SolrCloud and 
> the index stays unchanged so that no newer commits occur, ADDREPLICA will appear 
> to succeed but will not actually do any full sync due to SOLR-9369, resulting 
> in an empty index being added as a replica.






[jira] [Commented] (SOLR-9408) Add solr commit data in TreeMergeRecordWriter

2016-09-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15486296#comment-15486296
 ] 

ASF subversion and git services commented on SOLR-9408:
---

Commit ef3057e43b6c3783f1324b2893eeb8702c86487c in lucene-solr's branch 
refs/heads/master from [~varunthacker]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ef3057e ]

SOLR-9408: Fix TreeMergeOutputFormat to add timestamp metadata to commits


> Add solr commit data in TreeMergeRecordWriter
> -
>
> Key: SOLR-9408
> URL: https://issues.apache.org/jira/browse/SOLR-9408
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - MapReduce
>Reporter: Jessica Cheng Mallet
>Assignee: Varun Thacker
>  Labels: mapreduce, solrcloud
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9408.patch, SOLR-9408.patch, SOLR-9408.patch
>
>
> The Lucene index produced by TreeMergeRecordWriter when the segments are 
> merged doesn't contain Solr's commit data, specifically commitTimeMsec.
> This means that when this index is subsequently loaded into SolrCloud and 
> the index stays unchanged so that no newer commits occur, ADDREPLICA will appear 
> to succeed but will not actually do any full sync due to SOLR-9369, resulting 
> in an empty index being added as a replica.






Erroneous tokenization behavior

2016-09-12 Thread Sattam Alsubaiee
I'm trying to understand the tokenization behavior in Lucene. When using
the StandardTokenizer in Lucene version 4.7.1 and tokenizing the string
"Tokenize me!" with the max token length set to 4, I get only the token
"me", but when using Lucene version 4.10.4, I get the tokens "Toke",
"nize", and "me".

When debugging what's happening, I see that the scanner in version 4.10.4
reads only x bytes at a time and then applies the tokenization, where x is
the max token length passed by the user, while in version 4.7.1 the
scanner fills the buffer irrespective of the max token length (it uses the
default buffer size to decide how many bytes it reads each time).

This is the commit that made the change:
https://github.com/apache/lucene-solr/commit/33204ddd895a26a56c1edd92594800ef285f0d4a

You can see in StandardTokenizer.java that this code was added and caused
this behavior:
if (scanner instanceof StandardTokenizerImpl) {
  scanner.setBufferSize(Math.min(length, 1024 * 1024)); // limit buffer size to 1M chars
}

I also see the same code in master.

Thanks,
Sattam

p.s. Here is the code to reproduce what I'm seeing.
version 4.7.1 (using the jar files here http://archive.apache.org/dist/lucene/java/4.7.1/)

import java.io.IOException;
import java.io.StringReader;

import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.AttributeSource.AttributeFactory;
import org.apache.lucene.util.Version;

public class Test {

    public static void main(String[] args) throws IOException {
        AttributeFactory factory = AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY;

        StandardTokenizer tokenizer = new StandardTokenizer(Version.LUCENE_47,
                factory, new StringReader("Tokenize me!"));
        tokenizer.setMaxTokenLength(4);
        tokenizer.reset();

        CharTermAttribute attr = tokenizer.addAttribute(CharTermAttribute.class);
        while (tokenizer.incrementToken()) {
            String term = attr.toString();
            System.out.println(term);
        }
    }
}

version 4.10.4 (using the jar files here http://archive.apache.org/dist/lucene/java/4.10.4/)

import java.io.IOException;
import java.io.StringReader;

import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.AttributeFactory;

public class Test {

    public static void main(String[] args) throws IOException {
        AttributeFactory factory = AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY;

        StandardTokenizer tokenizer = new StandardTokenizer(factory,
                new StringReader("Tokenize me!"));
        tokenizer.setMaxTokenLength(4);
        tokenizer.reset();

        CharTermAttribute attr = tokenizer.addAttribute(CharTermAttribute.class);
        while (tokenizer.incrementToken()) {
            String term = attr.toString();
            System.out.println(term);
        }
    }
}
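
If the goal on 4.10.4 is to get back the old behaviour (drop tokens longer than the limit instead of splitting them into chunks), one possible workaround is sketched below: leave the tokenizer's max token length at its default so the scanner buffer is not shrunk, and discard long tokens with a LengthFilter from lucene-analyzers-common afterwards. This is only a sketch against the 4.10.4 API, not an official fix, and the 4-character limit simply mirrors the example above.

import java.io.IOException;
import java.io.StringReader;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.miscellaneous.LengthFilter;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.AttributeFactory;

public class DropLongTokens {

    public static void main(String[] args) throws IOException {
        AttributeFactory factory = AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY;

        // Do not call setMaxTokenLength(4): the scanner keeps its default buffer,
        // so "Tokenize" is produced as a single token...
        StandardTokenizer tokenizer = new StandardTokenizer(factory,
                new StringReader("Tokenize me!"));

        // ...and the LengthFilter then drops (rather than splits) anything longer
        // than 4 characters.
        TokenStream stream = new LengthFilter(tokenizer, 1, 4);

        CharTermAttribute attr = stream.addAttribute(CharTermAttribute.class);
        stream.reset();
        while (stream.incrementToken()) {
            System.out.println(attr.toString()); // expected output: "me"
        }
        stream.end();
        stream.close();
    }
}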


[JENKINS] Lucene-Solr-SmokeRelease-6.x - Build # 147 - Still Failing

2016-09-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-6.x/147/

No tests ran.

Build Log:
[...truncated 40541 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.03 sec (5.1 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-6.3.0-src.tgz...
   [smoker] 30.0 MB in 0.04 sec (735.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-6.3.0.tgz...
   [smoker] 64.6 MB in 0.16 sec (402.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-6.3.0.zip...
   [smoker] 75.2 MB in 0.10 sec (774.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-6.3.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6070 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-6.3.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6070 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-6.3.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 226 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] Releases that don't seem to be tested:
   [smoker]   5.5.3
   [smoker] Traceback (most recent call last):
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/dev-tools/scripts/smokeTestRelease.py",
 line 1436, in <module>
   [smoker] main()
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/dev-tools/scripts/smokeTestRelease.py",
 line 1380, in main
   [smoker] smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, 
c.is_signed, ' '.join(c.test_args))
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/dev-tools/scripts/smokeTestRelease.py",
 line 1418, in smokeTest
   [smoker] unpackAndVerify(java, 'lucene', tmpDir, 'lucene-%s-src.tgz' % 
version, gitRevision, version, testArgs, baseURL)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/dev-tools/scripts/smokeTestRelease.py",
 line 597, in unpackAndVerify
   [smoker] verifyUnpacked(java, project, artifact, unpackPath, 
gitRevision, version, testArgs, tmpDir, baseURL)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/dev-tools/scripts/smokeTestRelease.py",
 line 743, in verifyUnpacked
   [smoker] confirmAllReleasesAreTestedForBackCompat(version, unpackPath)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/dev-tools/scripts/smokeTestRelease.py",
 line 1356, in confirmAllReleasesAreTestedForBackCompat
   [smoker] raise RuntimeError('some releases are not tested by 
TestBackwardsCompatibility?')
   [smoker] RuntimeError: some releases are not tested by 
TestBackwardsCompatibility?

BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/build.xml:559: 
exec returned: 1

Total time: 73 minutes 28 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any





[JENKINS] Lucene-Solr-Tests-master - Build # 1390 - Failure

2016-09-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1390/

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.lucene.search.TestBoolean2

Error Message:
GC overhead limit exceeded

Stack Trace:
java.lang.OutOfMemoryError: GC overhead limit exceeded
at __randomizedtesting.SeedInfo.seed([520982AAA1B3B324]:0)
at 
org.apache.lucene.util.packed.PackedInts.getReaderNoHeader(PackedInts.java:802)
at 
org.apache.lucene.codecs.compressing.CompressingStoredFieldsIndexReader.<init>(CompressingStoredFieldsIndexReader.java:91)
at 
org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.<init>(CompressingStoredFieldsReader.java:135)
at 
org.apache.lucene.codecs.compressing.CompressingStoredFieldsFormat.fieldsReader(CompressingStoredFieldsFormat.java:121)
at 
org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:119)
at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:74)
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:696)
at org.apache.lucene.util.TestUtil.checkIndex(TestUtil.java:300)
at 
org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:865)
at 
org.apache.lucene.search.TestBoolean2.afterClass(TestBoolean2.java:202)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.lucene.search.TestBoolean2

Error Message:
Clean up static fields (in @AfterClass?) and null them, your test still has 
references to classes of which the sizes cannot be measured due to security 
restrictions or Java 9 module encapsulation:   - private static 
org.apache.lucene.index.IndexReader org.apache.lucene.search.TestBoolean2.reader

Stack Trace:
junit.framework.AssertionFailedError: Clean up static fields (in @AfterClass?) 
and null them, your test still has references to classes of which the sizes 
cannot be measured due to security restrictions or Java 9 module encapsulation:
  - private static org.apache.lucene.index.IndexReader 
org.apache.lucene.search.TestBoolean2.reader
at 
com.carrotsearch.randomizedtesting.rules.StaticFieldsInvariantRule$1.afterAlways(StaticFieldsInvariantRule.java:146)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[JENKINS] Lucene-Solr-5.5-Linux (64bit/jdk1.7.0_80) - Build # 431 - Still Unstable!

2016-09-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Linux/431/
Java: 64bit/jdk1.7.0_80 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart

Error Message:
expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([9F938B470C7335A:D10EFC50DB1CF106]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart(TestReplicationHandler.java:624)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11057 lines...]
   [junit4] Suite: 

[jira] [Commented] (SOLR-9493) uniqueKey generation fails if content POSTed as "application/javabin" and uniqueKey field comes as NULL (as opposed to not coming at all).

2016-09-12 Thread Yury Kartsev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485887#comment-15485887
 ] 

Yury Kartsev commented on SOLR-9493:


Oh, I see, it's a feature, not a bug... I.e. if I send a null value it's 
considered to be a NULL value (as opposed to 'nothing') and will be stored like 
that... Very good to know. Something I've never encountered despite being 
familiar with SOLR to the degree of writing custom similarities :) Thank you, 
I'll try it tomorrow and let you know here.

> uniqueKey generation fails if content POSTed as "application/javabin" and 
> uniqueKey field comes as NULL (as opposed to not coming at all).
> --
>
> Key: SOLR-9493
> URL: https://issues.apache.org/jira/browse/SOLR-9493
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yury Kartsev
> Attachments: 200.png, 400.png, Screen Shot 2016-09-11 at 16.29.50 
> .png, SolrInputDoc_contents.png, SolrInputDoc_headers.png
>
>
> I have faced a weird issue when the same application code (using SolrJ) fails 
> indexing a document without a unique key (should be auto-generated by SOLR) 
> in SolrCloud and succeeds indexing it in standalone SOLR instance (or even in 
> cloud mode, but from web interface of one of the replicas). Difference is 
> obviously only between clients (CloudSolrClient vs HttpSolrClient) and SOLR 
> URLs (ZooKeeper hostname+port vs standalone SOLR instance hostname and port). 
> Failure is seen as "org.apache.solr.client.solrj.SolrServerException: 
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: 
> Document is missing mandatory uniqueKey field: id".
> I am using SOLR 5.1. In cloud mode I have 1 shard and 3 replicas.
> After a lot of debugging and investigation (see below as well as my 
> [StackOverflow 
> post|http://stackoverflow.com/questions/39401792/uniquekey-generation-does-not-work-in-solrcloud-but-works-if-standalone])
>  I came to a conclusion that the difference in failing and succeeding calls 
> is simply content type of the POSTing requests. Local proxy clearly shows 
> that the request fails if content is sent as "application/javabin" (see 
> attached screenshot with sensitive data removed) and succeeds if content sent 
> as "application/xml; charset=UTF-8"  (see attached screenshot with sensitive 
> data removed).
> Would you be able to please assist?
> Thank you very much in advance!
> 
> Copying whole description and investigation here as well:
> 
> [Documentation|https://cwiki.apache.org/confluence/display/solr/Other+Schema+Elements]
>  states:{quote}Schema defaults and copyFields cannot be used to populate the 
> uniqueKey field. You can use UUIDUpdateProcessorFactory to have uniqueKey 
> values generated automatically.{quote}
> Therefore I have added my uniqueKey field to the schema:{code}<fieldType name="uuid" class="solr.UUIDField" indexed="true" />
> ...
> <field name="id" type="uuid" />
> ...
> <uniqueKey>id</uniqueKey>{code}Then I have added updateRequestProcessorChain 
> to my solrconfig:{code}<updateRequestProcessorChain name="uuid">
>   <processor class="solr.UUIDUpdateProcessorFactory">
>     <str name="fieldName">id</str>
>   </processor>
>   <processor class="solr.RunUpdateProcessorFactory" />
> </updateRequestProcessorChain>{code}And made it the default for the 
> UpdateRequestHandler:{code}<requestHandler name="/update" class="solr.UpdateRequestHandler">
>   <lst name="defaults">
>     <str name="update.chain">uuid</str>
>   </lst>
> </requestHandler>{code}
> Adding new documents with null/absent id works fine as from web-interface of 
> one of the replicas, as when using SOLR in standalone mode (non-cloud) from 
> my application. Although when only I'm using SolrCloud and add document from 
> my application (using CloudSolrClient from SolrJ) it fails with 
> "org.apache.solr.client.solrj.SolrServerException: 
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: 
> Document is missing mandatory uniqueKey field: id"
> All other operations like ping or search for documents work fine in either 
> mode (standalone or cloud).
> INVESTIGATION (i.e. more details):
> In standalone mode obviously update request is:{code}POST 
> standalone_host:port/solr/collection_name/update?wt=json{code}
> In SOLR cloud mode, when adding document from one replica's web interface, 
> update request is (found through inspecting the call made by web interface): 
> {code}POST 
> replica_host:port/solr/collection_name_shard1_replica_1/update?wt=json{code}
> In both these cases payload is something like:{code}{
> "add": {
> "doc": {
>  .
> },
> "boost": 1.0,
> "overwrite": true,
> "commitWithin": 1000
> }
> }{code}
> In case when CloudSolrClient is used, the following happens (found through 
> debugging):
> Using ZK and some logic, URL list of replicas is constructed that looks like 
> this:{code}[http://replica_1_host:port/solr/collection_name/,
>  http://replica_2_host:port/solr/collection_name/,
>  

RE: Index partition corrupted during a regular flush due to FileNotFoundException on DEL file

2016-09-12 Thread 郑文兴
Thanks to Erick. I will check the disk space first.

 

So you mean that if there is less than 10G of free space, Lucene/Solr will delete 
some files to save disk space? Or will it cause Lucene/Solr to misbehave?

 

Please note that we have several shards/partitions under the same root 
directory, so which of the following is true for us? Let's assume we have 2 
partitions, A -> 10G and B -> 10G:

- Do we have to make sure there are at least 20G of disk space available?

- Or do we just need to make sure there are at least 10G of disk space available?

 

Best,

Wenxing

 

-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com] 
Sent: Monday, September 12, 2016 10:59 PM
To: dev@lucene.apache.org
Subject: Re: Index partition corrupted during a regular flush due to 
FileNotFoundException on DEL file

 

The del file should be present for each segment assuming it has any documents 
that have been updated or deleted.

 

Of course if some process external to Solr removed it, you'd get this error.

 

A less common reason is that your disk is full. Solr/Lucene require that you 
have at least as much free space on your disk as the index occupies. Thus if 
you have 10G of disk space used up by your index, you must have at least 10G 
free. Is it possible that you're running without enough disk space?

 

If anything like that is the case you should see errors in your Solr logs, 
assuming they haven't been rolled over. Is there anything suspicious there? 
Look for ERROR (all caps) and/or "Caused by" as a start.

 

Best,

Erick

 

On Mon, Sep 12, 2016 at 3:31 AM, 郑文兴 <zhen...@csdn.net> wrote:

> Dear all,

> 

> 

> 

> Today we found one of our index partitions was corrupted during the 

> regular flush, due to the FileNotFoundException on a del file. The 

> followings were the call stacks from the corresponding exception:

> 

> 

> 

> [2016-09-12 16:40:01,801][ERROR][qtp2107666786-40854][indexEngine ] 
> index [so_blog] commit ERROR:_oxep_7fa.del
> org.apache.lucene.store.FSDirectory.fileLength(FSDirectory.java:284)
> org.apache.lucene.index.SegmentInfo.sizeInBytes(SegmentInfo.java:303)
> org.apache.lucene.index.TieredMergePolicy.size(TieredMergePolicy.java:635)
> org.apache.lucene.index.TieredMergePolicy.useCompoundFile(TieredMergePolicy.java:611)
> org.apache.lucene.index.DocumentsWriter.flush(DocumentsWriter.java:593)
> org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3587)
> org.apache.lucene.index.IndexWriter.prepareCommit(IndexWriter.java:3376)
> org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:3485)
> org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3467)
> org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3451)
> org.apache.lucene.index.IndexEngine.flush(IndexEngine.java:409)

> 

> 

> 

> My questions are:

> 

> - Does anyone know the situation here? From the file system, I can’t 
> find the _oxep_7fa.del.
> 
> - How about the life cycle of the del file?

> 

> 

> 

> Note: The Lucene Core is on 3.6.2.

> 

> 

> 

> Appreciated for your kind advice.

> 

> Best Regards, Wenxing

 




[jira] [Commented] (SOLR-8097) Implement a builder pattern for constructing a Solrj client

2016-09-12 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485839#comment-15485839
 ] 

Shawn Heisey commented on SOLR-8097:


bq. the Builder could be extended instead of extending the constructor itself

I tried to extend the client in this way, adding a new setting to the derived 
client class and exposing it in the derived Builder, but ultimately it comes 
down to the same problem -- the "all parameters" constructor, which is the only 
one that will survive the transition to 7.0, cannot be used in an extended 
Client/Builder because it's private.

One option, which I would not want to employ because it would involve duplicate 
code that will quickly become stale, is to copy all the code in the private 
constructor and paste it into the subclass, then add parameters as required and 
use that constructor in a derived Builder class.

IMHO, the only sane option for experienced developers is to change the internal 
constructor from private to protected, allowing derivative classes to utilize 
it after doing class-specific setup.  The developer will usually also need to 
extend the internal Builder class to expose configuration of any new capability.

I like what we've done with the Builder, and I agree that after 7.0 removes 
deprecated code, the constructor should not be public ... but making it private 
is too limiting.


> Implement a builder pattern for constructing a Solrj client
> ---
>
> Key: SOLR-8097
> URL: https://issues.apache.org/jira/browse/SOLR-8097
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: 6.0
>Reporter: Hrishikesh Gadre
>Assignee: Anshum Gupta
> Fix For: 6.1, master (7.0)
>
> Attachments: SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch
>
>
> Currently SolrJ clients (e.g. CloudSolrClient) support multiple constructors 
> as follows:
> public CloudSolrClient(String zkHost) 
> public CloudSolrClient(String zkHost, HttpClient httpClient) 
> public CloudSolrClient(Collection<String> zkHosts, String chroot)
> public CloudSolrClient(Collection<String> zkHosts, String chroot, HttpClient 
> httpClient)
> public CloudSolrClient(String zkHost, boolean updatesToLeaders)
> public CloudSolrClient(String zkHost, boolean updatesToLeaders, HttpClient 
> httpClient)
> It is kind of problematic when introducing additional parameters (since 
> we need to introduce additional constructors). Instead it will be helpful to 
> provide a SolrClient Builder which can either provide default values or support 
> overriding specific parameters. 






[jira] [Commented] (SOLR-9493) uniqueKey generation fails if content POSTed as "application/javabin" and uniqueKey field comes as NULL (as opposed to not coming at all).

2016-09-12 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485792#comment-15485792
 ] 

Alexandre Rafalovitch commented on SOLR-9493:
-

Try adding 
[RemoveBlankFieldUpdateProcessorFactory|http://www.solr-start.com/javadoc/solr-lucene/org/apache/solr/update/processor/RemoveBlankFieldUpdateProcessorFactory.html]
 into the chain before your UUIDUpdateProcessorFactory. 

This should remove the empty field and then the key generator can do its job.
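
An alternative, purely client-side workaround (a sketch only, not a substitute for the processor-chain change suggested above; the helper class is hypothetical): don't add the uniqueKey field to the SolrInputDocument at all when its value is null, so the UUID processor sees a missing field rather than a NULL one.

import org.apache.solr.common.SolrInputDocument;

public class DocBuilder {

    public static SolrInputDocument buildDoc(String id, String title) {
        SolrInputDocument doc = new SolrInputDocument();
        if (id != null) {
            doc.addField("id", id); // only send the uniqueKey when there is a real value
        }
        doc.addField("title", title);
        return doc;
    }
}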

> uniqueKey generation fails if content POSTed as "application/javabin" and 
> uniqueKey field comes as NULL (as opposed to not coming at all).
> --
>
> Key: SOLR-9493
> URL: https://issues.apache.org/jira/browse/SOLR-9493
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yury Kartsev
> Attachments: 200.png, 400.png, Screen Shot 2016-09-11 at 16.29.50 
> .png, SolrInputDoc_contents.png, SolrInputDoc_headers.png
>
>
> I have faced a weird issue when the same application code (using SolrJ) fails 
> indexing a document without a unique key (should be auto-generated by SOLR) 
> in SolrCloud and succeeds indexing it in standalone SOLR instance (or even in 
> cloud mode, but from web interface of one of the replicas). Difference is 
> obviously only between clients (CloudSolrClient vs HttpSolrClient) and SOLR 
> URLs (ZooKeeper hostname+port vs standalone SOLR instance hostname and port). 
> Failure is seen as "org.apache.solr.client.solrj.SolrServerException: 
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: 
> Document is missing mandatory uniqueKey field: id".
> I am using SOLR 5.1. In cloud mode I have 1 shard and 3 replicas.
> After a lot of debugging and investigation (see below as well as my 
> [StackOverflow 
> post|http://stackoverflow.com/questions/39401792/uniquekey-generation-does-not-work-in-solrcloud-but-works-if-standalone])
>  I came to a conclusion that the difference in failing and succeeding calls 
> is simply content type of the POSTing requests. Local proxy clearly shows 
> that the request fails if content is sent as "application/javabin" (see 
> attached screenshot with sensitive data removed) and succeeds if content sent 
> as "application/xml; charset=UTF-8"  (see attached screenshot with sensitive 
> data removed).
> Would you be able to please assist?
> Thank you very much in advance!
> 
> Copying whole description and investigation here as well:
> 
> [Documentation|https://cwiki.apache.org/confluence/display/solr/Other+Schema+Elements]
>  states:{quote}Schema defaults and copyFields cannot be used to populate the 
> uniqueKey field. You can use UUIDUpdateProcessorFactory to have uniqueKey 
> values generated automatically.{quote}
> Therefore I have added my uniqueKey field to the schema:{code}<fieldType name="uuid" class="solr.UUIDField" indexed="true" />
> ...
> <field name="id" type="uuid" />
> ...
> <uniqueKey>id</uniqueKey>{code}Then I have added updateRequestProcessorChain 
> to my solrconfig:{code}<updateRequestProcessorChain name="uuid">
>   <processor class="solr.UUIDUpdateProcessorFactory">
>     <str name="fieldName">id</str>
>   </processor>
>   <processor class="solr.RunUpdateProcessorFactory" />
> </updateRequestProcessorChain>{code}And made it the default for the 
> UpdateRequestHandler:{code}<requestHandler name="/update" class="solr.UpdateRequestHandler">
>   <lst name="defaults">
>     <str name="update.chain">uuid</str>
>   </lst>
> </requestHandler>{code}
> Adding new documents with null/absent id works fine as from web-interface of 
> one of the replicas, as when using SOLR in standalone mode (non-cloud) from 
> my application. Although when only I'm using SolrCloud and add document from 
> my application (using CloudSolrClient from SolrJ) it fails with 
> "org.apache.solr.client.solrj.SolrServerException: 
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: 
> Document is missing mandatory uniqueKey field: id"
> All other operations like ping or search for documents work fine in either 
> mode (standalone or cloud).
> INVESTIGATION (i.e. more details):
> In standalone mode obviously update request is:{code}POST 
> standalone_host:port/solr/collection_name/update?wt=json{code}
> In SOLR cloud mode, when adding document from one replica's web interface, 
> update request is (found through inspecting the call made by web interface): 
> {code}POST 
> replica_host:port/solr/collection_name_shard1_replica_1/update?wt=json{code}
> In both these cases payload is something like:{code}{
> "add": {
> "doc": {
>  .
> },
> "boost": 1.0,
> "overwrite": true,
> "commitWithin": 1000
> }
> }{code}
> In case when CloudSolrClient is used, the following happens (found through 
> debugging):
> Using ZK and some logic, URL list of replicas is constructed that looks like 
> this:{code}[http://replica_1_host:port/solr/collection_name/,
>  http://replica_2_host:port/solr/collection_name/,
>  

[jira] [Commented] (SOLR-9497) HttpSolrClient.Builder Returns Unusable Connection

2016-09-12 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485760#comment-15485760
 ] 

Shawn Heisey commented on SOLR-9497:


bq. Type 'org/apache/http/impl/client/SystemDefaultHttpClient' (current frame, 
stack\[0\]) is not assignable to 
'org/apache/http/impl/client/CloseableHttpClient' (from method signature)

The SystemDefaultHttpClient class was introduced in HttpClient 4.2, and 
subsequently deprecated in HttpClient 4.3.  The CloseableHttpClient class was 
introduced in HttpClient 4.3.  As of 6.2.0, SolrJ's source code uses *both* 
SystemDefaultHttpClient and CloseableHttpClient.  SystemDefaultHttpClient is a 
derivative class descending from CloseableHttpClient, and SolrJ 6.2.0 uses this 
inheritance when passing objects.

My best guess (which I admit could be wrong) is that you've got HttpClient 
4.2.x jars on your classpath, either as a version-specific dependency from 
something else in your POM, or from jars being loaded when your application 
starts.  HttpClient 4.2 will not know about CloseableHttpClient, and probably 
would result in the error you are seeing.
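
One quick way to test that guess (a generic JVM trick, not a SolrJ API) is to print where the HttpClient classes are actually loaded from; if the reported jar is a 4.2.x artifact, the classpath is the culprit:

public class WhichHttpClient {

    public static void main(String[] args) {
        // org.apache.http.client.HttpClient exists in every HttpClient 4.x release,
        // so this shows which httpclient jar wins on the classpath.
        System.out.println(org.apache.http.client.HttpClient.class
                .getProtectionDomain().getCodeSource().getLocation());
        try {
            // Present only from HttpClient 4.3 onwards; a ClassNotFoundException
            // here means a pre-4.3 jar is being loaded.
            Class<?> c = Class.forName("org.apache.http.impl.client.CloseableHttpClient");
            System.out.println(c.getProtectionDomain().getCodeSource().getLocation());
        } catch (ClassNotFoundException e) {
            System.out.println("CloseableHttpClient not found - HttpClient is older than 4.3");
        }
    }
}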

Using a Builder call just like yours, with a project classpath that includes 
SolrJ 6.2.0 and is *known* to be clean, I have no issues. Based on what I have 
seen so far, this is NOT a Solr problem.  It is a problem with your specific 
development or execution environment.

As Erick already mentioned, this question belongs on the solr-user mailing 
list, or in the #solr IRC channel.  It does not belong in Jira until the 
problem has been investigated and determined to be a bug.

If you want to continue discussing this beyond this reply, please take the 
discussion to the mailing list or IRC channel.

> HttpSolrClient.Builder Returns Unusable Connection
> --
>
> Key: SOLR-9497
> URL: https://issues.apache.org/jira/browse/SOLR-9497
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java
>Affects Versions: 6.2
> Environment: Java 1.8 Mac OSX
>Reporter: Will McGinnis
>  Labels: SolrJ
> Fix For: 6.1.1
>
>
> SolrClient solr = new HttpSolrClient.Builder(urlString).build();
> Exception in thread "main" java.lang.VerifyError: Bad return type
> Exception Details:
>   Location:
>  
> org/apache/solr/client/solrj/impl/HttpClientUtil.createClient(Lorg/apache/solr/common/params/SolrParams;Lorg/apache/http/conn/ClientConnectionManager;)Lorg/apache/http/impl/client/CloseableHttpClient;
>  @58: areturn
>   Reason:
> Type 'org/apache/http/impl/client/DefaultHttpClient' (current frame, 
> stack[0]) is not assignable to 
> 'org/apache/http/impl/client/CloseableHttpClient' (from method signature)
>   Current Frame:
> bci: @58
> flags: { }
> locals: { 'org/apache/solr/common/params/SolrParams', 
> 'org/apache/http/conn/ClientConnectionManager', 
> 'org/apache/solr/common/params/ModifiableSolrParams', 
> 'org/apache/http/impl/client/DefaultHttpClient' }
> stack: { 'org/apache/http/impl/client/DefaultHttpClient' }
>   Bytecode:
> 0x000: bb00 0359 2ab7 0004 4db2 0005 b900 0601
> 0x010: 0099 001e b200 05bb 0007 59b7 0008 1209
> 0x020: b600 0a2c b600 0bb6 000c b900 0d02 002b
> 0x030: b800 104e 2d2c b800 0f2d b0
>   Stackmap Table:
> append_frame(@47,Object[#143])
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.<init>(HttpSolrClient.java:209)
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient$Builder.build(HttpSolrClient.java:874)
> I have tried upgrading to httpclient-4.5.2. This appears to create the same 
> problem. For now, I use this deprecated, connection code.
> return new HttpSolrClient(urlString, new SystemDefaultHttpClient());
> Eventually, this hangs the Solr server, because you run out of file handles.
> I suspect calling solrClient.close() is doing nothing.
> I tried not closing and using a static connection to Solr.
> This results in basically, the same problem of, eventually hanging the Solr 
> server.






[jira] [Resolved] (LUCENE-7425) poll-mirrors.pl requires additional perl packages?

2016-09-12 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved LUCENE-7425.

Resolution: Fixed

> poll-mirrors.pl requires additional perl packages?
> --
>
> Key: LUCENE-7425
> URL: https://issues.apache.org/jira/browse/LUCENE-7425
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
> Fix For: 6.x, master (7.0)
>
> Attachments: LUCENE-7425-add-path-and-details-options.patch, 
> LUCENE-7425-add-path-and-details-options.patch, LUCENE-7425.patch
>
>
> I have a newish Ubuntu 16.04.1 install ... and I'm doing the Lucene/Solr 
> 6.2.0 release on it.
> Our release process is already hard enough.
> When I get to the step to poll the mirrors to see whether Maven central and 
> the apache mirrors have the release bits yet, I hit this:
> {noformat}
> 14:51 $ perl ../dev-tools/scripts/poll-mirrors.pl -version 6.2.0
> perl ../dev-tools/scripts/poll-mirrors.pl -version 6.2.0
> Can't locate LWP/UserAgent.pm in @INC (you may need to install the 
> LWP::UserAgent module) (@INC contains: /etc/perl 
> /usr/local/lib/x86_64-linux-gnu/perl/5.22.1 /usr/local/share/perl/5.22.1 
> /usr/lib/x86_64-linux-gnu/perl5/5.22 /usr/share/perl5 
> /usr/lib/x86_64-linux-gnu/perl/5.22 /usr/share/perl/5.22 
> /usr/local/lib/site_perl /usr/lib/x86_64-linux-gnu/perl-base .) at 
> ../dev-tools/scripts/poll-mirrors.pl line 31.
> BEGIN failed--compilation aborted at ../dev-tools/scripts/poll-mirrors.pl 
> line 31.
> {noformat}
> How can it be that such a trivial script would need optional perl packages 
> installed?  It seems all it's trying to do is download stuff over HTTP at 
> this point?
> So I fire up {{cpan}}, asking it to install {{LWP/UserAgent.pm}} and it hits 
> all sorts of errors that I cannot understand.
> Can we somehow simplify this script to use mere mortal perl packages?  Or is 
> something badly wrong with my Ubuntu install?  Maybe we should rewrite this 
> in a proper scripting language that has batteries included and also starts 
> with P ;)






[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1115 - Failure

2016-09-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1115/

4 tests failed.
FAILED:  org.apache.solr.cloud.hdfs.HdfsCollectionsAPIDistributedZkTest.test

Error Message:
Timeout occured while waiting response from server at: 
https://127.0.0.1:38503/ff_/i

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: https://127.0.0.1:38503/ff_/i
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:619)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:261)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:250)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.makeRequest(CollectionsAPIDistributedZkTest.java:400)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:477)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:180)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Commented] (SOLR-9497) HttpSolrClient.Builder Returns Unusable Connection

2016-09-12 Thread Will McGinnis (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485579#comment-15485579
 ] 

Will McGinnis commented on SOLR-9497:
-

I removed solr-core 



return new HttpSolrClient.Builder(urlString).build(); 



Exception in thread "main" java.lang.VerifyError: Bad return type 

Exception Details: 

Location: 

org/apache/solr/client/solrj/impl/HttpClientUtil.createClient(Lorg/apache/solr/common/params/SolrParams;)Lorg/apache/http/impl/client/CloseableHttpClient;
 @57: areturn 

Reason: 

Type 'org/apache/http/impl/client/SystemDefaultHttpClient' (current frame, 
stack[0]) is not assignable to 
'org/apache/http/impl/client/CloseableHttpClient' (from method signature) 

Current Frame: 

bci: @57 

flags: { } 

locals: { 'org/apache/solr/common/params/SolrParams', 
'org/apache/solr/common/params/ModifiableSolrParams', 
'org/apache/http/impl/client/SystemDefaultHttpClient' } 

stack: { 'org/apache/http/impl/client/SystemDefaultHttpClient' } 

Bytecode: 

0x000: bb00 0359 2ab7 0004 4cb2 0005 b900 0601 

0x010: 0099 001e b200 05bb 0007 59b7 0008 1209 

0x020: b600 0a2b b600 0bb6 000c b900 0d02 00b8 

0x030: 000e 4d2c 2bb8 000f 2cb0 

Stackmap Table: 

append_frame(@47,Object[#143]) 




at org.apache.solr.client.solrj.impl.HttpSolrClient.<init>(HttpSolrClient.java:209) 

at org.apache.solr.client.solrj.impl.HttpSolrClient$Builder.build(HttpSolrClient.java:874) 


> HttpSolrClient.Builder Returns Unusable Connection
> --
>
> Key: SOLR-9497
> URL: https://issues.apache.org/jira/browse/SOLR-9497
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java
>Affects Versions: 6.2
> Environment: Java 1.8 Mac OSX
>Reporter: Will McGinnis
>  Labels: SolrJ
> Fix For: 6.1.1
>
>
> SolrClient solr = new HttpSolrClient.Builder(urlString).build();
> Exception in thread "main" java.lang.VerifyError: Bad return type
> Exception Details:
>   Location:
>  
> org/apache/solr/client/solrj/impl/HttpClientUtil.createClient(Lorg/apache/solr/common/params/SolrParams;Lorg/apache/http/conn/ClientConnectionManager;)Lorg/apache/http/impl/client/CloseableHttpClient;
>  @58: areturn
>   Reason:
> Type 'org/apache/http/impl/client/DefaultHttpClient' (current frame, 
> stack[0]) is not assignable to 
> 'org/apache/http/impl/client/CloseableHttpClient' (from method signature)
>   Current Frame:
> bci: @58
> flags: { }
> locals: { 'org/apache/solr/common/params/SolrParams', 
> 'org/apache/http/conn/ClientConnectionManager', 
> 'org/apache/solr/common/params/ModifiableSolrParams', 
> 'org/apache/http/impl/client/DefaultHttpClient' }
> stack: { 'org/apache/http/impl/client/DefaultHttpClient' }
>   Bytecode:
> 0x000: bb00 0359 2ab7 0004 4db2 0005 b900 0601
> 0x010: 0099 001e b200 05bb 0007 59b7 0008 1209
> 0x020: b600 0a2c b600 0bb6 000c b900 0d02 002b
> 0x030: b800 104e 2d2c b800 0f2d b0
>   Stackmap Table:
> append_frame(@47,Object[#143])
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.<init>(HttpSolrClient.java:209)
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient$Builder.build(HttpSolrClient.java:874)
> I have tried upgrading to httpclient-4.5.2. This appears to create the same 
> problem. For now, I use this deprecated connection code.
> return new HttpSolrClient(urlString, new SystemDefaultHttpClient());
> Eventually, this hangs the Solr server, because you run out of file handles.
> I suspect calling solrClient.close() is doing nothing.
> I tried not closing and using a static connection to Solr.
> This results in basically, the same problem of, eventually hanging the Solr 
> server.






[jira] [Commented] (SOLR-8097) Implement a builder pattern for constructing a Solrj client

2016-09-12 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485554#comment-15485554
 ] 

Anshum Gupta commented on SOLR-8097:


Right, we can open up the constructor for subclassing, but I can't figure out the 
need. I may be missing something here, but the Builder could be extended instead 
of extending the constructor itself, and I think that's the right way to go 
considering we'd be doing away with access to the constructors in 7.0 anyway.

> Implement a builder pattern for constructing a Solrj client
> ---
>
> Key: SOLR-8097
> URL: https://issues.apache.org/jira/browse/SOLR-8097
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: 6.0
>Reporter: Hrishikesh Gadre
>Assignee: Anshum Gupta
> Fix For: 6.1, master (7.0)
>
> Attachments: SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch
>
>
> Currently SolrJ clients (e.g. CloudSolrClient) support multiple constructors 
> as follows:
> public CloudSolrClient(String zkHost) 
> public CloudSolrClient(String zkHost, HttpClient httpClient) 
> public CloudSolrClient(Collection<String> zkHosts, String chroot)
> public CloudSolrClient(Collection<String> zkHosts, String chroot, HttpClient 
> httpClient)
> public CloudSolrClient(String zkHost, boolean updatesToLeaders)
> public CloudSolrClient(String zkHost, boolean updatesToLeaders, HttpClient 
> httpClient)
> It is kind of problematic when introducing additional parameters (since 
> we need to introduce additional constructors). Instead it will be helpful to 
> provide a SolrClient Builder which can either provide default values or support 
> overriding specific parameters. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-8097) Implement a builder pattern for constructing a Solrj client

2016-09-12 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta resolved SOLR-8097.

Resolution: Fixed

Marking this as resolved to avoid confusion.

P.S.: That only means that this was committed and released already. It's still 
open for further fixes as part of another issue, though.

> Implement a builder pattern for constructing a Solrj client
> ---
>
> Key: SOLR-8097
> URL: https://issues.apache.org/jira/browse/SOLR-8097
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: 6.0
>Reporter: Hrishikesh Gadre
>Assignee: Anshum Gupta
> Fix For: master (7.0), 6.1
>
> Attachments: SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch
>
>
> Currently Solrj clients (e.g. CloudSolrClient) support multiple constructors, 
> as follows:
> public CloudSolrClient(String zkHost) 
> public CloudSolrClient(String zkHost, HttpClient httpClient) 
> public CloudSolrClient(Collection<String> zkHosts, String chroot)
> public CloudSolrClient(Collection<String> zkHosts, String chroot, HttpClient 
> httpClient)
> public CloudSolrClient(String zkHost, boolean updatesToLeaders)
> public CloudSolrClient(String zkHost, boolean updatesToLeaders, HttpClient 
> httpClient)
> It is kind of problematic when introducing additional parameters (since we 
> need to introduce additional constructors). Instead it would be helpful to 
> provide a SolrClient Builder which can either supply default values or 
> support overriding specific parameters. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8097) Implement a builder pattern for constructing a Solrj client

2016-09-12 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-8097:
---
Fix Version/s: (was: 6.2)
   6.1

> Implement a builder pattern for constructing a Solrj client
> ---
>
> Key: SOLR-8097
> URL: https://issues.apache.org/jira/browse/SOLR-8097
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: 6.0
>Reporter: Hrishikesh Gadre
>Assignee: Anshum Gupta
> Fix For: 6.1, master (7.0)
>
> Attachments: SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch
>
>
> Currently Solrj clients (e.g. CloudSolrClient) support multiple constructors, 
> as follows:
> public CloudSolrClient(String zkHost) 
> public CloudSolrClient(String zkHost, HttpClient httpClient) 
> public CloudSolrClient(Collection<String> zkHosts, String chroot)
> public CloudSolrClient(Collection<String> zkHosts, String chroot, HttpClient 
> httpClient)
> public CloudSolrClient(String zkHost, boolean updatesToLeaders)
> public CloudSolrClient(String zkHost, boolean updatesToLeaders, HttpClient 
> httpClient)
> It is kind of problematic when introducing additional parameters (since we 
> need to introduce additional constructors). Instead it would be helpful to 
> provide a SolrClient Builder which can either supply default values or 
> support overriding specific parameters. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9497) HttpSolrClient.Builder Returns Unusable Connection

2016-09-12 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485491#comment-15485491
 ] 

Shawn Heisey commented on SOLR-9497:


bq. <artifactId>solr-core</artifactId>

Why are you including solr-core if your code is using HttpSolrClient?  You only 
need solr-core if you want to actually include a full Solr server in your 
program -- which is what the EmbeddedSolrServer class does.

https://lucene.apache.org/solr/6_2_0/solr-core/org/apache/solr/client/solrj/embedded/EmbeddedSolrServer.html

Removing that probably isn't going to fix the problem, but it does look like a 
completely unnecessary dependency to me.

The error you have included here sounds like you have a very old HttpClient 
version (probably 3.x) on your classpath, in addition to the 4.4.x version of 
HttpClient jars that SolrJ 6.2 is including.

It may be worthwhile to include the entire Maven POM file for your project.  
Any extra HttpClient jars may not be coming from the dependencies included by 
Maven in this particular project, they may have ended up on your classpath from 
other sources.
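
One quick way to see which jar a conflicting class is actually loaded from is a 
small classpath probe along these lines (just a sketch; the class names probed 
here are examples):

{code}
// Prints the jar (CodeSource) each HttpClient class was loaded from, which
// helps spot a stale or duplicate HttpClient version on the classpath.
public class ClasspathProbe {
  public static void main(String[] args) throws Exception {
    String[] names = {
        "org.apache.http.impl.client.CloseableHttpClient",
        "org.apache.http.impl.client.DefaultHttpClient"
    };
    for (String name : names) {
      Class<?> clazz = Class.forName(name);
      Object location = clazz.getProtectionDomain().getCodeSource() == null
          ? "bootstrap/unknown"
          : clazz.getProtectionDomain().getCodeSource().getLocation();
      System.out.println(name + " -> " + location);
    }
  }
}
{code}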

> HttpSolrClient.Builder Returns Unusable Connection
> --
>
> Key: SOLR-9497
> URL: https://issues.apache.org/jira/browse/SOLR-9497
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java
>Affects Versions: 6.2
> Environment: Java 1.8 Mac OSX
>Reporter: Will McGinnis
>  Labels: SolrJ
> Fix For: 6.1.1
>
>
> SolrClient solr = new HttpSolrClient.Builder(urlString).build();
> Exception in thread "main" java.lang.VerifyError: Bad return type
> Exception Details:
>   Location:
>  
> org/apache/solr/client/solrj/impl/HttpClientUtil.createClient(Lorg/apache/solr/common/params/SolrParams;Lorg/apache/http/conn/ClientConnectionManager;)Lorg/apache/http/impl/client/CloseableHttpClient;
>  @58: areturn
>   Reason:
> Type 'org/apache/http/impl/client/DefaultHttpClient' (current frame, 
> stack[0]) is not assignable to 
> 'org/apache/http/impl/client/CloseableHttpClient' (from method signature)
>   Current Frame:
> bci: @58
> flags: { }
> locals: { 'org/apache/solr/common/params/SolrParams', 
> 'org/apache/http/conn/ClientConnectionManager', 
> 'org/apache/solr/common/params/ModifiableSolrParams', 
> 'org/apache/http/impl/client/DefaultHttpClient' }
> stack: { 'org/apache/http/impl/client/DefaultHttpClient' }
>   Bytecode:
> 0x000: bb00 0359 2ab7 0004 4db2 0005 b900 0601
> 0x010: 0099 001e b200 05bb 0007 59b7 0008 1209
> 0x020: b600 0a2c b600 0bb6 000c b900 0d02 002b
> 0x030: b800 104e 2d2c b800 0f2d b0
>   Stackmap Table:
> append_frame(@47,Object[#143])
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.<init>(HttpSolrClient.java:209)
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient$Builder.build(HttpSolrClient.java:874)
> I have tried upgrading to httpclient-4.5.2, which produces the same problem. 
> For now, I use this deprecated connection code:
> return new HttpSolrClient(urlString, new SystemDefaultHttpClient());
> Eventually this hangs the Solr server, because you run out of file handles; 
> I suspect calling solrClient.close() is doing nothing.
> I also tried not closing the client and using a static connection to Solr, 
> which eventually hangs the Solr server in the same way.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 842 - Still Unstable!

2016-09-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/842/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDynamicLoading

Error Message:
Could not get expected value  'X val' for path 'x' full output: {   
"responseHeader":{ "status":0, "QTime":0},   "params":{"wt":"json"},   
"context":{ "webapp":"/y_g/ad", "path":"/test1", 
"httpMethod":"GET"},   
"class":"org.apache.solr.core.BlobStoreTestRequestHandler",   "x":null},  from 
server:  null

Stack Trace:
java.lang.AssertionError: Could not get expected value  'X val' for path 'x' 
full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "params":{"wt":"json"},
  "context":{
"webapp":"/y_g/ad",
"path":"/test1",
"httpMethod":"GET"},
  "class":"org.apache.solr.core.BlobStoreTestRequestHandler",
  "x":null},  from server:  null
at 
__randomizedtesting.SeedInfo.seed([BA1D201B936A3587:62500D4C64B79027]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:535)
at 
org.apache.solr.core.TestDynamicLoading.testDynamicLoading(TestDynamicLoading.java:232)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  

[JENKINS] Lucene-Solr-5.5-Linux (32bit/jdk1.8.0_102) - Build # 430 - Still Unstable!

2016-09-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Linux/430/
Java: 32bit/jdk1.8.0_102 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestSolrConfigHandlerCloud

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.handler.TestSolrConfigHandlerCloud: 1) Thread[id=4420, 
name=Thread-1996, state=TIMED_WAITING, group=TGRP-TestSolrConfigHandlerCloud]   
  at java.lang.Thread.sleep(Native Method) at 
org.apache.solr.cloud.ZkSolrResourceLoader.openResource(ZkSolrResourceLoader.java:101)
 at 
org.apache.solr.core.SolrResourceLoader.openSchema(SolrResourceLoader.java:355) 
at 
org.apache.solr.schema.IndexSchemaFactory.create(IndexSchemaFactory.java:48)
 at 
org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:70)
 at 
org.apache.solr.core.ConfigSetService.createIndexSchema(ConfigSetService.java:108)
 at 
org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:79)   
  at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:920) 
at org.apache.solr.core.SolrCore$11.run(SolrCore.java:2623) at 
org.apache.solr.cloud.ZkController$5.run(ZkController.java:2480)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.handler.TestSolrConfigHandlerCloud: 
   1) Thread[id=4420, name=Thread-1996, state=TIMED_WAITING, 
group=TGRP-TestSolrConfigHandlerCloud]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.solr.cloud.ZkSolrResourceLoader.openResource(ZkSolrResourceLoader.java:101)
at 
org.apache.solr.core.SolrResourceLoader.openSchema(SolrResourceLoader.java:355)
at 
org.apache.solr.schema.IndexSchemaFactory.create(IndexSchemaFactory.java:48)
at 
org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:70)
at 
org.apache.solr.core.ConfigSetService.createIndexSchema(ConfigSetService.java:108)
at 
org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:79)
at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:920)
at org.apache.solr.core.SolrCore$11.run(SolrCore.java:2623)
at org.apache.solr.cloud.ZkController$5.run(ZkController.java:2480)
at __randomizedtesting.SeedInfo.seed([64370980C7089CB8]:0)




Build Log:
[...truncated 11173 lines...]
   [junit4] Suite: org.apache.solr.handler.TestSolrConfigHandlerCloud
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-5.5-Linux/solr/build/solr-core/test/J0/temp/solr.handler.TestSolrConfigHandlerCloud_64370980C7089CB8-001/init-core-data-001
   [junit4]   2> 435741 INFO  
(SUITE-TestSolrConfigHandlerCloud-seed#[64370980C7089CB8]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false)
   [junit4]   2> 435741 INFO  
(SUITE-TestSolrConfigHandlerCloud-seed#[64370980C7089CB8]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 435743 INFO  
(TEST-TestSolrConfigHandlerCloud.test-seed#[64370980C7089CB8]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 435744 INFO  (Thread-1916) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 435744 INFO  (Thread-1916) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 435844 INFO  
(TEST-TestSolrConfigHandlerCloud.test-seed#[64370980C7089CB8]) [] 
o.a.s.c.ZkTestServer start zk server on port:35061
   [junit4]   2> 435845 INFO  
(TEST-TestSolrConfigHandlerCloud.test-seed#[64370980C7089CB8]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 435845 INFO  
(TEST-TestSolrConfigHandlerCloud.test-seed#[64370980C7089CB8]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 435847 INFO  (zkCallback-462-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@a76fbf name:ZooKeeperConnection 
Watcher:127.0.0.1:35061 got event WatchedEvent state:SyncConnected type:None 
path:null path:null type:None
   [junit4]   2> 435847 INFO  
(TEST-TestSolrConfigHandlerCloud.test-seed#[64370980C7089CB8]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 435847 INFO  
(TEST-TestSolrConfigHandlerCloud.test-seed#[64370980C7089CB8]) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2> 435847 INFO  
(TEST-TestSolrConfigHandlerCloud.test-seed#[64370980C7089CB8]) [] 
o.a.s.c.c.SolrZkClient makePath: /solr
   [junit4]   2> 435848 INFO  
(TEST-TestSolrConfigHandlerCloud.test-seed#[64370980C7089CB8]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 435848 INFO  
(TEST-TestSolrConfigHandlerCloud.test-seed#[64370980C7089CB8]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   

[jira] [Commented] (LUCENE-7432) TestIndexWriterOnError.testCheckpoint fails on IBM J9

2016-09-12 Thread Kevin Langman (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485403#comment-15485403
 ] 

Kevin Langman commented on LUCENE-7432:
---

Ah... I tried using IBM Java 8.0.3.10 and the RuntimeException goes away. Not 
sure what fixed this.

> TestIndexWriterOnError.testCheckpoint fails on IBM J9
> -
>
> Key: LUCENE-7432
> URL: https://issues.apache.org/jira/browse/LUCENE-7432
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>  Labels: IBM-J9
>
> Not sure if this is a J9 issue or a Lucene issue, but using this version of 
> J9:
> {noformat}
> 09:26 $ java -version
> java version "1.8.0"
> Java(TM) SE Runtime Environment (build pxa6480sr3fp10-20160720_02(SR3fp10))
> IBM J9 VM (build 2.8, JRE 1.8.0 Linux amd64-64 Compressed References 
> 20160719_312156 (JIT enabled, AOT enabled)
> J9VM - R28_Java8_SR3_20160719_1144_B312156
> JIT  - tr.r14.java_20160629_120284.01
> GC   - R28_Java8_SR3_20160719_1144_B312156_CMPRSS
> J9CL - 20160719_312156)
> JCL - 20160719_01 based on Oracle jdk8u101-b13
> {noformat}
> This test failure seems to reproduce:
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestIndexWriterOnVMError -Dtests.method=testCheckpoint 
> -Dtests.seed=FAB0DC147AFDBF4E -Dtests.nightly=true -Dtests.slow=true 
> -Dtests.locale=kn -Dtests.timezone=Australia/South -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] ERROR196s | TestIndexWriterOnVMError.testCheckpoint <<<
>[junit4]> Throwable #1: java.lang.RuntimeException: 
> MockDirectoryWrapper: cannot close: there are still 9 open files: 
> {_2_Asserting_0.pos=1, _2_Asserting_0.dvd=1, _2.fdt=1, _2_Asserting_0.doc=1, 
> _2_Asserting_0.tim=1, _2.nvd=1, _2.tvd=1, _3.cfs=1, _2.dim=1}
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([FAB0DC147AFDBF4E:FBA18A7C5B16548D]:0)
>[junit4]>  at 
> org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:841)
>[junit4]>  at 
> org.apache.lucene.index.TestIndexWriterOnVMError.doTest(TestIndexWriterOnVMError.java:89)
>[junit4]>  at 
> org.apache.lucene.index.TestIndexWriterOnVMError.testCheckpoint(TestIndexWriterOnVMError.java:280)
>[junit4]>  at java.lang.Thread.run(Thread.java:785)
>[junit4]> Caused by: java.lang.RuntimeException: unclosed IndexInput: 
> _2.dim
>[junit4]>  at 
> org.apache.lucene.store.MockDirectoryWrapper.addFileHandle(MockDirectoryWrapper.java:732)
>[junit4]>  at 
> org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:776)
>[junit4]>  at 
> org.apache.lucene.codecs.lucene60.Lucene60PointsReader.<init>(Lucene60PointsReader.java:85)
>[junit4]>  at 
> org.apache.lucene.codecs.lucene60.Lucene60PointsFormat.fieldsReader(Lucene60PointsFormat.java:104)
>[junit4]>  at 
> org.apache.lucene.codecs.asserting.AssertingPointsFormat.fieldsReader(AssertingPointsFormat.java:66)
>[junit4]>  at 
> org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:128)
>[junit4]>  at 
> org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:74)
>[junit4]>  at 
> org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:145)
>[junit4]>  at 
> org.apache.lucene.index.ReadersAndUpdates.getReadOnlyClone(ReadersAndUpdates.java:197)
>[junit4]>  at 
> org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:103)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:460)
>[junit4]>  at 
> org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:103)
>[junit4]>  at 
> org.apache.lucene.index.TestIndexWriterOnVMError.doTest(TestIndexWriterOnVMError.java:175)
>[junit4]>  ... 37 more
>[junit4]   2> NOTE: leaving temporary files on disk at: 
> /l/trunk/lucene/build/core/test/J0/temp/lucene.index.TestIndexWriterOnVMError_FAB0DC147AFDBF4E-001
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene62), 
> sim=ClassicSimilarity, locale=kn, timezone=Australia/South
>[junit4]   2> NOTE: Linux 4.4.0-34-generic amd64/IBM Corporation 1.8.0 
> (64-bit)/cpus=8,threads=1,free=55483576,total=76742656
>[junit4]   2> NOTE: All tests run in this JVM: [TestIndexWriterOnVMError]
> {noformat}
> The test is quite stressful, provoking "unexpected" exceptions at tricky 
> times for {{IndexWriter}}.
> When I run with Oracle's 1.8.0_101 with that same "reproduce with", the test 
> passes.
> I see a similar failure for {{testUnknownError}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To 

[jira] [Commented] (SOLR-8097) Implement a builder pattern for constructing a Solrj client

2016-09-12 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485336#comment-15485336
 ] 

Shawn Heisey commented on SOLR-8097:


What is your *specific* goal in creating a subclass?  I ask because the X is 
usually more important than the Y.  See this:

http://people.apache.org/~hossman/#xyproblem

A SolrClient object is a complex thing, particularly the Cloud version.  
Although we try to keep the public API from changing much in minor releases, 
the internal implementation is a VERY different story.  Because the 
implementation can change dramatically from release to release, certain details 
are kept private.  This reduces the risk of breaking user code.

That said, there really is no reason we should *prevent* subclassing like we 
currently do, even if we recommend not doing it because it makes user code 
brittle.

It makes sense to change the kitchen-sink constructor from private to 
protected.  SOLR-8975 might be a good place to tackle this, but it might need 
its own issue.

I think we should also recommend extending the Builder when subclassing.  When 
7.0 is released, all public constructors will be gone, and the Builder will be 
the *only* way to create a client object.
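
For reference, a rough sketch of builder-based construction (the HttpSolrClient 
line mirrors what SolrJ 6.2 already supports; the CloudSolrClient builder calls 
follow the 6.x API as I recall it, so treat them as illustrative):

{code}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class BuilderConstruction {
  public static void main(String[] args) throws Exception {
    // Plain HTTP client pointed at a single core/collection.
    try (SolrClient http =
        new HttpSolrClient.Builder("http://localhost:8983/solr/collection1").build()) {
      // ... use the client ...
    }

    // SolrCloud-aware client built from a ZooKeeper address.
    try (CloudSolrClient cloud =
        new CloudSolrClient.Builder().withZkHost("localhost:2181").build()) {
      cloud.setDefaultCollection("collection1");
      // ... use the client ...
    }
  }
}
{code}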

> Implement a builder pattern for constructing a Solrj client
> ---
>
> Key: SOLR-8097
> URL: https://issues.apache.org/jira/browse/SOLR-8097
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: 6.0
>Reporter: Hrishikesh Gadre
>Assignee: Anshum Gupta
> Fix For: 6.2, master (7.0)
>
> Attachments: SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch
>
>
> Currently Solrj clients (e.g. CloudSolrClient) support multiple constructors, 
> as follows:
> public CloudSolrClient(String zkHost) 
> public CloudSolrClient(String zkHost, HttpClient httpClient) 
> public CloudSolrClient(Collection<String> zkHosts, String chroot)
> public CloudSolrClient(Collection<String> zkHosts, String chroot, HttpClient 
> httpClient)
> public CloudSolrClient(String zkHost, boolean updatesToLeaders)
> public CloudSolrClient(String zkHost, boolean updatesToLeaders, HttpClient 
> httpClient)
> It is kind of problematic when introducing additional parameters (since we 
> need to introduce additional constructors). Instead it would be helpful to 
> provide a SolrClient Builder which can either supply default values or 
> support overriding specific parameters. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7439) Should FuzzyQuery match short terms too?

2016-09-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485333#comment-15485333
 ] 

ASF subversion and git services commented on LUCENE-7439:
-

Commit faf3bc3134c6e5ba3e2caa15762524872e083152 in lucene-solr's branch 
refs/heads/master from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=faf3bc3 ]

LUCENE-7439: clean up FuzzyQuery/FuzzyTermsEnum sources


> Should FuzzyQuery match short terms too?
> 
>
> Key: LUCENE-7439
> URL: https://issues.apache.org/jira/browse/LUCENE-7439
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.3
>
> Attachments: LUCENE-7439.patch, LUCENE-7439.patch
>
>
> Today, if you ask {{FuzzyQuery}} to match {{abcd}} with edit distance 2, it 
> will fail to match the term {{ab}} even though it's 2 edits away.
> Its javadocs explain this:
> {noformat}
>  * NOTE: terms of length 1 or 2 will sometimes not match because of how 
> the scaled
>  * distance between two terms is computed.  For a term to match, the edit 
> distance between
>  * the terms must be less than the minimum length term (either the input 
> term, or
>  * the candidate term).  For example, FuzzyQuery on term "abcd" with 
> maxEdits=2 will
>  * not match an indexed term "ab", and FuzzyQuery on term "a" with maxEdits=2 
> will not
>  * match an indexed term "abc".
> {noformat}
> On the one hand, I can see that this behavior is sort of justified in that 
> 50% of the characters are different and so this is a very "weak" match, but 
> on the other hand, it's quite unexpected since edit distance is such an exact 
> measure so the terms should have matched.
> It seems like the behavior is caused by internal implementation details about 
> how the relative (floating point) score is computed.  I think we should fix 
> it, so that edit distance 2 does in fact match all terms with edit distance 
> <= 2.
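
As a concrete illustration of the note quoted above, a fuzzy query built like 
this (the field name is made up) historically would not match the indexed term 
"ab", even though it is only two edits away from "abcd":

{code}
import org.apache.lucene.index.Term;
import org.apache.lucene.search.FuzzyQuery;
import org.apache.lucene.search.Query;

// maxEdits=2: before this change, candidate terms shorter than the edit
// distance (e.g. "ab" for the query term "abcd") could fail to match.
Query q = new FuzzyQuery(new Term("body", "abcd"), 2);
{code}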



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9497) HttpSolrClient.Builder Returns Unusable Connection

2016-09-12 Thread Will McGinnis (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485312#comment-15485312
 ] 

Will McGinnis commented on SOLR-9497:
-



private static HttpSolrClient connectSolrClient() {
    return new HttpSolrClient.Builder(urlString).build();
}

<dependency>
  <groupId>org.apache.solr</groupId>
  <artifactId>solr-solrj</artifactId>
  <version>6.2.0</version>
</dependency>

<dependency>
  <groupId>org.apache.solr</groupId>
  <artifactId>solr-core</artifactId>
  <version>6.2.0</version>
</dependency>

I also tried this: 

<dependency>
  <groupId>org.apache.httpcomponents</groupId>
  <artifactId>httpclient</artifactId>
  <version>4.5.2</version>
</dependency>




What else do you need? 

Thank you 

Will McGinnis 





> HttpSolrClient.Builder Returns Unusable Connection
> --
>
> Key: SOLR-9497
> URL: https://issues.apache.org/jira/browse/SOLR-9497
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java
>Affects Versions: 6.2
> Environment: Java 1.8 Mac OSX
>Reporter: Will McGinnis
>  Labels: SolrJ
> Fix For: 6.1.1
>
>
> SolrClient solr = new HttpSolrClient.Builder(urlString).build();
> Exception in thread "main" java.lang.VerifyError: Bad return type
> Exception Details:
>   Location:
>  
> org/apache/solr/client/solrj/impl/HttpClientUtil.createClient(Lorg/apache/solr/common/params/SolrParams;Lorg/apache/http/conn/ClientConnectionManager;)Lorg/apache/http/impl/client/CloseableHttpClient;
>  @58: areturn
>   Reason:
> Type 'org/apache/http/impl/client/DefaultHttpClient' (current frame, 
> stack[0]) is not assignable to 
> 'org/apache/http/impl/client/CloseableHttpClient' (from method signature)
>   Current Frame:
> bci: @58
> flags: { }
> locals: { 'org/apache/solr/common/params/SolrParams', 
> 'org/apache/http/conn/ClientConnectionManager', 
> 'org/apache/solr/common/params/ModifiableSolrParams', 
> 'org/apache/http/impl/client/DefaultHttpClient' }
> stack: { 'org/apache/http/impl/client/DefaultHttpClient' }
>   Bytecode:
> 0x000: bb00 0359 2ab7 0004 4db2 0005 b900 0601
> 0x010: 0099 001e b200 05bb 0007 59b7 0008 1209
> 0x020: b600 0a2c b600 0bb6 000c b900 0d02 002b
> 0x030: b800 104e 2d2c b800 0f2d b0
>   Stackmap Table:
> append_frame(@47,Object[#143])
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.<init>(HttpSolrClient.java:209)
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient$Builder.build(HttpSolrClient.java:874)
> I have tried upgrading to httpclient-4.5.2, which produces the same problem. 
> For now, I use this deprecated connection code:
> return new HttpSolrClient(urlString, new SystemDefaultHttpClient());
> Eventually this hangs the Solr server, because you run out of file handles; 
> I suspect calling solrClient.close() is doing nothing.
> I also tried not closing the client and using a static connection to Solr, 
> which eventually hangs the Solr server in the same way.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+134) - Build # 17808 - Unstable!

2016-09-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17808/
Java: 32bit/jdk-9-ea+134 -client -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://127.0.0.1:44477/c8n_1x3_lf_shard1_replica1]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://127.0.0.1:44477/c8n_1x3_lf_shard1_replica1]
at 
__randomizedtesting.SeedInfo.seed([7708C85C32BB768:8F24B35F6DD7DA90]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:769)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1161)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1050)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:992)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.sendDocsWithRetry(AbstractFullDistribZkTestBase.java:753)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.sendDocsWithRetry(AbstractFullDistribZkTestBase.java:741)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:178)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:57)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:535)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Commented] (SOLR-9504) A replica with an empty index becomes the leader even when other more qualified replicas are in line

2016-09-12 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485100#comment-15485100
 ] 

Shalin Shekhar Mangar commented on SOLR-9504:
-

FYI [~markrmil...@gmail.com], [~ysee...@gmail.com]

> A replica with an empty index becomes the leader even when other more 
> qualified replicas are in line
> 
>
> Key: SOLR-9504
> URL: https://issues.apache.org/jira/browse/SOLR-9504
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: master (7.0)
>Reporter: Shalin Shekhar Mangar
>Priority: Critical
>  Labels: impact-high
> Fix For: 6.3, master (7.0)
>
>
> I haven't tried branch_6x or any release yet. But this is trivially 
> reproducible on master with the following steps:
> # Start two solr nodes
> # Create a collection with 1 shard, 1 replica so that one node is empty.
> # Index some documents
> # Shutdown the leader node
> # Use addreplica API to create a replica of the collection on the 
> still-running node. For some reason this API hangs until you restart the 
> other node (possibly a bug itself) but do not wait for the API to complete.
> # Restart the former leader node
> You'll find that the replica with 0 docs has become the leader. The former 
> leader recovers from the leader without replicating any index files. It still 
> has the old index which has some docs.
> This is from the logs of the 0 doc replica:
> {code}
> 713102 INFO  (zkCallback-4-thread-5-processing-n:127.0.1.1:7574_solr) [   ] 
> o.a.s.c.c.ZkStateReader Updating data for [gettingstarted] from [9] to [10]
> 714377 INFO  (qtp110456297-15) [c:gettingstarted s:shard1 r:core_node2 
> x:gettingstarted_shard1_replica2] o.a.s.c.ShardLeaderElectionContext Enough 
> replicas found to continue.
> 714377 INFO  (qtp110456297-15) [c:gettingstarted s:shard1 r:core_node2 
> x:gettingstarted_shard1_replica2] o.a.s.c.ShardLeaderElectionContext I may be 
> the new leader - try and sync
> 714377 INFO  (qtp110456297-15) [c:gettingstarted s:shard1 r:core_node2 
> x:gettingstarted_shard1_replica2] o.a.s.c.SyncStrategy Sync replicas to 
> http://127.0.1.1:7574/solr/gettingstarted_shard1_replica2/
> 714380 INFO  (qtp110456297-15) [c:gettingstarted s:shard1 r:core_node2 
> x:gettingstarted_shard1_replica2] o.a.s.u.PeerSync PeerSync: 
> core=gettingstarted_shard1_replica2 url=http://127.0.1.1:7574/solr START 
> replicas=[http://127.0.1.1:8983/solr/gettingstarted_shard1_replica1/] 
> nUpdates=100
> 714381 INFO  (qtp110456297-15) [c:gettingstarted s:shard1 r:core_node2 
> x:gettingstarted_shard1_replica2] o.a.s.u.PeerSync PeerSync: 
> core=gettingstarted_shard1_replica2 url=http://127.0.1.1:7574/solr DONE.  We 
> have no versions.  sync failed.
> 714382 INFO  (qtp110456297-15) [c:gettingstarted s:shard1 r:core_node2 
> x:gettingstarted_shard1_replica2] o.a.s.c.SyncStrategy Leader's attempt to 
> sync with shard failed, moving to the next candidate
> 714382 INFO  (qtp110456297-15) [c:gettingstarted s:shard1 r:core_node2 
> x:gettingstarted_shard1_replica2] o.a.s.c.ShardLeaderElectionContext We 
> failed sync, but we have no versions - we can't sync in that case - we were 
> active before, so become leader anyway
> 714387 INFO  (qtp110456297-15) [c:gettingstarted s:shard1 r:core_node2 
> x:gettingstarted_shard1_replica2] o.a.s.c.ShardLeaderElectionContextBase 
> Creating leader registration node 
> /collections/gettingstarted/leaders/shard1/leader after winning as 
> /collections/gettingstarted/leader_elect/shard1/election/96579592334475268-core_node2-n_01
> 714398 INFO  (qtp110456297-15) [c:gettingstarted s:shard1 r:core_node2 
> x:gettingstarted_shard1_replica2] o.a.s.c.ShardLeaderElectionContext I am the 
> new leader: http://127.0.1.1:7574/solr/gettingstarted_shard1_replica2/ shard1
> {code}
> It basically tries to sync but has no versions and because it was active 
> before (it is a new core starting up for the first time), it becomes the 
> leader and publishes itself as active.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7425) poll-mirrors.pl requires additional perl packages?

2016-09-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485099#comment-15485099
 ] 

ASF subversion and git services commented on LUCENE-7425:
-

Commit ab5afedd55340c6d332131ca66c32cbd24508fbe in lucene-solr's branch 
refs/heads/branch_6x from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ab5afed ]

LUCENE-7425: Port -path and -details options from the Perl version, and a 
couple other minor cleanups


> poll-mirrors.pl requires additional perl packages?
> --
>
> Key: LUCENE-7425
> URL: https://issues.apache.org/jira/browse/LUCENE-7425
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
> Fix For: 6.x, master (7.0)
>
> Attachments: LUCENE-7425-add-path-and-details-options.patch, 
> LUCENE-7425-add-path-and-details-options.patch, LUCENE-7425.patch
>
>
> I have a newish Ubuntu 16.04.1 install ... and I'm doing the Lucene/Solr 
> 6.2.0 release on it.
> Our release process is already hard enough.
> When I get to the step to poll the mirrors to see whether Maven central and 
> the apache mirrors have the release bits yet, I hit this:
> {noformat}
> 14:51 $ perl ../dev-tools/scripts/poll-mirrors.pl -version 6.2.0
> perl ../dev-tools/scripts/poll-mirrors.pl -version 6.2.0
> Can't locate LWP/UserAgent.pm in @INC (you may need to install the 
> LWP::UserAgent module) (@INC contains: /etc/perl 
> /usr/local/lib/x86_64-linux-gnu/perl/5.22.1 /usr/local/share/perl/5.22.1 
> /usr/lib/x86_64-linux-gnu/perl5/5.22 /usr/share/perl5 
> /usr/lib/x86_64-linux-gnu/perl/5.22 /usr/share/perl/5.22 
> /usr/local/lib/site_perl /usr/lib/x86_64-linux-gnu/perl-base .) at 
> ../dev-tools/scripts/poll-mirrors.pl line 31.
> BEGIN failed--compilation aborted at ../dev-tools/scripts/poll-mirrors.pl 
> line 31.
> {noformat}
> How can it be that such a trivial script would need optional perl packages 
> installed?  It seems all it's trying to do is download stuff over HTTP at 
> this point?
> So I fire up {{cpan}}, asking it to install {{LWP/UserAgent.pm}} and it hits 
> all sorts of errors that I cannot understand.
> Can we somehow simplify this script to use mere mortal perl packages?  Or is 
> something badly wrong with my Ubuntu install?  Maybe we should rewrite this 
> in a proper scripting language that has batteries included and also starts 
> with P ;)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7425) poll-mirrors.pl requires additional perl packages?

2016-09-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485101#comment-15485101
 ] 

ASF subversion and git services commented on LUCENE-7425:
-

Commit 541a8fa13d82c85dd2c0baab4dfda43f961decd4 in lucene-solr's branch 
refs/heads/master from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=541a8fa ]

LUCENE-7425: Port -path and -details options from the Perl version, and a 
couple other minor cleanups


> poll-mirrors.pl requires additional perl packages?
> --
>
> Key: LUCENE-7425
> URL: https://issues.apache.org/jira/browse/LUCENE-7425
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
> Fix For: 6.x, master (7.0)
>
> Attachments: LUCENE-7425-add-path-and-details-options.patch, 
> LUCENE-7425-add-path-and-details-options.patch, LUCENE-7425.patch
>
>
> I have a newish Ubuntu 16.04.1 install ... and I'm doing the Lucene/Solr 
> 6.2.0 release on it.
> Our release process is already hard enough.
> When I get to the step to poll the mirrors to see whether Maven central and 
> the apache mirrors have the release bits yet, I hit this:
> {noformat}
> 14:51 $ perl ../dev-tools/scripts/poll-mirrors.pl -version 6.2.0
> perl ../dev-tools/scripts/poll-mirrors.pl -version 6.2.0
> Can't locate LWP/UserAgent.pm in @INC (you may need to install the 
> LWP::UserAgent module) (@INC contains: /etc/perl 
> /usr/local/lib/x86_64-linux-gnu/perl/5.22.1 /usr/local/share/perl/5.22.1 
> /usr/lib/x86_64-linux-gnu/perl5/5.22 /usr/share/perl5 
> /usr/lib/x86_64-linux-gnu/perl/5.22 /usr/share/perl/5.22 
> /usr/local/lib/site_perl /usr/lib/x86_64-linux-gnu/perl-base .) at 
> ../dev-tools/scripts/poll-mirrors.pl line 31.
> BEGIN failed--compilation aborted at ../dev-tools/scripts/poll-mirrors.pl 
> line 31.
> {noformat}
> How can it be that such a trivial script would need optional perl packages 
> installed?  It seems all it's trying to do is download stuff over HTTP at 
> this point?
> So I fire up {{cpan}}, asking it to install {{LWP/UserAgent.pm}} and it hits 
> all sorts of errors that I cannot understand.
> Can we somehow simplify this script to use mere mortal perl packages?  Or is 
> something badly wrong with my Ubuntu install?  Maybe we should rewrite this 
> in a proper scripting language that has batteries included and also starts 
> with P ;)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9504) A replica with an empty index becomes the leader even when other more qualified replicas are in line

2016-09-12 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-9504:
---

 Summary: A replica with an empty index becomes the leader even 
when other more qualified replicas are in line
 Key: SOLR-9504
 URL: https://issues.apache.org/jira/browse/SOLR-9504
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
Affects Versions: master (7.0)
Reporter: Shalin Shekhar Mangar
Priority: Critical
 Fix For: 6.3, master (7.0)


I haven't tried branch_6x or any release yet. But this is trivially 
reproducible on master with the following steps:
# Start two solr nodes
# Create a collection with 1 shard, 1 replica so that one node is empty.
# Index some documents
# Shutdown the leader node
# Use addreplica API to create a replica of the collection on the still-running 
node. For some reason this API hangs until you restart the other node (possibly 
a bug itself) but do not wait for the API to complete.
# Restart the former leader node

You'll find that the replica with 0 docs has become the leader. The former 
leader recovers from the leader without replicating any index files. It still 
has the old index which has some docs.

This is from the logs of the 0 doc replica:
{code}
713102 INFO  (zkCallback-4-thread-5-processing-n:127.0.1.1:7574_solr) [   ] 
o.a.s.c.c.ZkStateReader Updating data for [gettingstarted] from [9] to [10]
714377 INFO  (qtp110456297-15) [c:gettingstarted s:shard1 r:core_node2 
x:gettingstarted_shard1_replica2] o.a.s.c.ShardLeaderElectionContext Enough 
replicas found to continue.
714377 INFO  (qtp110456297-15) [c:gettingstarted s:shard1 r:core_node2 
x:gettingstarted_shard1_replica2] o.a.s.c.ShardLeaderElectionContext I may be 
the new leader - try and sync
714377 INFO  (qtp110456297-15) [c:gettingstarted s:shard1 r:core_node2 
x:gettingstarted_shard1_replica2] o.a.s.c.SyncStrategy Sync replicas to 
http://127.0.1.1:7574/solr/gettingstarted_shard1_replica2/
714380 INFO  (qtp110456297-15) [c:gettingstarted s:shard1 r:core_node2 
x:gettingstarted_shard1_replica2] o.a.s.u.PeerSync PeerSync: 
core=gettingstarted_shard1_replica2 url=http://127.0.1.1:7574/solr START 
replicas=[http://127.0.1.1:8983/solr/gettingstarted_shard1_replica1/] 
nUpdates=100
714381 INFO  (qtp110456297-15) [c:gettingstarted s:shard1 r:core_node2 
x:gettingstarted_shard1_replica2] o.a.s.u.PeerSync PeerSync: 
core=gettingstarted_shard1_replica2 url=http://127.0.1.1:7574/solr DONE.  We 
have no versions.  sync failed.
714382 INFO  (qtp110456297-15) [c:gettingstarted s:shard1 r:core_node2 
x:gettingstarted_shard1_replica2] o.a.s.c.SyncStrategy Leader's attempt to sync 
with shard failed, moving to the next candidate
714382 INFO  (qtp110456297-15) [c:gettingstarted s:shard1 r:core_node2 
x:gettingstarted_shard1_replica2] o.a.s.c.ShardLeaderElectionContext We failed 
sync, but we have no versions - we can't sync in that case - we were active 
before, so become leader anyway
714387 INFO  (qtp110456297-15) [c:gettingstarted s:shard1 r:core_node2 
x:gettingstarted_shard1_replica2] o.a.s.c.ShardLeaderElectionContextBase 
Creating leader registration node 
/collections/gettingstarted/leaders/shard1/leader after winning as 
/collections/gettingstarted/leader_elect/shard1/election/96579592334475268-core_node2-n_01
714398 INFO  (qtp110456297-15) [c:gettingstarted s:shard1 r:core_node2 
x:gettingstarted_shard1_replica2] o.a.s.c.ShardLeaderElectionContext I am the 
new leader: http://127.0.1.1:7574/solr/gettingstarted_shard1_replica2/ shard1
{code}

It basically tries to sync but has no versions and because it was active before 
(it is a new core starting up for the first time), it becomes the leader and 
publishes itself as active.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8097) Implement a builder pattern for constructing a Solrj client

2016-09-12 Thread Perrin Bignoli (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485083#comment-15485083
 ] 

Perrin Bignoli commented on SOLR-8097:
--

What was the result of that discussion?

I am interested in creating a subclass of CloudSolrClient.  I don't see how 
that is possible with the current code, since there are only private 
constructors.  Other *SolrClient classes appear to have a protected "Builder" 
constructor.  They also have external Builder classes (at least on Master).  Is 
there a reason why CloudSolrClient is set up to prevent subclassing?  Please 
let me know if I am missing something obvious.

Also, that discussion does not involve member variable visibility, although 
that is probably outside of the scope of this particular ticket.

> Implement a builder pattern for constructing a Solrj client
> ---
>
> Key: SOLR-8097
> URL: https://issues.apache.org/jira/browse/SOLR-8097
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: 6.0
>Reporter: Hrishikesh Gadre
>Assignee: Anshum Gupta
> Fix For: 6.2, master (7.0)
>
> Attachments: SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch
>
>
> Currently Solrj clients (e.g. CloudSolrClient) support multiple constructors, 
> as follows:
> public CloudSolrClient(String zkHost) 
> public CloudSolrClient(String zkHost, HttpClient httpClient) 
> public CloudSolrClient(Collection<String> zkHosts, String chroot)
> public CloudSolrClient(Collection<String> zkHosts, String chroot, HttpClient 
> httpClient)
> public CloudSolrClient(String zkHost, boolean updatesToLeaders)
> public CloudSolrClient(String zkHost, boolean updatesToLeaders, HttpClient 
> httpClient)
> It is kind of problematic when introducing additional parameters (since we 
> need to introduce additional constructors). Instead it would be helpful to 
> provide a SolrClient Builder which can either supply default values or 
> support overriding specific parameters. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7425) poll-mirrors.pl requires additional perl packages?

2016-09-12 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-7425:
---
Attachment: LUCENE-7425-add-path-and-details-options.patch

Thanks for testing, [~ctargett].

I tested all permutations of options (-path with/without -details, -version 
with/without -details), and with both python 3 and python 2 - everything seems 
to work.

The attached version of the patch makes one more change: I switched 
{{maven_available and ' ' or ' not '}} (which looks like a submission to a 
short-form obfuscated Python contest) to the standard Python ternary {{' ' if 
maven_available else ' not '}} in:

{code}
p('\n\n{} is{}downloadable from Maven Central'.format(label, maven_available 
and ' ' or ' not '))
{code}

Committing shortly.

> poll-mirrors.pl requires additional perl packages?
> --
>
> Key: LUCENE-7425
> URL: https://issues.apache.org/jira/browse/LUCENE-7425
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
> Fix For: 6.x, master (7.0)
>
> Attachments: LUCENE-7425-add-path-and-details-options.patch, 
> LUCENE-7425-add-path-and-details-options.patch, LUCENE-7425.patch
>
>
> I have a newish Ubuntu 16.04.1 install ... and I'm doing the Lucene/Solr 
> 6.2.0 release on it.
> Our release process is already hard enough.
> When I get to the step to poll the mirrors to see whether Maven central and 
> the apache mirrors have the release bits yet, I hit this:
> {noformat}
> 14:51 $ perl ../dev-tools/scripts/poll-mirrors.pl -version 6.2.0
> perl ../dev-tools/scripts/poll-mirrors.pl -version 6.2.0
> Can't locate LWP/UserAgent.pm in @INC (you may need to install the 
> LWP::UserAgent module) (@INC contains: /etc/perl 
> /usr/local/lib/x86_64-linux-gnu/perl/5.22.1 /usr/local/share/perl/5.22.1 
> /usr/lib/x86_64-linux-gnu/perl5/5.22 /usr/share/perl5 
> /usr/lib/x86_64-linux-gnu/perl/5.22 /usr/share/perl/5.22 
> /usr/local/lib/site_perl /usr/lib/x86_64-linux-gnu/perl-base .) at 
> ../dev-tools/scripts/poll-mirrors.pl line 31.
> BEGIN failed--compilation aborted at ../dev-tools/scripts/poll-mirrors.pl 
> line 31.
> {noformat}
> How can it be that such a trivial script would need optional perl packages 
> installed?  It seems all it's trying to do is download stuff over HTTP at 
> this point?
> So I fire up {{cpan}}, asking it to install {{LWP/UserAgent.pm}} and it hits 
> all sorts of errors that I cannot understand.
> Can we somehow simplify this script to use mere mortal perl packages?  Or is 
> something badly wrong with my Ubuntu install?  Maybe we should rewrite this 
> in a proper scripting language that has batteries included and also starts 
> with P ;)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7425) poll-mirrors.pl requires additional perl packages?

2016-09-12 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485014#comment-15485014
 ] 

Cassandra Targett commented on LUCENE-7425:
---

+1 [~steve_rowe]. I tried out the patch to poll mirrors for the Solr Ref Guide 
release, and using the path options works the same as before. Thanks.

> poll-mirrors.pl requires additional perl packages?
> --
>
> Key: LUCENE-7425
> URL: https://issues.apache.org/jira/browse/LUCENE-7425
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
> Fix For: 6.x, master (7.0)
>
> Attachments: LUCENE-7425-add-path-and-details-options.patch, 
> LUCENE-7425.patch
>
>
> I have a newish Ubuntu 16.04.1 install ... and I'm doing the Lucene/Solr 
> 6.2.0 release on it.
> Our release process is already hard enough.
> When I get to the step to poll the mirrors to see whether Maven central and 
> the apache mirrors have the release bits yet, I hit this:
> {noformat}
> 14:51 $ perl ../dev-tools/scripts/poll-mirrors.pl -version 6.2.0
> perl ../dev-tools/scripts/poll-mirrors.pl -version 6.2.0
> Can't locate LWP/UserAgent.pm in @INC (you may need to install the 
> LWP::UserAgent module) (@INC contains: /etc/perl 
> /usr/local/lib/x86_64-linux-gnu/perl/5.22.1 /usr/local/share/perl/5.22.1 
> /usr/lib/x86_64-linux-gnu/perl5/5.22 /usr/share/perl5 
> /usr/lib/x86_64-linux-gnu/perl/5.22 /usr/share/perl/5.22 
> /usr/local/lib/site_perl /usr/lib/x86_64-linux-gnu/perl-base .) at 
> ../dev-tools/scripts/poll-mirrors.pl line 31.
> BEGIN failed--compilation aborted at ../dev-tools/scripts/poll-mirrors.pl 
> line 31.
> {noformat}
> How can it be that such a trivial script would need optional perl packages 
> installed?  It seems all it's trying to do is download stuff over HTTP at 
> this point?
> So I fire up {{cpan}}, asking it to install {{LWP/UserAgent.pm}} and it hits 
> all sorts of errors that I cannot understand.
> Can we somehow simplify this script to use mere mortal perl packages?  Or is 
> something badly wrong with my Ubuntu install?  Maybe we should rewrite this 
> in a proper scripting language that has batteries included and also starts 
> with P ;)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-9365) Reduce noise in solr logs during graceful shutdown

2016-09-12 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-9365.
-
   Resolution: Fixed
Fix Version/s: (was: 6.2)
   6.3

I added another such check in ZkController.runLeaderProcess() method.

Thanks Dat!

> Reduce noise in solr logs during graceful shutdown
> --
>
> Key: SOLR-9365
> URL: https://issues.apache.org/jira/browse/SOLR-9365
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Cao Manh Dat
>Priority: Minor
>  Labels: difficulty-easy, impact-low, newdev
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9365.patch
>
>
> There is too much unnecessary logging of exceptions during a graceful 
> shutdown. This is mostly due to:
> # Watcher invocations fired after the zk callback executor is shut down, and
> # Session expiry because of zkclient or embedded zk server shutdown
> We should add a simple check for such conditions to reduce noise in our logs.
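
For readers skimming the issue, a minimal sketch of the kind of guard being described (the class, field and method names here are illustrative, not the actual patch):

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative only: drop late callbacks quietly once shutdown has begun,
// instead of letting them fail and log session-expiry exceptions.
public class ShutdownAwareCallbackHandler {
  private final ExecutorService zkCallbackExecutor = Executors.newSingleThreadExecutor();
  private volatile boolean closed = false;

  public void onWatchedEvent(String event) {
    if (closed || zkCallbackExecutor.isShutdown()) {
      return; // shutting down; nothing useful to do, and nothing worth logging loudly
    }
    zkCallbackExecutor.submit(() -> handle(event));
  }

  private void handle(String event) { /* normal watcher processing */ }

  public void close() {
    closed = true;
    zkCallbackExecutor.shutdown();
  }
}
{code}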



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9365) Reduce noise in solr logs during graceful shutdown

2016-09-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485003#comment-15485003
 ] 

ASF subversion and git services commented on SOLR-9365:
---

Commit 47a85502085e75493576bb805d62d493c9025ed8 in lucene-solr's branch 
refs/heads/branch_6x from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=47a8550 ]

SOLR-9365: Reduce noise in solr logs during graceful shutdown

(cherry picked from commit 3fe1486)


> Reduce noise in solr logs during graceful shutdown
> --
>
> Key: SOLR-9365
> URL: https://issues.apache.org/jira/browse/SOLR-9365
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Cao Manh Dat
>Priority: Minor
>  Labels: difficulty-easy, impact-low, newdev
> Fix For: 6.2, master (7.0)
>
> Attachments: SOLR-9365.patch
>
>
> There is too much unnecessary logging of exceptions during a graceful 
> shutdown. This is mostly due to:
> # Watcher invocations fired after the zk callback executor is shut down, and
> # Session expiry because of zkclient or embedded zk server shutdown
> We should add a simple check for such conditions to reduce noise in our logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9365) Reduce noise in solr logs during graceful shutdown

2016-09-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484999#comment-15484999
 ] 

ASF subversion and git services commented on SOLR-9365:
---

Commit 3fe14866838a9939a940b954fd97b8ad9be2226e in lucene-solr's branch 
refs/heads/master from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3fe1486 ]

SOLR-9365: Reduce noise in solr logs during graceful shutdown


> Reduce noise in solr logs during graceful shutdown
> --
>
> Key: SOLR-9365
> URL: https://issues.apache.org/jira/browse/SOLR-9365
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Cao Manh Dat
>Priority: Minor
>  Labels: difficulty-easy, impact-low, newdev
> Fix For: 6.2, master (7.0)
>
> Attachments: SOLR-9365.patch
>
>
> There is too much unnecessary logging of exceptions during a graceful 
> shutdown. This is mostly due to:
> # Watcher invocations fired after the zk callback executor is shut down, and
> # Session expiry because of zkclient or embedded zk server shutdown
> We should add a simple check for such conditions to reduce noise in our logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7440) Document skipping on large indexes is broken

2016-09-12 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484994#comment-15484994
 ] 

Yonik Seeley commented on LUCENE-7440:
--

bq. Would this be faster to test if we configure a larger top-level skip 
distance?

The top-level skip distance sort of falls out from other factors, rather than 
being explicitly configured.
For a quicker, more thorough test, it would probably be good to somehow test 
the skip list logic itself without having it backed by an actual index.  Even with 
that, I think it's a good idea to also test real indexes.
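
As a side note for anyone reproducing this, the failure mode is a plain int overflow; here is a tiny self-contained demonstration (the variable names and expressions are illustrative, not the actual MultiLevelSkipListReader arithmetic):

{code}
// Demonstrates why pointer/offset math breaks once a single segment passes ~1.8B docs:
// the intermediate product overflows int before any widening to long happens.
public class SkipOverflowDemo {
  public static void main(String[] args) {
    int docCount = 1_900_000_000;   // more docs than Integer.MAX_VALUE / skipInterval
    int skipInterval = 128;

    long buggy = docCount * skipInterval;          // overflows in int, then widens
    long fixed = (long) docCount * skipInterval;   // widen first, as the fix does

    System.out.println("buggy = " + buggy);        // negative garbage
    System.out.println("fixed = " + fixed);        // 243200000000
  }
}
{code}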

> Document skipping on large indexes is broken
> 
>
> Key: LUCENE-7440
> URL: https://issues.apache.org/jira/browse/LUCENE-7440
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 2.2
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Critical
> Fix For: master (7.0), 6.3, 6.2.1
>
> Attachments: LUCENE-7440.patch, LUCENE-7440.patch
>
>
> Large skips on large indexes fail.
> Anything that uses skips (such as a boolean query, filtered queries, faceted 
> queries, join queries, etc) can trigger this bug on a sufficiently large 
> index.
> The bug is a numeric overflow in MultiLevelSkipList that has been present 
> since inception (Lucene 2.2).  It may not manifest until one has a single 
> segment with more than ~1.8B documents, and a large skip is performed on that 
> segment.
> Typical stack trace on Lucene7-dev:
> {code}
> java.lang.ArrayIndexOutOfBoundsException: 110
>   at 
> org.apache.lucene.codecs.MultiLevelSkipListReader$SkipBuffer.readByte(MultiLevelSkipListReader.java:297)
>   at org.apache.lucene.store.DataInput.readVInt(DataInput.java:125)
>   at 
> org.apache.lucene.codecs.lucene50.Lucene50SkipReader.readSkipData(Lucene50SkipReader.java:180)
>   at 
> org.apache.lucene.codecs.MultiLevelSkipListReader.loadNextSkip(MultiLevelSkipListReader.java:163)
>   at 
> org.apache.lucene.codecs.MultiLevelSkipListReader.skipTo(MultiLevelSkipListReader.java:133)
>   at 
> org.apache.lucene.codecs.lucene50.Lucene50PostingsReader$BlockDocsEnum.advance(Lucene50PostingsReader.java:421)
>   at YCS_skip7$1.testSkip(YCS_skip7.java:307)
> {code}
> Typical stack trace on Lucene4.10.3:
> {code}
> 6-08-31 18:57:17,460 ERROR org.apache.solr.servlet.SolrDispatchFilter: 
> null:java.lang.ArrayIndexOutOfBoundsException: 75
>  at 
> org.apache.lucene.codecs.MultiLevelSkipListReader$SkipBuffer.readByte(MultiLevelSkipListReader.java:301)
>  at org.apache.lucene.store.DataInput.readVInt(DataInput.java:122)
>  at 
> org.apache.lucene.codecs.lucene41.Lucene41SkipReader.readSkipData(Lucene41SkipReader.java:194)
>  at 
> org.apache.lucene.codecs.MultiLevelSkipListReader.loadNextSkip(MultiLevelSkipListReader.java:168)
>  at 
> org.apache.lucene.codecs.MultiLevelSkipListReader.skipTo(MultiLevelSkipListReader.java:138)
>  at 
> org.apache.lucene.codecs.lucene41.Lucene41PostingsReader$BlockDocsEnum.advance(Lucene41PostingsReader.java:506)
>  at org.apache.lucene.search.TermScorer.advance(TermScorer.java:85)
> [...]
>  at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:621)
> [...]
>  at org.apache.solr.core.SolrCore.execute(SolrCore.java:2004)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9493) uniqueKey generation fails if content POSTed as "application/javabin" and uniqueKey field comes as NULL (as opposed to not coming at all).

2016-09-12 Thread Yury Kartsev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Kartsev updated SOLR-9493:
---
Summary: uniqueKey generation fails if content POSTed as 
"application/javabin" and uniqueKey field comes as NULL (as opposed to not 
coming at all).  (was: uniqueKey generation fails if content POSTed as 
"application/javabin".)

> uniqueKey generation fails if content POSTed as "application/javabin" and 
> uniqueKey field comes as NULL (as opposed to not coming at all).
> --
>
> Key: SOLR-9493
> URL: https://issues.apache.org/jira/browse/SOLR-9493
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yury Kartsev
> Attachments: 200.png, 400.png, Screen Shot 2016-09-11 at 16.29.50 
> .png, SolrInputDoc_contents.png, SolrInputDoc_headers.png
>
>
> I have faced a weird issue when the same application code (using SolrJ) fails 
> indexing a document without a unique key (should be auto-generated by SOLR) 
> in SolrCloud and succeeds indexing it in standalone SOLR instance (or even in 
> cloud mode, but from web interface of one of the replicas). Difference is 
> obviously only between clients (CloudSolrClient vs HttpSolrClient) and SOLR 
> URLs (ZooKeeper hostname+port vs standalone SOLR instance hostname and port). 
> Failure is seen as "org.apache.solr.client.solrj.SolrServerException: 
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: 
> Document is missing mandatory uniqueKey field: id".
> I am using SOLR 5.1. In cloud mode I have 1 shard and 3 replicas.
> After a lot of debugging and investigation (see below as well as my 
> [StackOverflow 
> post|http://stackoverflow.com/questions/39401792/uniquekey-generation-does-not-work-in-solrcloud-but-works-if-standalone])
>  I came to a conclusion that the difference in failing and succeeding calls 
> is simply content type of the POSTing requests. Local proxy clearly shows 
> that the request fails if content is sent as "application/javabin" (see 
> attached screenshot with sensitive data removed) and succeeds if content sent 
> as "application/xml; charset=UTF-8"  (see attached screenshot with sensitive 
> data removed).
> Would you be able to please assist?
> Thank you very much in advance!
> 
> Copying whole description and investigation here as well:
> 
> [Documentation|https://cwiki.apache.org/confluence/display/solr/Other+Schema+Elements]
>  states:{quote}Schema defaults and copyFields cannot be used to populate the 
> uniqueKey field. You can use UUIDUpdateProcessorFactory to have uniqueKey 
> values generated automatically.{quote}
> Therefore I have added my uniqueKey field to the schema:{code}<fieldType name="uuid" class="solr.UUIDField" indexed="true" />
> ...
> <field name="id" type="uuid" indexed="true" stored="true" required="true" />
> ...
> <uniqueKey>id</uniqueKey>{code}Then I have added updateRequestProcessorChain 
> to my solrconfig:{code}<updateRequestProcessorChain name="uuid">
>   <processor class="solr.UUIDUpdateProcessorFactory">
>     <str name="fieldName">id</str>
>   </processor>
>   <processor class="solr.RunUpdateProcessorFactory" />
> </updateRequestProcessorChain>{code}And made it the default for the 
> UpdateRequestHandler:{code}<requestHandler name="/update" class="solr.UpdateRequestHandler">
>   <lst name="defaults">
>     <str name="update.chain">uuid</str>
>   </lst>
> </requestHandler>{code}
> Adding new documents with a null/absent id works fine both from the web interface of 
> one of the replicas and when using SOLR in standalone mode (non-cloud) from 
> my application. However, when I'm using SolrCloud and add a document from 
> my application (using CloudSolrClient from SolrJ) it fails with 
> "org.apache.solr.client.solrj.SolrServerException: 
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: 
> Document is missing mandatory uniqueKey field: id"
> All other operations like ping or search for documents work fine in either 
> mode (standalone or cloud).
> INVESTIGATION (i.e. more details):
> In standalone mode obviously update request is:{code}POST 
> standalone_host:port/solr/collection_name/update?wt=json{code}
> In SOLR cloud mode, when adding document from one replica's web interface, 
> update request is (found through inspecting the call made by web interface): 
> {code}POST 
> replica_host:port/solr/collection_name_shard1_replica_1/update?wt=json{code}
> In both these cases payload is something like:{code}{
> "add": {
> "doc": {
>  .
> },
> "boost": 1.0,
> "overwrite": true,
> "commitWithin": 1000
> }
> }{code}
> In case when CloudSolrClient is used, the following happens (found through 
> debugging):
> Using ZK and some logic, URL list of replicas is constructed that looks like 
> this:{code}[http://replica_1_host:port/solr/collection_name/,
>  http://replica_2_host:port/solr/collection_name/,
>  http://replica_3_host:port/solr/collection_name/]{code}
> This code is called:{code}LBHttpSolrClient.Req req = new 
> LBHttpSolrClient.Req(request, 

[jira] [Comment Edited] (SOLR-9493) uniqueKey generation fails if content POSTed as "application/javabin".

2016-09-12 Thread Yury Kartsev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484983#comment-15484983
 ] 

Yury Kartsev edited comment on SOLR-9493 at 9/12/16 7:02 PM:
-

Yes! I think I've found the exact issue :) As soon as I add a NULL field that is my 
uniqueKey, that error occurs even if I'm using 
SolrInputDocument:{code}doc.addField("id", null);{code}{code}Document is 
missing mandatory uniqueKey field: id{code}

I think this is definitely a bug in handling an @Field value for the uniqueKey that 
comes in as null. In this case it should be auto-generated instead of producing an 
error.


was (Author: jpro@gmail.com):
Yes! I think I've found the exact issue :) If only I add NULL field that is my 
uniqueKey, that error occurs:{code}doc.addField("id", 
null);{code}{code}Document is missing mandatory uniqueKey field: id{code}

I think this is definitely a bug in handling @Field value for uniqueKey that is 
coming as null. In this case it should be auto-generated instead of giving an 
error.

> uniqueKey generation fails if content POSTed as "application/javabin".
> --
>
> Key: SOLR-9493
> URL: https://issues.apache.org/jira/browse/SOLR-9493
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yury Kartsev
> Attachments: 200.png, 400.png, Screen Shot 2016-09-11 at 16.29.50 
> .png, SolrInputDoc_contents.png, SolrInputDoc_headers.png
>
>
> I have faced a weird issue when the same application code (using SolrJ) fails 
> indexing a document without a unique key (should be auto-generated by SOLR) 
> in SolrCloud and succeeds indexing it in standalone SOLR instance (or even in 
> cloud mode, but from web interface of one of the replicas). Difference is 
> obviously only between clients (CloudSolrClient vs HttpSolrClient) and SOLR 
> URLs (ZooKeeper hostname+port vs standalone SOLR instance hostname and port). 
> Failure is seen as "org.apache.solr.client.solrj.SolrServerException: 
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: 
> Document is missing mandatory uniqueKey field: id".
> I am using SOLR 5.1. In cloud mode I have 1 shard and 3 replicas.
> After a lot of debugging and investigation (see below as well as my 
> [StackOverflow 
> post|http://stackoverflow.com/questions/39401792/uniquekey-generation-does-not-work-in-solrcloud-but-works-if-standalone])
>  I came to a conclusion that the difference in failing and succeeding calls 
> is simply content type of the POSTing requests. Local proxy clearly shows 
> that the request fails if content is sent as "application/javabin" (see 
> attached screenshot with sensitive data removed) and succeeds if content sent 
> as "application/xml; charset=UTF-8"  (see attached screenshot with sensitive 
> data removed).
> Would you be able to please assist?
> Thank you very much in advance!
> 
> Copying whole description and investigation here as well:
> 
> [Documentation|https://cwiki.apache.org/confluence/display/solr/Other+Schema+Elements]
>  states:{quote}Schema defaults and copyFields cannot be used to populate the 
> uniqueKey field. You can use UUIDUpdateProcessorFactory to have uniqueKey 
> values generated automatically.{quote}
> Therefore I have added my uniqueKey field to the schema:{code}<fieldType name="uuid" class="solr.UUIDField" indexed="true" />
> ...
> <field name="id" type="uuid" indexed="true" stored="true" required="true" />
> ...
> <uniqueKey>id</uniqueKey>{code}Then I have added updateRequestProcessorChain 
> to my solrconfig:{code}<updateRequestProcessorChain name="uuid">
>   <processor class="solr.UUIDUpdateProcessorFactory">
>     <str name="fieldName">id</str>
>   </processor>
>   <processor class="solr.RunUpdateProcessorFactory" />
> </updateRequestProcessorChain>{code}And made it the default for the 
> UpdateRequestHandler:{code}<requestHandler name="/update" class="solr.UpdateRequestHandler">
>   <lst name="defaults">
>     <str name="update.chain">uuid</str>
>   </lst>
> </requestHandler>{code}
> Adding new documents with a null/absent id works fine both from the web interface of 
> one of the replicas and when using SOLR in standalone mode (non-cloud) from 
> my application. However, when I'm using SolrCloud and add a document from 
> my application (using CloudSolrClient from SolrJ) it fails with 
> "org.apache.solr.client.solrj.SolrServerException: 
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: 
> Document is missing mandatory uniqueKey field: id"
> All other operations like ping or search for documents work fine in either 
> mode (standalone or cloud).
> INVESTIGATION (i.e. more details):
> In standalone mode obviously update request is:{code}POST 
> standalone_host:port/solr/collection_name/update?wt=json{code}
> In SOLR cloud mode, when adding document from one replica's web interface, 
> update request is (found through inspecting the call made by web interface): 
> {code}POST 
> replica_host:port/solr/collection_name_shard1_replica_1/update?wt=json{code}
> In both these cases payload is something like:{code}{
> "add": {
> "doc": {
>  .
> },
> 

[jira] [Commented] (SOLR-9493) uniqueKey generation fails if content POSTed as "application/javabin".

2016-09-12 Thread Yury Kartsev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484983#comment-15484983
 ] 

Yury Kartsev commented on SOLR-9493:


Yes! I think I've found the exact issue :) As soon as I add a NULL field that is my 
uniqueKey, that error occurs:{code}doc.addField("id", 
null);{code}{code}Document is missing mandatory uniqueKey field: id{code}

I think this is definitely a bug in handling an @Field value for the uniqueKey that 
comes in as null. In this case it should be auto-generated instead of producing an 
error.
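
In the meantime, a simple client-side workaround (a sketch based on the snippets above) is to leave the uniqueKey out of the SolrInputDocument entirely when there is no value, rather than adding it as null, so UUIDUpdateProcessorFactory can generate it server-side:

{code}
SolrInputDocument doc = new SolrInputDocument();
String id = entity.getId();        // hypothetical getter on the bean
if (id != null) {
  doc.addField("id", id);          // only add the uniqueKey when it actually has a value
}
// ... add the remaining fields as usual ...
cloudSolrClient.add(doc);
{code}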

> uniqueKey generation fails if content POSTed as "application/javabin".
> --
>
> Key: SOLR-9493
> URL: https://issues.apache.org/jira/browse/SOLR-9493
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yury Kartsev
> Attachments: 200.png, 400.png, Screen Shot 2016-09-11 at 16.29.50 
> .png, SolrInputDoc_contents.png, SolrInputDoc_headers.png
>
>
> I have faced a weird issue when the same application code (using SolrJ) fails 
> indexing a document without a unique key (should be auto-generated by SOLR) 
> in SolrCloud and succeeds indexing it in standalone SOLR instance (or even in 
> cloud mode, but from web interface of one of the replicas). Difference is 
> obviously only between clients (CloudSolrClient vs HttpSolrClient) and SOLR 
> URLs (ZooKeeper hostname+port vs standalone SOLR instance hostname and port). 
> Failure is seen as "org.apache.solr.client.solrj.SolrServerException: 
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: 
> Document is missing mandatory uniqueKey field: id".
> I am using SOLR 5.1. In cloud mode I have 1 shard and 3 replicas.
> After a lot of debugging and investigation (see below as well as my 
> [StackOverflow 
> post|http://stackoverflow.com/questions/39401792/uniquekey-generation-does-not-work-in-solrcloud-but-works-if-standalone])
>  I came to a conclusion that the difference in failing and succeeding calls 
> is simply content type of the POSTing requests. Local proxy clearly shows 
> that the request fails if content is sent as "application/javabin" (see 
> attached screenshot with sensitive data removed) and succeeds if content sent 
> as "application/xml; charset=UTF-8"  (see attached screenshot with sensitive 
> data removed).
> Would you be able to please assist?
> Thank you very much in advance!
> 
> Copying whole description and investigation here as well:
> 
> [Documentation|https://cwiki.apache.org/confluence/display/solr/Other+Schema+Elements]
>  states:{quote}Schema defaults and copyFields cannot be used to populate the 
> uniqueKey field. You can use UUIDUpdateProcessorFactory to have uniqueKey 
> values generated automatically.{quote}
> Therefore I have added my uniqueKey field to the schema:{code}<fieldType name="uuid" class="solr.UUIDField" indexed="true" />
> ...
> <field name="id" type="uuid" indexed="true" stored="true" required="true" />
> ...
> <uniqueKey>id</uniqueKey>{code}Then I have added updateRequestProcessorChain 
> to my solrconfig:{code}<updateRequestProcessorChain name="uuid">
>   <processor class="solr.UUIDUpdateProcessorFactory">
>     <str name="fieldName">id</str>
>   </processor>
>   <processor class="solr.RunUpdateProcessorFactory" />
> </updateRequestProcessorChain>{code}And made it the default for the 
> UpdateRequestHandler:{code}<requestHandler name="/update" class="solr.UpdateRequestHandler">
>   <lst name="defaults">
>     <str name="update.chain">uuid</str>
>   </lst>
> </requestHandler>{code}
> Adding new documents with a null/absent id works fine both from the web interface of 
> one of the replicas and when using SOLR in standalone mode (non-cloud) from 
> my application. However, when I'm using SolrCloud and add a document from 
> my application (using CloudSolrClient from SolrJ) it fails with 
> "org.apache.solr.client.solrj.SolrServerException: 
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: 
> Document is missing mandatory uniqueKey field: id"
> All other operations like ping or search for documents work fine in either 
> mode (standalone or cloud).
> INVESTIGATION (i.e. more details):
> In standalone mode obviously update request is:{code}POST 
> standalone_host:port/solr/collection_name/update?wt=json{code}
> In SOLR cloud mode, when adding document from one replica's web interface, 
> update request is (found through inspecting the call made by web interface): 
> {code}POST 
> replica_host:port/solr/collection_name_shard1_replica_1/update?wt=json{code}
> In both these cases payload is something like:{code}{
> "add": {
> "doc": {
>  .
> },
> "boost": 1.0,
> "overwrite": true,
> "commitWithin": 1000
> }
> }{code}
> In case when CloudSolrClient is used, the following happens (found through 
> debugging):
> Using ZK and some logic, URL list of replicas is constructed that looks like 
> this:{code}[http://replica_1_host:port/solr/collection_name/,
>  http://replica_2_host:port/solr/collection_name/,
>  http://replica_3_host:port/solr/collection_name/]{code}
> This code is called:{code}LBHttpSolrClient.Req req = new 

[jira] [Updated] (SOLR-9493) uniqueKey generation fails if content POSTed as "application/javabin".

2016-09-12 Thread Yury Kartsev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Kartsev updated SOLR-9493:
---
Attachment: SolrInputDoc_headers.png
SolrInputDoc_contents.png

I just added some quick code that converts my Serializable Solr entity with 
"org.apache.solr.client.solrj.beans.Field" annotations into a SolrInputDocument 
(with this map of field name -> value) and used solrClient.add. It worked just 
fine (as it worked for you), and the uniqueKey was generated perfectly. 
{code}SolrInputDocument doc = new SolrInputDocument();
// few lines of doc.addField(FIELD_NAME, mySolrEntity.getFieldValue());
solrClient.add(doc);{code}

By the way, I have checked these requests in the proxy. See the screenshots 
SolrInputDoc_contents and SolrInputDoc_headers. The headers are exactly the same, 
although the contents differ (obviously it's a Map). I am now wondering whether the 
issue is that I'm passing the uniqueKey as NULL instead of not passing it at all.

> uniqueKey generation fails if content POSTed as "application/javabin".
> --
>
> Key: SOLR-9493
> URL: https://issues.apache.org/jira/browse/SOLR-9493
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yury Kartsev
> Attachments: 200.png, 400.png, Screen Shot 2016-09-11 at 16.29.50 
> .png, SolrInputDoc_contents.png, SolrInputDoc_headers.png
>
>
> I have faced a weird issue when the same application code (using SolrJ) fails 
> indexing a document without a unique key (should be auto-generated by SOLR) 
> in SolrCloud and succeeds indexing it in standalone SOLR instance (or even in 
> cloud mode, but from web interface of one of the replicas). Difference is 
> obviously only between clients (CloudSolrClient vs HttpSolrClient) and SOLR 
> URLs (ZooKeeper hostname+port vs standalone SOLR instance hostname and port). 
> Failure is seen as "org.apache.solr.client.solrj.SolrServerException: 
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: 
> Document is missing mandatory uniqueKey field: id".
> I am using SOLR 5.1. In cloud mode I have 1 shard and 3 replicas.
> After a lot of debugging and investigation (see below as well as my 
> [StackOverflow 
> post|http://stackoverflow.com/questions/39401792/uniquekey-generation-does-not-work-in-solrcloud-but-works-if-standalone])
>  I came to a conclusion that the difference in failing and succeeding calls 
> is simply content type of the POSTing requests. Local proxy clearly shows 
> that the request fails if content is sent as "application/javabin" (see 
> attached screenshot with sensitive data removed) and succeeds if content sent 
> as "application/xml; charset=UTF-8"  (see attached screenshot with sensitive 
> data removed).
> Would you be able to please assist?
> Thank you very much in advance!
> 
> Copying whole description and investigation here as well:
> 
> [Documentation|https://cwiki.apache.org/confluence/display/solr/Other+Schema+Elements]
>  states:{quote}Schema defaults and copyFields cannot be used to populate the 
> uniqueKey field. You can use UUIDUpdateProcessorFactory to have uniqueKey 
> values generated automatically.{quote}
> Therefore I have added my uniqueKey field to the schema:{code}<fieldType name="uuid" class="solr.UUIDField" indexed="true" />
> ...
> <field name="id" type="uuid" indexed="true" stored="true" required="true" />
> ...
> <uniqueKey>id</uniqueKey>{code}Then I have added updateRequestProcessorChain 
> to my solrconfig:{code}<updateRequestProcessorChain name="uuid">
>   <processor class="solr.UUIDUpdateProcessorFactory">
>     <str name="fieldName">id</str>
>   </processor>
>   <processor class="solr.RunUpdateProcessorFactory" />
> </updateRequestProcessorChain>{code}And made it the default for the 
> UpdateRequestHandler:{code}<requestHandler name="/update" class="solr.UpdateRequestHandler">
>   <lst name="defaults">
>     <str name="update.chain">uuid</str>
>   </lst>
> </requestHandler>{code}
> Adding new documents with a null/absent id works fine both from the web interface of 
> one of the replicas and when using SOLR in standalone mode (non-cloud) from 
> my application. However, when I'm using SolrCloud and add a document from 
> my application (using CloudSolrClient from SolrJ) it fails with 
> "org.apache.solr.client.solrj.SolrServerException: 
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: 
> Document is missing mandatory uniqueKey field: id"
> All other operations like ping or search for documents work fine in either 
> mode (standalone or cloud).
> INVESTIGATION (i.e. more details):
> In standalone mode obviously update request is:{code}POST 
> standalone_host:port/solr/collection_name/update?wt=json{code}
> In SOLR cloud mode, when adding document from one replica's web interface, 
> update request is (found through inspecting the call made by web interface): 
> {code}POST 
> replica_host:port/solr/collection_name_shard1_replica_1/update?wt=json{code}
> In both these cases payload is something like:{code}{
> "add": {
> "doc": {
>  .
> },
> "boost": 1.0,
> "overwrite": true,
> 

[jira] [Commented] (SOLR-8757) Swap + unload does not work

2016-09-12 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484932#comment-15484932
 ] 

Shawn Heisey commented on SOLR-8757:


You don't need an extension to do this.  The CoreAdmin can already do it 
without custom code.

Just ask CoreAdmin to swap the two cores, then unload the "old" one.

I personally just keep two cores for every index -- a build core and a live 
core.  I swap them as needed with the HTTP API, and never unload either one.  
The directory names for these cores do not include "live" or "build" ... for 
the cores named s0build and s0live, the directories are s0_0 and s0_1.
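
A rough SolrJ sketch of that flow (untested; the core names match the example above, and the same thing can be done with plain HTTP calls to /solr/admin/cores using action=SWAP and action=UNLOAD):

{code}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.CoreAdminRequest;
import org.apache.solr.common.params.CoreAdminParams.CoreAdminAction;

public class SwapAndUnload {
  public static void main(String[] args) throws Exception {
    SolrClient client = new HttpSolrClient("http://localhost:8983/solr");

    // Swap the freshly built core into place.
    CoreAdminRequest swap = new CoreAdminRequest();
    swap.setAction(CoreAdminAction.SWAP);
    swap.setCoreName("s0build");
    swap.setOtherCoreName("s0live");
    swap.process(client);

    // After the swap, "s0build" points at the old live index; unload it,
    // optionally deleting the index and instance directories.
    CoreAdminRequest.unloadCore("s0build", true, true, client);

    client.close();
  }
}
{code}

If you keep both cores around permanently, as described above, the unload step is unnecessary.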


> Swap + unload does not work
> ---
>
> Key: SOLR-8757
> URL: https://issues.apache.org/jira/browse/SOLR-8757
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 5.5
>Reporter: Fabrizio Fortino
>
> I have created a Solr CoreAdminHandler extension with the goal of swapping two 
> cores and removing the old one.
> My code looks like this:
> SolrCore core = coreContainer.create("newcore", coreProps)
> coreContainer.swap("newcore", "livecore")
> // the old livecore is now newcore, so unload it and remove all the related 
> dirs
> coreContainer.unload("newcore", true, true, true)
> After the last statement gets executed, the Solr log starts printing the 
> following messages forever
> 61424 INFO (pool-1-thread-1) [ x:newcore] o.a.s.c.SolrCore Core newcore is 
> not yet closed, waiting 100 ms before checking again.
> I tried to call the close() method on the SolrCore instance before and after 
> the unload but the result is the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7398) Nested Span Queries are buggy

2016-09-12 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484910#comment-15484910
 ] 

Paul Elschot commented on LUCENE-7398:
--

As to missing matches due to lazy iteration, I'd prefer to add an option that 
allows choosing between the current behaviour, the above patch (because I think it is 
slightly better than the previous 4.10 behaviour), one that misses no matches, and 
perhaps more.
For example, would anyone like a SpanWindowQuery that only uses span start 
positions? That would at least allow an easy, complete implementation.
And we need to document the current behaviour: ordered means no overlap, while 
non-ordered allows overlap.

To improve scoring consistency, we could start by requiring that span near 
queries score the same as phrases.
There is a problem for nested span queries in that current similarities have a 
tf component over a complete document field, and this tf does not play well 
with the sloppy frequency for SpanNear over SpanOr. I'd like each term 
occurrence of a SpanTerm to contribute the same (idf-like) weight to a 
SpanNear, but that can currently not be done because the spans of a SpanOr do 
not have a weight. So when mixing terms with SpanOr it will be hard to get the 
same scoring as a boolean OR over PhraseQueries. I don't know how to resolve 
this; we may have to add something to the similarities for this.
SpanBoostQuery would only make sense when the individual Spans occurrences can 
carry a weight.
I'd prefer span scoring consistency to have its own JIRA issue(s).



> Nested Span Queries are buggy
> -
>
> Key: LUCENE-7398
> URL: https://issues.apache.org/jira/browse/LUCENE-7398
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 5.5, 6.x
>Reporter: Christoph Goller
>Assignee: Alan Woodward
>Priority: Critical
> Attachments: LUCENE-7398-20160814.patch, LUCENE-7398.patch, 
> LUCENE-7398.patch, TestSpanCollection.java
>
>
> Example for a nested SpanQuery that is not working:
> Document: Human Genome Organization , HUGO , is trying to coordinate gene 
> mapping research worldwide.
> Query: spanNear([body:coordinate, spanOr([spanNear([body:gene, body:mapping], 
> 0, true), body:gene]), body:research], 0, true)
> The query should match "coordinate gene mapping research" as well as 
> "coordinate gene research". It does not match  "coordinate gene mapping 
> research" with Lucene 5.5 or 6.1, it did however match with Lucene 4.10.4. It 
> probably stopped working with the changes on SpanQueries in 5.3. I will 
> attach a unit test that shows the problem.
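
For anyone wanting to reproduce this programmatically, the query in the description can be built with the span API along these lines (a sketch mirroring the toString above):

{code}
import org.apache.lucene.index.Term;
import org.apache.lucene.search.spans.SpanNearQuery;
import org.apache.lucene.search.spans.SpanOrQuery;
import org.apache.lucene.search.spans.SpanQuery;
import org.apache.lucene.search.spans.SpanTermQuery;

public class NestedSpanExample {
  public static SpanQuery build() {
    SpanQuery gene = new SpanTermQuery(new Term("body", "gene"));
    SpanQuery mapping = new SpanTermQuery(new Term("body", "mapping"));

    // spanNear([body:gene, body:mapping], 0, true)
    SpanQuery geneMapping = new SpanNearQuery(new SpanQuery[]{gene, mapping}, 0, true);

    // spanOr([spanNear(...), body:gene])
    SpanQuery genePart = new SpanOrQuery(geneMapping, gene);

    // spanNear([body:coordinate, spanOr(...), body:research], 0, true)
    // Expected to match both "coordinate gene research" and
    // "coordinate gene mapping research"; the latter currently does not match.
    return new SpanNearQuery(new SpanQuery[]{
        new SpanTermQuery(new Term("body", "coordinate")),
        genePart,
        new SpanTermQuery(new Term("body", "research"))
    }, 0, true);
  }
}
{code}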



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8097) Implement a builder pattern for constructing a Solrj client

2016-09-12 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484908#comment-15484908
 ] 

Anshum Gupta commented on SOLR-8097:


Here's the discussion about that: 
https://issues.apache.org/jira/browse/SOLR-8097?focusedCommentId=15227338=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15227338

> Implement a builder pattern for constructing a Solrj client
> ---
>
> Key: SOLR-8097
> URL: https://issues.apache.org/jira/browse/SOLR-8097
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: 6.0
>Reporter: Hrishikesh Gadre
>Assignee: Anshum Gupta
> Fix For: 6.2, master (7.0)
>
> Attachments: SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch
>
>
> Currently Solrj clients (e.g. CloudSolrClient) supports multiple constructors 
> as follows,
> public CloudSolrClient(String zkHost) 
> public CloudSolrClient(String zkHost, HttpClient httpClient) 
> public CloudSolrClient(Collection<String> zkHosts, String chroot)
> public CloudSolrClient(Collection<String> zkHosts, String chroot, HttpClient 
> httpClient)
> public CloudSolrClient(String zkHost, boolean updatesToLeaders)
> public CloudSolrClient(String zkHost, boolean updatesToLeaders, HttpClient 
> httpClient)
> It is kind of problematic when introducing additional parameters (since 
> we need to introduce additional constructors). Instead it would be helpful to 
> provide a SolrClient Builder which can either provide default values or support 
> overriding specific parameters. 
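
A generic sketch of the pattern being proposed (hypothetical names, not the final SolrJ API): each optional setting becomes a builder method with its default in one place, so adding a parameter no longer means adding constructors.

{code}
public class SolrClientConfig {
  private final String zkHost;
  private final String chroot;
  private final boolean updatesToLeaders;

  private SolrClientConfig(Builder b) {
    this.zkHost = b.zkHost;
    this.chroot = b.chroot;
    this.updatesToLeaders = b.updatesToLeaders;
  }

  public static class Builder {
    private String zkHost;
    private String chroot = null;             // defaults live here, not in N constructors
    private boolean updatesToLeaders = true;

    public Builder withZkHost(String zkHost) { this.zkHost = zkHost; return this; }
    public Builder withChroot(String chroot) { this.chroot = chroot; return this; }
    public Builder withUpdatesToLeaders(boolean updatesToLeaders) {
      this.updatesToLeaders = updatesToLeaders;
      return this;
    }
    public SolrClientConfig build() { return new SolrClientConfig(this); }
  }
}

// Usage: new SolrClientConfig.Builder().withZkHost("zk1:2181").withChroot("/solr").build();
{code}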



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7318) Graduate StandardAnalyzer out of analyzers module into core

2016-09-12 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler resolved LUCENE-7318.
---
Resolution: Fixed

I committed to 6.x and 6.2 (including the addition of the deprecated classes). I also 
forward-ported the LowercaseFilter and StopFilter changes to master, so the 
Javadocs of the analysis/common module are consistent.

> Graduate StandardAnalyzer out of analyzers module into core
> ---
>
> Key: LUCENE-7318
> URL: https://issues.apache.org/jira/browse/LUCENE-7318
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Blocker
> Fix For: master (7.0), 6.2.1, 6.2
>
> Attachments: LUCENE-7318-backwards.patch, 
> LUCENE-7318-backwards.patch, LUCENE-7318-backwards.patch, 
> LUCENE-7318-backwards.patch, LUCENE-7318.patch
>
>
> Spinoff from LUCENE-7314:
> {{StandardAnalyzer}} has progressed substantially since we broke out the 
> analyzers module ... it now follows a real Unicode standard (UAX #29 Unicode 
> Text Segmentation).  It's also much faster than it used to be, since it 
> switched to JFlex a while back.  Many bug fixes, etc.
> I think it would make a good default for most Lucene users, and we should 
> graduate it from the analyzers module into core, and make it the default for 
> {{IndexWriter}}.
> It's really quite crazy that users must go digging in the analyzers module to 
> get started with Lucene ... we don't make them dig through the codecs module 
> to find a good default codec ...
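
For context, a minimal pre-7.0 indexing setup looks roughly like this (a sketch); note that the analyzer has to be imported from the analyzers-common module and passed in explicitly, which is what this issue proposes to make the in-core default:

{code}
import java.io.IOException;
import org.apache.lucene.analysis.standard.StandardAnalyzer; // lives in analyzers-common today
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;

public class MinimalIndexing {
  public static void main(String[] args) throws IOException {
    Directory dir = new RAMDirectory();
    IndexWriterConfig config = new IndexWriterConfig(new StandardAnalyzer());
    try (IndexWriter writer = new IndexWriter(dir, config)) {
      // add documents here
    }
  }
}
{code}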



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7318) Graduate StandardAnalyzer out of analyzers module into core

2016-09-12 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-7318:
--
Attachment: LUCENE-7318-backwards.patch

Committed patch.

> Graduate StandardAnalyzer out of analyzers module into core
> ---
>
> Key: LUCENE-7318
> URL: https://issues.apache.org/jira/browse/LUCENE-7318
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Blocker
> Fix For: master (7.0), 6.2, 6.2.1
>
> Attachments: LUCENE-7318-backwards.patch, 
> LUCENE-7318-backwards.patch, LUCENE-7318-backwards.patch, 
> LUCENE-7318-backwards.patch, LUCENE-7318.patch
>
>
> Spinoff from LUCENE-7314:
> {{StandardAnalyzer}} has progressed substantially since we broke out the 
> analyzers module ... it now follows a real Unicode standard (UAX #29 Unicode 
> Text Segmentation).  It's also much faster than it used to be, since it 
> switched to JFlex a while back.  Many bug fixes, etc.
> I think it would make a good default for most Lucene users, and we should 
> graduate it from the analyzers module into core, and make it the default for 
> {{IndexWriter}}.
> It's really quite crazy that users must go digging in the analyzers module to 
> get started with Lucene ... we don't make them dig through the codecs module 
> to find a good default codec ...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9493) uniqueKey generation fails if content POSTed as "application/javabin".

2016-09-12 Thread Yury Kartsev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484855#comment-15484855
 ] 

Yury Kartsev edited comment on SOLR-9493 at 9/12/16 6:20 PM:
-

Thanks for your time. I've retried with TRACE log level with the following 
code: {code}CloudSolrClient cloudSolrClient = new 
CloudSolrClient(getSolrServerURL());
cloudSolrClient.setZkClientTimeout(getReadTimeout());
cloudSolrClient.setZkConnectTimeout(getConnectionTimeout());
cloudSolrClient.setDefaultCollection(getCollectionName());
// setting basic authentication in HTTP client
DefaultHttpClient httpClient = (DefaultHttpClient) 
cloudSolrClient.getLbClient().getHttpClient();
HttpClientUtil.setBasicAuth(httpClient, authUserName, 
authPassword);
// setting preemptive authentication in HTTP client to prevent 
"NonRepeatableRequestException"

httpClient.addRequestInterceptor(getPreemptiveBasicAuthInterceptor(authUserName,
 authPassword));

 solrClient.addBeans(beans); // called from different class 
(beans is Collection of my Serializable Solr entities with 
"org.apache.solr.client.solrj.beans.Field" annotations){code}
Yes, I am using basic authentication because my SOLR instances are secured with 
that. Also I'm not using multiple entries in this particular example. Well, 
yes, I'm passing Collection, but it consists of only one element.

The log looks like this:{code}2016-09-12 17:57:46.841 DEBUG (qtp1989972246-80) 
[c:xxx-collection s:shard1 r:core_node4 x:xxx-collection] 
o.e.j.s.HttpConnection 
HttpConnection@49800692[SelectChannelEndPoint@1af89b5d{/10.100.210.241:51788<->8983,Open,in,out,-,-,1/5,HttpConnection}{io=0/0,kio=0,kro=1}][p=HttpParser{s=CHUNKED_CONTENT,0
 of 
-1},g=HttpGenerator@177c5e8c{s=START},c=HttpChannelOverHttp@1313573b{r=7,c=false,a=IDLE,uri=//10.100.210.241:8983/solr/xxx-collection/update?wt=javabin=2}]
 parsed true HttpParser{s=CHUNKED_CONTENT,0 of -1}
2016-09-12 17:57:46.841 DEBUG (qtp1989972246-80) [c:xxx-collection s:shard1 
r:core_node4 x:xxx-collection] o.e.j.s.HttpChannel 
HttpChannelOverHttp@1313573b{r=7,c=false,a=IDLE,uri=//10.100.210.241:8983/solr/xxx-collection/update?wt=javabin=2}
 handle //10.100.210.241:8983/solr/xxx-collection/update?wt=javabin=2 
2016-09-12 17:57:46.841 DEBUG (qtp1989972246-80) [c:xxx-collection s:shard1 
r:core_node4 x:xxx-collection] o.e.j.s.HttpChannelState 
HttpChannelState@2bb68922{s=IDLE a=null i=true r=!P!U w=false} handling IDLE
2016-09-12 17:57:46.841 DEBUG (qtp1989972246-80) [c:xxx-collection s:shard1 
r:core_node4 x:xxx-collection] o.e.j.s.HttpChannel 
HttpChannelOverHttp@1313573b{r=7,c=false,a=DISPATCHED,uri=//10.100.210.241:8983/solr/xxx-collection/update?wt=javabin=2}
 action DISPATCH
2016-09-12 17:57:46.841 DEBUG (qtp1989972246-80) [c:xxx-collection s:shard1 
r:core_node4 x:xxx-collection] o.e.j.s.Server REQUEST on 
HttpChannelOverHttp@1313573b{r=7,c=false,a=DISPATCHED,uri=//10.100.210.241:8983/solr/xxx-collection/update?wt=javabin=2}
POST /solr/xxx-collection/update HTTP/1.1
User-Agent: Solr[org.apache.solr.client.solrj.impl.HttpSolrClient] 1.0
Transfer-Encoding: chunked
Content-Type: application/javabin
Host: 10.100.210.241:8983
Authorization: Basic *
2016-09-12 17:57:46.841 DEBUG (qtp1989972246-80) [c:xxx-collection s:shard1 
r:core_node4 x:xxx-collection] o.e.j.s.h.ContextHandler scope 
null||/solr/xxx-collection/update @ 
o.e.j.w.WebAppContext@2ac273d3{/solr,file:///Users/yury/Library/Solr/solr-6.2.0/server/solr-webapp/webapp/,AVAILABLE}{/Users/yury/Library/Solr/solr-6.2.0/server/solr-webapp/webapp}
2016-09-12 17:57:46.841 DEBUG (qtp1989972246-80) [c:xxx-collection s:shard1 
r:core_node4 x:xxx-collection] o.e.j.s.h.ContextHandler 
context=/solr||/xxx-collection/update @ 
o.e.j.w.WebAppContext@2ac273d3{/solr,file:///Users/yury/Library/Solr/solr-6.2.0/server/solr-webapp/webapp/,AVAILABLE}{/Users/yury/Library/Solr/solr-6.2.0/server/solr-webapp/webapp}
2016-09-12 17:57:46.841 DEBUG (qtp1989972246-80) [c:xxx-collection s:shard1 
r:core_node4 x:xxx-collection] o.e.j.s.session 
sessionManager=org.eclipse.jetty.server.session.HashSessionManager@33723e30
2016-09-12 17:57:46.841 DEBUG (qtp1989972246-80) [c:xxx-collection s:shard1 
r:core_node4 x:xxx-collection] o.e.j.s.session session=null
2016-09-12 17:57:46.841 DEBUG (qtp1989972246-80) [c:xxx-collection s:shard1 
r:core_node4 x:xxx-collection] o.e.j.s.ServletHandler servlet 
/solr|/xxx-collection/update|null -> 
default@5c13d641==org.eclipse.jetty.servlet.DefaultServlet,0,true
2016-09-12 17:57:46.842 DEBUG (qtp1989972246-80) [c:xxx-collection s:shard1 
r:core_node4 x:xxx-collection] o.e.j.s.ServletHandler 
chain=SolrRequestFilter->default@5c13d641==org.eclipse.jetty.servlet.DefaultServlet,0,true
2016-09-12 17:57:46.842 DEBUG (qtp1989972246-80) 

[jira] [Commented] (LUCENE-7318) Graduate StandardAnalyzer out of analyzers module into core

2016-09-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484860#comment-15484860
 ] 

ASF subversion and git services commented on LUCENE-7318:
-

Commit b39fcc12023b978c2d93a9446596729ca0e0e464 in lucene-solr's branch 
refs/heads/master from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b39fcc1 ]

LUCENE-7318: Forward port some changes (add StopFilter and LowercaseFilter at 
their original location)


> Graduate StandardAnalyzer out of analyzers module into core
> ---
>
> Key: LUCENE-7318
> URL: https://issues.apache.org/jira/browse/LUCENE-7318
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Blocker
> Fix For: master (7.0), 6.2, 6.2.1
>
> Attachments: LUCENE-7318-backwards.patch, 
> LUCENE-7318-backwards.patch, LUCENE-7318-backwards.patch, LUCENE-7318.patch
>
>
> Spinoff from LUCENE-7314:
> {{StandardAnalyzer}} has progressed substantially since we broke out the 
> analyzers module ... it now follows a real Unicode standard (UAX #29 Unicode 
> Text Segmentation).  It's also much faster than it used to be, since it 
> switched to JFlex a while back.  Many bug fixes, etc.
> I think it would make a good default for most Lucene users, and we should 
> graduate it from the analyzers module into core, and make it the default for 
> {{IndexWriter}}.
> It's really quite crazy that users must go digging in the analyzers module to 
> get started with Lucene ... we don't make them dig through the codecs module 
> to find a good default codec ...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9493) uniqueKey generation fails if content POSTed as "application/javabin".

2016-09-12 Thread Yury Kartsev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484855#comment-15484855
 ] 

Yury Kartsev commented on SOLR-9493:


Thanks for your time. I've retried with TRACE log level with the following 
code: {code}CloudSolrClient cloudSolrClient = new 
CloudSolrClient(getSolrServerURL());
cloudSolrClient.setZkClientTimeout(getReadTimeout());
cloudSolrClient.setZkConnectTimeout(getConnectionTimeout());
cloudSolrClient.setDefaultCollection(getCollectionName());
// setting basic authentication in HTTP client
DefaultHttpClient httpClient = (DefaultHttpClient) 
cloudSolrClient.getLbClient().getHttpClient();
HttpClientUtil.setBasicAuth(httpClient, authUserName, 
authPassword);
// setting preemptive authentication in HTTP client to prevent 
"NonRepeatableRequestException"

httpClient.addRequestInterceptor(getPreemptiveBasicAuthInterceptor(authUserName,
 authPassword));

 solrClient.addBeans(beans); // called from different class 
(beans is Collection of my Serializable Solr entities with 
"org.apache.solr.client.solrj.beans.Field" annotations){code}
Yes, I am using basic authentication because my SOLR instances are secured with 
that. Also I'm not using multiple entries in this particular example. Well, 
yes, I'm passing Collection, but it consists of only one element.

The log looks like this:{code}2016-09-12 17:57:46.841 DEBUG (qtp1989972246-80) 
[c:xxx-collection s:shard1 r:core_node4 x:xxx-collection] 
o.e.j.s.HttpConnection 
HttpConnection@49800692[SelectChannelEndPoint@1af89b5d{/10.100.210.241:51788<->8983,Open,in,out,-,-,1/5,HttpConnection}{io=0/0,kio=0,kro=1}][p=HttpParser{s=CHUNKED_CONTENT,0
 of 
-1},g=HttpGenerator@177c5e8c{s=START},c=HttpChannelOverHttp@1313573b{r=7,c=false,a=IDLE,uri=//10.100.210.241:8983/solr/xxx-collection/update?wt=javabin=2}]
 parsed true HttpParser{s=CHUNKED_CONTENT,0 of -1}
2016-09-12 17:57:46.841 DEBUG (qtp1989972246-80) [c:xxx-collection s:shard1 
r:core_node4 x:xxx-collection] o.e.j.s.HttpChannel 
HttpChannelOverHttp@1313573b{r=7,c=false,a=IDLE,uri=//10.100.210.241:8983/solr/xxx-collection/update?wt=javabin=2}
 handle //10.100.210.241:8983/solr/xxx-collection/update?wt=javabin=2 
2016-09-12 17:57:46.841 DEBUG (qtp1989972246-80) [c:xxx-collection s:shard1 
r:core_node4 x:xxx-collection] o.e.j.s.HttpChannelState 
HttpChannelState@2bb68922{s=IDLE a=null i=true r=!P!U w=false} handling IDLE
2016-09-12 17:57:46.841 DEBUG (qtp1989972246-80) [c:xxx-collection s:shard1 
r:core_node4 x:xxx-collection] o.e.j.s.HttpChannel 
HttpChannelOverHttp@1313573b{r=7,c=false,a=DISPATCHED,uri=//10.100.210.241:8983/solr/xxx-collection/update?wt=javabin=2}
 action DISPATCH
2016-09-12 17:57:46.841 DEBUG (qtp1989972246-80) [c:xxx-collection s:shard1 
r:core_node4 x:xxx-collection] o.e.j.s.Server REQUEST on 
HttpChannelOverHttp@1313573b{r=7,c=false,a=DISPATCHED,uri=//10.100.210.241:8983/solr/xxx-collection/update?wt=javabin=2}
POST /solr/xxx-collection/update HTTP/1.1
User-Agent: Solr[org.apache.solr.client.solrj.impl.HttpSolrClient] 1.0
Transfer-Encoding: chunked
Content-Type: application/javabin
Host: 10.100.210.241:8983
Authorization: Basic *
2016-09-12 17:57:46.841 DEBUG (qtp1989972246-80) [c:xxx-collection s:shard1 
r:core_node4 x:xxx-collection] o.e.j.s.h.ContextHandler scope 
null||/solr/xxx-collection/update @ 
o.e.j.w.WebAppContext@2ac273d3{/solr,file:///Users/yury/Library/Solr/solr-6.2.0/server/solr-webapp/webapp/,AVAILABLE}{/Users/yury/Library/Solr/solr-6.2.0/server/solr-webapp/webapp}
2016-09-12 17:57:46.841 DEBUG (qtp1989972246-80) [c:xxx-collection s:shard1 
r:core_node4 x:xxx-collection] o.e.j.s.h.ContextHandler 
context=/solr||/xxx-collection/update @ 
o.e.j.w.WebAppContext@2ac273d3{/solr,file:///Users/yury/Library/Solr/solr-6.2.0/server/solr-webapp/webapp/,AVAILABLE}{/Users/yury/Library/Solr/solr-6.2.0/server/solr-webapp/webapp}
2016-09-12 17:57:46.841 DEBUG (qtp1989972246-80) [c:xxx-collection s:shard1 
r:core_node4 x:xxx-collection] o.e.j.s.session 
sessionManager=org.eclipse.jetty.server.session.HashSessionManager@33723e30
2016-09-12 17:57:46.841 DEBUG (qtp1989972246-80) [c:xxx-collection s:shard1 
r:core_node4 x:xxx-collection] o.e.j.s.session session=null
2016-09-12 17:57:46.841 DEBUG (qtp1989972246-80) [c:xxx-collection s:shard1 
r:core_node4 x:xxx-collection] o.e.j.s.ServletHandler servlet 
/solr|/xxx-collection/update|null -> 
default@5c13d641==org.eclipse.jetty.servlet.DefaultServlet,0,true
2016-09-12 17:57:46.842 DEBUG (qtp1989972246-80) [c:xxx-collection s:shard1 
r:core_node4 x:xxx-collection] o.e.j.s.ServletHandler 
chain=SolrRequestFilter->default@5c13d641==org.eclipse.jetty.servlet.DefaultServlet,0,true
2016-09-12 17:57:46.842 DEBUG (qtp1989972246-80) [c:xxx-collection s:shard1 
r:core_node4 

[jira] [Commented] (SOLR-9470) Deadlocked threads in recovery

2016-09-12 Thread Michael Braun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484798#comment-15484798
 ] 

Michael Braun commented on SOLR-9470:
-

[~dsmiley] you're right - will dig deeper and figure out where it's actually 
being acquired.

> Deadlocked threads in recovery
> --
>
> Key: SOLR-9470
> URL: https://issues.apache.org/jira/browse/SOLR-9470
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.2
>Reporter: Michael Braun
> Attachments: solr-deadlock.txt
>
>
> Background: Booted up a cluster and replicas were in recovery. All replicas 
> recovered minus one, and it was hanging on HTTP requests. Issued shutdown and 
> solr would not shut down. Examined with JStack and found a deadlock had 
> occurred. The relevant thread information is attached. Some information has 
> been redacted as well (some custom URPs, IPs) from the stack traces.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.5-Linux (64bit/jdk1.7.0_80) - Build # 429 - Unstable!

2016-09-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Linux/429/
Java: 64bit/jdk1.7.0_80 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.schema.TestManagedSchemaAPI.test

Error Message:
Error from server at http://127.0.0.1:40603/solr/testschemaapi_shard1_replica2: 
ERROR: [doc=2] unknown field 'myNewField1'

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:40603/solr/testschemaapi_shard1_replica2: ERROR: 
[doc=2] unknown field 'myNewField1'
at 
__randomizedtesting.SeedInfo.seed([E7427EDE8206C7A8:6F1641042CFAAA50]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:653)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1002)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:891)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:827)
at 
org.apache.solr.schema.TestManagedSchemaAPI.testAddFieldAndDocument(TestManagedSchemaAPI.java:101)
at 
org.apache.solr.schema.TestManagedSchemaAPI.test(TestManagedSchemaAPI.java:69)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Commented] (LUCENE-7318) Graduate StandardAnalyzer out of analyzers module into core

2016-09-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484773#comment-15484773
 ] 

ASF subversion and git services commented on LUCENE-7318:
-

Commit 5b3e6deb2f19e917792c1b8f4909b9c28b2e7508 in lucene-solr's branch 
refs/heads/branch_6_2 from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5b3e6de ]

LUCENE-7318: Fix backwards compatibility issues around StandardAnalyzer and its 
components, introduced with Lucene 6.2.0. The moved classes were restored in 
their original packages: LowercaseFilter and StopFilter, as well as several 
utility classes

# Conflicts:
#   lucene/CHANGES.txt


> Graduate StandardAnalyzer out of analyzers module into core
> ---
>
> Key: LUCENE-7318
> URL: https://issues.apache.org/jira/browse/LUCENE-7318
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Blocker
> Fix For: master (7.0), 6.2, 6.2.1
>
> Attachments: LUCENE-7318-backwards.patch, 
> LUCENE-7318-backwards.patch, LUCENE-7318-backwards.patch, LUCENE-7318.patch
>
>
> Spinoff from LUCENE-7314:
> {{StandardAnalyzer}} has progressed substantially since we broke out the 
> analyzers module ... it now follows a real Unicode standard (UAX #29 Unicode 
> Text Segmentation).  It's also much faster than it used to be, since it 
> switched to JFlex a while back.  Many bug fixes, etc.
> I think it would make a good default for most Lucene users, and we should 
> graduate it from the analyzers module into core, and make it the default for 
> {{IndexWriter}}.
> It's really quite crazy that users must go digging in the analyzers module to 
> get started with Lucene ... we don't make them dig through the codecs module 
> to find a good default codec ...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7318) Graduate StandardAnalyzer out of analyzers module into core

2016-09-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484766#comment-15484766
 ] 

ASF subversion and git services commented on LUCENE-7318:
-

Commit 89f03655e386097142b59126e75c89a946f4 in lucene-solr's branch 
refs/heads/branch_6x from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=89f0365 ]

LUCENE-7318: Fix backwards compatibility issues around StandardAnalyzer and its 
components, introduced with Lucene 6.2.0. The moved classes were restored in 
their original packages: LowercaseFilter and StopFilter, as well as several 
utility classes


> Graduate StandardAnalyzer out of analyzers module into core
> ---
>
> Key: LUCENE-7318
> URL: https://issues.apache.org/jira/browse/LUCENE-7318
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Blocker
> Fix For: master (7.0), 6.2, 6.2.1
>
> Attachments: LUCENE-7318-backwards.patch, 
> LUCENE-7318-backwards.patch, LUCENE-7318-backwards.patch, LUCENE-7318.patch
>
>
> Spinoff from LUCENE-7314:
> {{StandardAnalyzer}} has progressed substantially since we broke out the 
> analyzers module ... it now follows a real Unicode standard (UAX #29 Unicode 
> Text Segmentation).  It's also much faster than it used to be, since it 
> switched to JFlex a while back.  Many bug fixes, etc.
> I think it would make a good default for most Lucene users, and we should 
> graduate it from the analyzers module into core, and make it the default for 
> {{IndexWriter}}.
> It's really quite crazy that users must go digging in the analyzers module to 
> get started with Lucene ... we don't make them dig through the codecs module 
> to find a good default codec ...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8097) Implement a builder pattern for constructing a Solrj client

2016-09-12 Thread Perrin Bignoli (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1548#comment-1548
 ] 

Perrin Bignoli edited comment on SOLR-8097 at 9/12/16 5:04 PM:
---

Why is the visibility of the following constructor in CloudSolrClient:

 private CloudSolrClient(Collection zkHosts, String chroot, HttpClient httpClient,
                         LBHttpSolrClient lbSolrClient, boolean updatesToLeaders,
                         boolean directUpdatesToLeadersOnly)

set to private and not protected?

There are also a number of private variables in CloudSolrClient that make 
subclassing difficult. I am not familiar enough with the source code to make 
an exhaustive list.


was (Author: perrin.bignoli):
Why is the the visibility of the following constructor in CloudSolrClient:

 private CloudSolrClient(Collection zkHosts, String chroot, 
HttpClient httpClient, LBHttpSolrClient lbSolrClient,
   boolean updatesToLeaders, boolean 
directUpdatesToLeadersOnly)

set to private and not protected?

> Implement a builder pattern for constructing a Solrj client
> ---
>
> Key: SOLR-8097
> URL: https://issues.apache.org/jira/browse/SOLR-8097
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: 6.0
>Reporter: Hrishikesh Gadre
>Assignee: Anshum Gupta
> Fix For: 6.2, master (7.0)
>
> Attachments: SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch
>
>
> Currently Solrj clients (e.g. CloudSolrClient) supports multiple constructors 
> as follows,
> public CloudSolrClient(String zkHost) 
> public CloudSolrClient(String zkHost, HttpClient httpClient) 
> public CloudSolrClient(Collection zkHosts, String chroot)
> public CloudSolrClient(Collection zkHosts, String chroot, HttpClient 
> httpClient)
> public CloudSolrClient(String zkHost, boolean updatesToLeaders)
> public CloudSolrClient(String zkHost, boolean updatesToLeaders, HttpClient 
> httpClient)
> It is kind of problematic while introducing an additional parameters (since 
> we need to introduce additional constructors). Instead it will be helpful to 
> provide SolrClient Builder which can provide either default values or support 
> overriding specific parameter. 
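
For illustration, the builder-style construction this issue describes (and that SolrJ 6.2 ships) looks roughly like the sketch below; the ZooKeeper address and collection name are placeholders, not taken from the issue:

{code}
// Illustrative sketch: constructing a CloudSolrClient via the Builder instead of
// the telescoping constructors listed above. Host and collection are placeholders.
import org.apache.solr.client.solrj.impl.CloudSolrClient;

public class BuilderExample {
  public static void main(String[] args) throws Exception {
    try (CloudSolrClient client = new CloudSolrClient.Builder()
            .withZkHost("localhost:9983")
            .build()) {
      client.setDefaultCollection("gettingstarted");
      // ... index or query with the client ...
    }
  }
}
{code}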



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7440) Document skipping on large indexes is broken

2016-09-12 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484634#comment-15484634
 ] 

Mike Drob commented on LUCENE-7440:
---

bq. Regarding the 1.8B docs number... at least in my tests I saw the top-level 
skip distance of ~268M w/ the default codec. Subtracting this from MAX_INT 
gives around 1.8B, which is around the number I saw prior to the overflow. To 
hit the bug, one also needs to be doing large skips toward the end of the index 
as well, in order to use the top level(s) of the multi-level skip list. Having 
a conjunction query of a highly unique term (or clause) in conjunction with a 
common term has a good chance of triggering (example: +timestamp:39520928456494 
+doctype:common)

Would this be faster to test if we configure a larger top-level skip distance? 
i.e. set up a skip distance of ~1B and then we'd only need to get to ~1.1B docs 
indexed (40% fewer docs, theoretically 40% faster?) or even set up a skip 
distance of ~2B to only need to index very few documents?

Maybe this idea should be split into a separate issue to focus on improving the 
test?
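
As a back-of-the-envelope check of the numbers quoted above (values are taken from the comment and are illustrative, not read from the codec source):

{code}
// Rough arithmetic behind the "~1.8B docs" figure mentioned in the comment.
public class SkipOverflowMath {
  public static void main(String[] args) {
    long maxInt = Integer.MAX_VALUE;   // 2,147,483,647
    long topLevelSkip = 268_435_456L;  // ~268M top-level skip distance, per the comment
    // Docs that must be indexed before a large skip near the end of the segment
    // can push an int-based computation past Integer.MAX_VALUE:
    System.out.println(maxInt - topLevelSkip); // 1,879,048,191 -- i.e. roughly 1.8B
  }
}
{code}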

> Document skipping on large indexes is broken
> 
>
> Key: LUCENE-7440
> URL: https://issues.apache.org/jira/browse/LUCENE-7440
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 2.2
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Critical
> Fix For: master (7.0), 6.3, 6.2.1
>
> Attachments: LUCENE-7440.patch, LUCENE-7440.patch
>
>
> Large skips on large indexes fail.
> Anything that uses skips (such as a boolean query, filtered queries, faceted 
> queries, join queries, etc) can trigger this bug on a sufficiently large 
> index.
> The bug is a numeric overflow in MultiLevelSkipList that has been present 
> since inception (Lucene 2.2).  It may not manifest until one has a single 
> segment with more than ~1.8B documents, and a large skip is performed on that 
> segment.
> Typical stack trace on Lucene7-dev:
> {code}
> java.lang.ArrayIndexOutOfBoundsException: 110
>   at 
> org.apache.lucene.codecs.MultiLevelSkipListReader$SkipBuffer.readByte(MultiLevelSkipListReader.java:297)
>   at org.apache.lucene.store.DataInput.readVInt(DataInput.java:125)
>   at 
> org.apache.lucene.codecs.lucene50.Lucene50SkipReader.readSkipData(Lucene50SkipReader.java:180)
>   at 
> org.apache.lucene.codecs.MultiLevelSkipListReader.loadNextSkip(MultiLevelSkipListReader.java:163)
>   at 
> org.apache.lucene.codecs.MultiLevelSkipListReader.skipTo(MultiLevelSkipListReader.java:133)
>   at 
> org.apache.lucene.codecs.lucene50.Lucene50PostingsReader$BlockDocsEnum.advance(Lucene50PostingsReader.java:421)
>   at YCS_skip7$1.testSkip(YCS_skip7.java:307)
> {code}
> Typical stack trace on Lucene4.10.3:
> {code}
> 6-08-31 18:57:17,460 ERROR org.apache.solr.servlet.SolrDispatchFilter: 
> null:java.lang.ArrayIndexOutOfBoundsException: 75
>  at 
> org.apache.lucene.codecs.MultiLevelSkipListReader$SkipBuffer.readByte(MultiLevelSkipListReader.java:301)
>  at org.apache.lucene.store.DataInput.readVInt(DataInput.java:122)
>  at 
> org.apache.lucene.codecs.lucene41.Lucene41SkipReader.readSkipData(Lucene41SkipReader.java:194)
>  at 
> org.apache.lucene.codecs.MultiLevelSkipListReader.loadNextSkip(MultiLevelSkipListReader.java:168)
>  at 
> org.apache.lucene.codecs.MultiLevelSkipListReader.skipTo(MultiLevelSkipListReader.java:138)
>  at 
> org.apache.lucene.codecs.lucene41.Lucene41PostingsReader$BlockDocsEnum.advance(Lucene41PostingsReader.java:506)
>  at org.apache.lucene.search.TermScorer.advance(TermScorer.java:85)
> [...]
>  at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:621)
> [...]
>  at org.apache.solr.core.SolrCore.execute(SolrCore.java:2004)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9493) uniqueKey generation fails if content POSTed as "application/javabin".

2016-09-12 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484595#comment-15484595
 ] 

Alexandre Rafalovitch commented on SOLR-9493:
-

I am not able to reproduce this using the following basic code against a 
single-node, single-shard cloud example:
{noformat}
String zkHostString = "localhost:9983";
CloudSolrClient solr = new 
CloudSolrClient.Builder().withZkHost(zkHostString).build();
solr.setDefaultCollection("gettingstarted");

SolrInputDocument doc = new SolrInputDocument();
doc.addField("fielda", "valuec");
doc.addField("fieldb", "valued");

solr.add(doc);
solr.commit();
solr.close();
{noformat}

If I enable full TRACEing (literally setting root to TRACE in the Admin UI 
under Logging/Level), I see my javabin request coming in in the *solr.log* 
file (not the Admin UI, which has an INFO level limit). 

However, my request seems to have different headers from yours. I get the 
following:
{noformat}
DEBUG - 2016-09-12 16:31:37.644; [   ] org.eclipse.jetty.server.Server; REQUEST 
on 
HttpChannelOverHttp@43c0621b{r=1,c=false,a=DISPATCHED,uri=//192.168.50.128:8983/solr/gettingstarted/update?wt=javabin=2}
POST /solr/gettingstarted/update HTTP/1.1
User-Agent: Solr[org.apache.solr.client.solrj.impl.HttpSolrClient] 1.0
Content-Length: 70
Content-Type: application/javabin
Host: 192.168.50.128:8983
Connection: keep-alive
{noformat}

Yours seems to be using chunked transfer (multiple entries? try just one) and has an 
authorization: basic header (are you doing anything with that?).

Later in the log I see:
{noformat}
DEBUG - 2016-09-12 16:31:37.664; [c:gettingstarted s:shard1 r:core_node1 
x:gettingstarted_shard1_replica1] 
org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor; 
PRE_UPDATE add{,id=da8f101d-b4ac-44c1-932e-1b8c03852c6b} 
{update.chain=add-unknown-fields-to-the-schema=_text_=javabin=2}
{noformat}

This shows that the chain has triggered and the id has been assigned. Are you 
seeing anything similar to that?

> uniqueKey generation fails if content POSTed as "application/javabin".
> --
>
> Key: SOLR-9493
> URL: https://issues.apache.org/jira/browse/SOLR-9493
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yury Kartsev
> Attachments: 200.png, 400.png, Screen Shot 2016-09-11 at 16.29.50 .png
>
>
> I have faced a weird issue when the same application code (using SolrJ) fails 
> indexing a document without a unique key (should be auto-generated by SOLR) 
> in SolrCloud and succeeds indexing it in standalone SOLR instance (or even in 
> cloud mode, but from web interface of one of the replicas). Difference is 
> obviously only between clients (CloudSolrClient vs HttpSolrClient) and SOLR 
> URLs (Zokeeper hostname+port vs standalone SOLR instance hostname and port). 
> Failure is seen as "org.apache.solr.client.solrj.SolrServerException: 
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: 
> Document is missing mandatory uniqueKey field: id".
> I am using SOLR 5.1. In cloud mode I have 1 shard and 3 replicas.
> After lot of debugging and investigation (see below as well as my 
> [StackOverflow 
> post|http://stackoverflow.com/questions/39401792/uniquekey-generation-does-not-work-in-solrcloud-but-works-if-standalone])
>  I came to a conclusion that the difference in failing and succeeding calls 
> is simply content type of the POSTing requests. Local proxy clearly shows 
> that the request fails if content is sent as "application/javabin" (see 
> attached screenshot with sensitive data removed) and succeeds if content sent 
> as "application/xml; charset=UTF-8"  (see attached screenshot with sensitive 
> data removed).
> Would you be able to please assist?
> Thank you very much in advance!
> 
> Copying whole description and investigation here as well:
> 
> [Documentation|https://cwiki.apache.org/confluence/display/solr/Other+Schema+Elements]
>  states:{quote}Schema defaults and copyFields cannot be used to populate the 
> uniqueKey field. You can use UUIDUpdateProcessorFactory to have uniqueKey 
> values generated automatically.{quote}
> Therefore I have added my uniqueKey field to the schema:{code} name="uuid" class="solr.UUIDField" indexed="true" />
> ...
> 
> ...
> id{code}Then I have added updateRequestProcessorChain 
> to my solrconfig:{code}
> 
> id
> 
> 
> {code}And made it the default for the 
> UpdateRequestHandler:{code}
>  
>   uuid
>  
> {code}
> Adding new documents with null/absent id works fine as from web-interface of 
> one of the replicas, as when using SOLR in standalone mode (non-cloud) from 
> my application. 

[jira] [Updated] (LUCENE-7425) poll-mirrors.pl requires additional perl packages?

2016-09-12 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-7425:
---
Attachment: LUCENE-7425-add-path-and-details-options.patch

Patch adding -path and -details options.

Two other cosmetic changes:
* date/time stamp printing now excludes milliseconds
* seconds to wait between polling intervals now rounded to the nearest second

> poll-mirrors.pl requires additional perl packages?
> --
>
> Key: LUCENE-7425
> URL: https://issues.apache.org/jira/browse/LUCENE-7425
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
> Fix For: 6.x, master (7.0)
>
> Attachments: LUCENE-7425-add-path-and-details-options.patch, 
> LUCENE-7425.patch
>
>
> I have a newish Ubuntu 16.04.1 install ... and I'm doing the Lucene/Solr 
> 6.2.0 release on it.
> Our release process is already hard enough.
> When I get to the step to poll the mirrors to see whether Maven central and 
> the apache mirrors have the release bits yet, I hit this:
> {noformat}
> 14:51 $ perl ../dev-tools/scripts/poll-mirrors.pl -version 6.2.0
> perl ../dev-tools/scripts/poll-mirrors.pl -version 6.2.0
> Can't locate LWP/UserAgent.pm in @INC (you may need to install the 
> LWP::UserAgent module) (@INC contains: /etc/perl 
> /usr/local/lib/x86_64-linux-gnu/perl/5.22.1 /usr/local/share/perl/5.22.1 
> /usr/lib/x86_64-linux-gnu/perl5/5.22 /usr/share/perl5 
> /usr/lib/x86_64-linux-gnu/perl/5.22 /usr/share/perl/5.22 
> /usr/local/lib/site_perl /usr/lib/x86_64-linux-gnu/perl-base .) at 
> ../dev-tools/scripts/poll-mirrors.pl line 31.
> BEGIN failed--compilation aborted at ../dev-tools/scripts/poll-mirrors.pl 
> line 31.
> {noformat}
> How can it be that such a trivial script would need optional perl packages 
> installed?  It seems all it's trying to do is download stuff over HTTP at 
> this point?
> So I fire up {{cpan}}, asking it to install {{LWP/UserAgent.pm}} and it hits 
> all sorts of errors that I cannot understand.
> Can we somehow simplify this script to use mere mortal perl packages?  Or is 
> something badly wrong with my Ubuntu install?  Maybe we should rewrite this 
> in a proper scripting language that has batteries included and also starts 
> with P ;)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (LUCENE-7425) poll-mirrors.pl requires additional perl packages?

2016-09-12 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe reopened LUCENE-7425:


[~hossman]'s improvements from LUCENE-5104 were not ported to python.  

Reopening to port the -details and -path capabilities to the python version.

> poll-mirrors.pl requires additional perl packages?
> --
>
> Key: LUCENE-7425
> URL: https://issues.apache.org/jira/browse/LUCENE-7425
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
> Fix For: 6.x, master (7.0)
>
> Attachments: LUCENE-7425.patch
>
>
> I have a newish Ubuntu 16.04.1 install ... and I'm doing the Lucene/Solr 
> 6.2.0 release on it.
> Our release process is already hard enough.
> When I get to the step to poll the mirrors to see whether Maven central and 
> the apache mirrors have the release bits yet, I hit this:
> {noformat}
> 14:51 $ perl ../dev-tools/scripts/poll-mirrors.pl -version 6.2.0
> perl ../dev-tools/scripts/poll-mirrors.pl -version 6.2.0
> Can't locate LWP/UserAgent.pm in @INC (you may need to install the 
> LWP::UserAgent module) (@INC contains: /etc/perl 
> /usr/local/lib/x86_64-linux-gnu/perl/5.22.1 /usr/local/share/perl/5.22.1 
> /usr/lib/x86_64-linux-gnu/perl5/5.22 /usr/share/perl5 
> /usr/lib/x86_64-linux-gnu/perl/5.22 /usr/share/perl/5.22 
> /usr/local/lib/site_perl /usr/lib/x86_64-linux-gnu/perl-base .) at 
> ../dev-tools/scripts/poll-mirrors.pl line 31.
> BEGIN failed--compilation aborted at ../dev-tools/scripts/poll-mirrors.pl 
> line 31.
> {noformat}
> How can it be that such a trivial script would need optional perl packages 
> installed?  It seems all it's trying to do is download stuff over HTTP at 
> this point?
> So I fire up {{cpan}}, asking it to install {{LWP/UserAgent.pm}} and it hits 
> all sorts of errors that I cannot understand.
> Can we somehow simplify this script to use mere mortal perl packages?  Or is 
> something badly wrong with my Ubuntu install?  Maybe we should rewrite this 
> in a proper scripting language that has batteries included and also starts 
> with P ;)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9408) Add solr commit data in TreeMergeRecordWriter

2016-09-12 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-9408:

Attachment: SOLR-9408.patch

Attaching an updated patch against master. I'll commit this soon.

> Add solr commit data in TreeMergeRecordWriter
> -
>
> Key: SOLR-9408
> URL: https://issues.apache.org/jira/browse/SOLR-9408
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - MapReduce
>Reporter: Jessica Cheng Mallet
>Assignee: Varun Thacker
>  Labels: mapreduce, solrcloud
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9408.patch, SOLR-9408.patch, SOLR-9408.patch
>
>
> The lucene index produced by TreeMergeRecordWriter when the segments are 
> merged doesn't contain Solr's commit data, specifically, commitTimeMsec.
> This means that when this index is subsequently loaded into SolrCloud and if 
> the index stays unchanged so no newer commits occurs, ADDREPLICA will appear 
> to succeed but will not actually do any full sync due to SOLR-9369, resulting 
> in adding an empty index as a replica.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9493) uniqueKey generation fails if content POSTed as "application/javabin".

2016-09-12 Thread Yury Kartsev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15482373#comment-15482373
 ] 

Yury Kartsev edited comment on SOLR-9493 at 9/12/16 4:01 PM:
-

[~arafalov] I have spent some time and tried version 6.2.

Version 6.2 gives the same error, although in both cases now (while using 
SolrJ). What I mean by that is that both CloudSolrClient and HttpSolrClient end 
up sending the payload as "application/javabin" now (still through the same place 
in HttpSolrClient, i.e. {code}final HttpResponse response = 
httpClient.execute(method);{code}). In version 5.1 HttpSolrClient (when not in 
cloud mode) was sending the payload as "application/xml; charset=UTF-8" and that 
worked (generated a uniqueKey) - see above.

The case with the payload sent as JSON (or XML) still works fine and generates a 
uniqueKey without any issues. I ran it from the SOLR web interface (Collection -> 
Documents -> /update).

Please see the screenshot from the local proxy. The first request was sent by SolrJ 
in Cloud Mode (Solr started with ZK and the -c switch, plus CloudSolrClient is used). 
The second request was sent in Standalone Mode (Solr started without the -c switch, 
collection created locally, HttpSolrClient is used). The third request was made by 
the SOLR web UI while posting a document without an ID as JSON (the ID was 
auto-generated successfully).

So there is definitely some issue with the uniqueKey not being generated when content 
is posted as "application/javabin".


was (Author: jpro@gmail.com):
[~arafalov] I have spent some time and tried version 6.2.

Version 6.2 gives the same error, although in both cases now when using SolrJ. 
What I mean by that is that both CloudSolrClient and HttpSolrClient end up 
sending payload as "application/javabin" now (still through the same place of 
HttpSolrClient, i.e. {code}final HttpResponse response = 
httpClient.execute(method);{code} In version 5.1 HttpSolrClient (when not in 
cloud mode) was sending payload as "application/xml; charset=UTF-8" and that 
worked (generated uniqueKey) - see above.

Case with payload sent as JSON (or XML) still works fine and generates 
uniqueKey without any issues. I ran it from SOLR web interface (Collection -> 
Documents -> /update).

Please see screenshot from local proxy. First request sent by SolrJ when in 
Cloud Mode (Solr started with ZK and -c switch, plus CloudColrClient is used). 
Second request sent when in Standalone Mode (Solr started without -c switch, 
collection created locally, HttpSolrClient is used). Third request was made by 
SOLR web UI while posting a document without ID as JSON (ID was auto-generated 
successfully).

So there is definitely some issue there uniqueKey not generating when content 
is posted as "application/javabin".

> uniqueKey generation fails if content POSTed as "application/javabin".
> --
>
> Key: SOLR-9493
> URL: https://issues.apache.org/jira/browse/SOLR-9493
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yury Kartsev
> Attachments: 200.png, 400.png, Screen Shot 2016-09-11 at 16.29.50 .png
>
>
> I have faced a weird issue when the same application code (using SolrJ) fails 
> indexing a document without a unique key (should be auto-generated by SOLR) 
> in SolrCloud and succeeds indexing it in standalone SOLR instance (or even in 
> cloud mode, but from web interface of one of the replicas). Difference is 
> obviously only between clients (CloudSolrClient vs HttpSolrClient) and SOLR 
> URLs (Zokeeper hostname+port vs standalone SOLR instance hostname and port). 
> Failure is seen as "org.apache.solr.client.solrj.SolrServerException: 
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: 
> Document is missing mandatory uniqueKey field: id".
> I am using SOLR 5.1. In cloud mode I have 1 shard and 3 replicas.
> After lot of debugging and investigation (see below as well as my 
> [StackOverflow 
> post|http://stackoverflow.com/questions/39401792/uniquekey-generation-does-not-work-in-solrcloud-but-works-if-standalone])
>  I came to a conclusion that the difference in failing and succeeding calls 
> is simply content type of the POSTing requests. Local proxy clearly shows 
> that the request fails if content is sent as "application/javabin" (see 
> attached screenshot with sensitive data removed) and succeeds if content sent 
> as "application/xml; charset=UTF-8"  (see attached screenshot with sensitive 
> data removed).
> Would you be able to please assist?
> Thank you very much in advance!
> 
> Copying whole description and investigation here as well:
> 
> [Documentation|https://cwiki.apache.org/confluence/display/solr/Other+Schema+Elements]
>  states:{quote}Schema 

[jira] [Commented] (SOLR-8097) Implement a builder pattern for constructing a Solrj client

2016-09-12 Thread Perrin Bignoli (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1548#comment-1548
 ] 

Perrin Bignoli commented on SOLR-8097:
--

Why is the visibility of the following constructor in CloudSolrClient:

 private CloudSolrClient(Collection zkHosts, String chroot, HttpClient httpClient,
                         LBHttpSolrClient lbSolrClient, boolean updatesToLeaders,
                         boolean directUpdatesToLeadersOnly)

set to private and not protected?

> Implement a builder pattern for constructing a Solrj client
> ---
>
> Key: SOLR-8097
> URL: https://issues.apache.org/jira/browse/SOLR-8097
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: 6.0
>Reporter: Hrishikesh Gadre
>Assignee: Anshum Gupta
> Fix For: 6.2, master (7.0)
>
> Attachments: SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch
>
>
> Currently Solrj clients (e.g. CloudSolrClient) supports multiple constructors 
> as follows,
> public CloudSolrClient(String zkHost) 
> public CloudSolrClient(String zkHost, HttpClient httpClient) 
> public CloudSolrClient(Collection zkHosts, String chroot)
> public CloudSolrClient(Collection zkHosts, String chroot, HttpClient 
> httpClient)
> public CloudSolrClient(String zkHost, boolean updatesToLeaders)
> public CloudSolrClient(String zkHost, boolean updatesToLeaders, HttpClient 
> httpClient)
> It is kind of problematic while introducing an additional parameters (since 
> we need to introduce additional constructors). Instead it will be helpful to 
> provide SolrClient Builder which can provide either default values or support 
> overriding specific parameter. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9497) HttpSolrClient.Builder Returns Unusable Connection

2016-09-12 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484373#comment-15484373
 ] 

Shalin Shekhar Mangar commented on SOLR-9497:
-

[~wmcginnis] - This is no problem. We are only trying to separate signal from 
noise. Can you give more details on how you wrote that client? Which 
dependencies did you include? It sounds like you have a self-sufficient client 
program which reproduces this problem on 6.2. Can you share it here?
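
For reference, a minimal self-contained client of the kind being asked for might look like the sketch below (base URL and collection are placeholders). The VerifyError in the report usually points at mismatched httpclient/httpcore jars on the classpath, so listing those dependency versions next to a snippet like this is the most useful part:

{code}
// A minimal, self-contained SolrJ 6.2 client of the kind requested above.
// The base URL and collection name are placeholders.
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class MinimalClient {
  public static void main(String[] args) throws Exception {
    try (SolrClient solr =
        new HttpSolrClient.Builder("http://localhost:8983/solr/techproducts").build()) {
      long hits = solr.query(new SolrQuery("*:*")).getResults().getNumFound();
      System.out.println("numFound=" + hits);
    } // try-with-resources closes the client and releases its connections
  }
}
{code}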

> HttpSolrClient.Builder Returns Unusable Connection
> --
>
> Key: SOLR-9497
> URL: https://issues.apache.org/jira/browse/SOLR-9497
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java
>Affects Versions: 6.2
> Environment: Java 1.8 Mac OSX
>Reporter: Will McGinnis
>  Labels: SolrJ
> Fix For: 6.1.1
>
>
> SolrClient solr = new HttpSolrClient.Builder(urlString).build();
> Exception in thread "main" java.lang.VerifyError: Bad return type
> Exception Details:
>   Location:
>  
> org/apache/solr/client/solrj/impl/HttpClientUtil.createClient(Lorg/apache/solr/common/params/SolrParams;Lorg/apache/http/conn/ClientConnectionManager;)Lorg/apache/http/impl/client/CloseableHttpClient;
>  @58: areturn
>   Reason:
> Type 'org/apache/http/impl/client/DefaultHttpClient' (current frame, 
> stack[0]) is not assignable to 
> 'org/apache/http/impl/client/CloseableHttpClient' (from method signature)
>   Current Frame:
> bci: @58
> flags: { }
> locals: { 'org/apache/solr/common/params/SolrParams', 
> 'org/apache/http/conn/ClientConnectionManager', 
> 'org/apache/solr/common/params/ModifiableSolrParams', 
> 'org/apache/http/impl/client/DefaultHttpClient' }
> stack: { 'org/apache/http/impl/client/DefaultHttpClient' }
>   Bytecode:
> 0x000: bb00 0359 2ab7 0004 4db2 0005 b900 0601
> 0x010: 0099 001e b200 05bb 0007 59b7 0008 1209
> 0x020: b600 0a2c b600 0bb6 000c b900 0d02 002b
> 0x030: b800 104e 2d2c b800 0f2d b0
>   Stackmap Table:
> append_frame(@47,Object[#143])
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.(HttpSolrClient.java:209)
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient$Builder.build(HttpSolrClient.java:874)
> I have tried upgrading to httpclient-4.5.2. This appears to create the same 
> problem. For now, I use this deprecated, connection code.
> return new HttpSolrClient(urlString, new SystemDefaultHttpClient());
> Eventually, this hangs the Solr server, because you run out of file handles.
> I suspect calling solrClient.close() is doing nothing.
> I tried not closing and using a static connection to Solr.
> This results in basically, the same problem of, eventually hanging the Solr 
> server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Index partition corrupted during a regular flush due to FileNotFoundException on DEL file

2016-09-12 Thread Erick Erickson
The del file should be present for each segment assuming it
has any documents that have been updated or deleted.

Of course if some process external to Solr removed it, you'd
get this error.

A less common reason is that your disk is full. Solr/Lucene
require that you have at least as much free space on your
disk as the index occupies. So if your index takes up 10G of
disk space, you must have at least 10G free. Is it possible
that you're running without enough disk space?
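
A rough way to sanity-check that rule of thumb (a sketch only; the index path is a placeholder):

{code}
// Rough check of "free space >= index size"; the index directory is a placeholder.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class DiskSpaceCheck {
  public static void main(String[] args) throws IOException {
    Path index = Paths.get("/var/solr/data/collection1/data/index");
    long indexBytes;
    try (Stream<Path> files = Files.walk(index)) {
      indexBytes = files.filter(Files::isRegularFile)
                        .mapToLong(p -> p.toFile().length())
                        .sum();
    }
    long freeBytes = Files.getFileStore(index).getUsableSpace();
    System.out.printf("index=%,d bytes, free=%,d bytes, enough=%b%n",
        indexBytes, freeBytes, freeBytes >= indexBytes);
  }
}
{code}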

If anything like that is the case, you should see errors in your
Solr logs, assuming they haven't been rolled over. Is there
anything suspicious there? Look for ERROR (all caps) and/or
"Caused by" as a start.

Best,
Erick

On Mon, Sep 12, 2016 at 3:31 AM, 郑文兴  wrote:
> Dear all,
>
>
>
> Today we found one of our index partitions was corrupted during the regular
> flush, due to the FileNotFoundException on a del file. The followings were
> the call stacks from the corresponding exception:
>
>
>
> [2016-09-12 16:40:01,801][ERROR][qtp2107666786-40854][indexEngine ] index
> [so_blog] commit ERROR:_oxep_7fa.del
> org.apache.lucene.store.FSDirectory.fileLength(FSDirectory.java:284)
> org.apache.lucene.index.SegmentInfo.sizeInBytes(SegmentInfo.java:303)
> org.apache.lucene.index.TieredMergePolicy.size(TieredMergePolicy.java:635)
> org.apache.lucene.index.TieredMergePolicy.useCompoundFile(TieredMergePolicy.java:611)
> org.apache.lucene.index.DocumentsWriter.flush(DocumentsWriter.java:593)
> org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3587)
> org.apache.lucene.index.IndexWriter.prepareCommit(IndexWriter.java:3376)
> org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:3485)
> org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3467)
> org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3451)
> org.apache.lucene.index.IndexEngine.flush(IndexEngine.java:409)
>
>
>
> My questions are:
>
> - Does anyone know the situation here? From the file system, I can’t find
> the _oxep_7fa.del.
>
> - How about the life cycle of the del file?
>
>
>
> Note: The Lucene Core is on 3.6.2.
>
>
>
> Thanks in advance for your kind advice.
>
> Best Regards, Wenxing

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7318) Graduate StandardAnalyzer out of analyzers module into core

2016-09-12 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484326#comment-15484326
 ] 

Michael McCandless commented on LUCENE-7318:


+1, thanks [~thetaphi].

> Graduate StandardAnalyzer out of analyzers module into core
> ---
>
> Key: LUCENE-7318
> URL: https://issues.apache.org/jira/browse/LUCENE-7318
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Blocker
> Fix For: master (7.0), 6.2, 6.2.1
>
> Attachments: LUCENE-7318-backwards.patch, 
> LUCENE-7318-backwards.patch, LUCENE-7318-backwards.patch, LUCENE-7318.patch
>
>
> Spinoff from LUCENE-7314:
> {{StandardAnalyzer}} has progressed substantially since we broke out the 
> analyzers module ... it now follows a real Unicode standard (UAX #29 Unicode 
> Text Segmentation).  It's also much faster than it used to be, since it 
> switched to JFlex a while back.  Many bug fixes, etc.
> I think it would make a good default for most Lucene users, and we should 
> graduate it from the analyzers module into core, and make it the default for 
> {{IndexWriter}}.
> It's really quite crazy that users must go digging in the analyzers module to 
> get started with Lucene ... we don't make them dig through the codecs module 
> to find a good default codec ...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7442) MinHashFilter's ctor should validate its args

2016-09-12 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved LUCENE-7442.

   Resolution: Fixed
 Assignee: Steve Rowe
Fix Version/s: 6.2.1

Committed to master, branch_6x and branch_6_2 (for inclusion in the 6.2.1 
release).  Thanks Dat!
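
For anyone skimming, the kind of check being added is constructor argument validation along these lines; the parameter names are assumed from the random-chain output below (note the -3), not copied from the patch:

{code}
// Illustrative only: constructor argument validation of the kind this issue adds.
// Parameter names are guesses based on the random-chain output in the report
// (e.g. the "-3" hash set size); they are not copied from the actual patch.
public final class ArgCheckExample {
  private final int hashCount, bucketCount, hashSetSize;

  public ArgCheckExample(int hashCount, int bucketCount, int hashSetSize) {
    if (hashCount < 1 || bucketCount < 1 || hashSetSize < 1) {
      throw new IllegalArgumentException(
          "hashCount, bucketCount and hashSetSize must all be >= 1, got "
              + hashCount + ", " + bucketCount + ", " + hashSetSize);
    }
    this.hashCount = hashCount;
    this.bucketCount = bucketCount;
    this.hashSetSize = hashSetSize;
  }
}
{code}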

> MinHashFilter's ctor should validate its args
> -
>
> Key: LUCENE-7442
> URL: https://issues.apache.org/jira/browse/LUCENE-7442
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Reporter: Steve Rowe
>Assignee: Steve Rowe
> Fix For: 6.2.1
>
> Attachments: LUCENE-7442.patch, LUCENE-7442.patch
>
>
> My Jenkins found this reproducing branch_6x seed:
> {noformat}
>[junit4] Suite: org.apache.lucene.analysis.core.TestRandomChains
>[junit4]   2> Exception from random analyzer: 
>[junit4]   2> charfilters=
>[junit4]   2> tokenizer=
>[junit4]   2>   org.apache.lucene.analysis.standard.StandardTokenizer()
>[junit4]   2> filters=
>[junit4]   2>   
> org.apache.lucene.analysis.minhash.MinHashFilter(ValidatingTokenFilter@6ae99167
>  
> term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,
>  5, 5, -3, true)
>[junit4]   2>   
> org.apache.lucene.analysis.bg.BulgarianStemFilter(ValidatingTokenFilter@40844352
>  
> term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,keyword=false)
>[junit4]   2> offsetsAreCorrect=true
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestRandomChains 
> -Dtests.method=testRandomChainsWithLargeStrings -Dtests.seed=4733E677EBDC28FC 
> -Dtests.slow=true -Dtests.locale=ar-OM 
> -Dtests.timezone=Atlantic/South_Georgia -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] ERROR   3.18s J4 | 
> TestRandomChains.testRandomChainsWithLargeStrings <<<
>[junit4]> Throwable #1: java.util.NoSuchElementException
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([4733E677EBDC28FC:2D685966B292080F]:0)
>[junit4]>  at java.util.TreeMap.key(TreeMap.java:1323)
>[junit4]>  at java.util.TreeMap.lastKey(TreeMap.java:297)
>[junit4]>  at java.util.TreeSet.last(TreeSet.java:401)
>[junit4]>  at 
> org.apache.lucene.analysis.minhash.MinHashFilter$FixedSizeTreeSet.add(MinHashFilter.java:325)
>[junit4]>  at 
> org.apache.lucene.analysis.minhash.MinHashFilter.incrementToken(MinHashFilter.java:159)
>[junit4]>  at 
> org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:67)
>[junit4]>  at 
> org.apache.lucene.analysis.bg.BulgarianStemFilter.incrementToken(BulgarianStemFilter.java:48)
>[junit4]>  at 
> org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:67)
>[junit4]>  at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkResetException(BaseTokenStreamTestCase.java:405)
>[junit4]>  at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:510)
>[junit4]>  at 
> org.apache.lucene.analysis.core.TestRandomChains.testRandomChainsWithLargeStrings(TestRandomChains.java:959)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): 
> {dummy=Lucene50(blocksize=128)}, docValues:{}, maxPointsInLeafNode=252, 
> maxMBSortInHeap=5.297834377897023, sim=ClassicSimilarity, locale=ar-OM, 
> timezone=Atlantic/South_Georgia
>[junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
> 1.8.0_77 (64-bit)/cpus=16,threads=1,free=395080152,total=465567744
>[junit4]   2> NOTE: All tests run in this JVM: 
> [TestDecimalDigitFilterFactory, TestMultiWordSynonyms, 
> TestReversePathHierarchyTokenizer, TestDoubleEscape, 
> TestHunspellStemFilterFactory, TestArabicNormalizationFilter, 
> TestUAX29URLEmailAnalyzer, TestSwedishLightStemFilterFactory, 
> TestBulgarianStemmer, TestASCIIFoldingFilter, 
> TestDelimitedPayloadTokenFilterFactory, TestIndonesianStemmer, TestCircumfix, 
> EdgeNGramTokenFilterTest, TestPatternTokenizer, 
> TestScandinavianFoldingFilter, TestIgnore, TestRandomChains]
>[junit4] Completed [130/272 (1!)] on J4 in 9.85s, 2 tests, 1 error <<< 
> FAILURES!
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7442) MinHashFilter's ctor should validate its args

2016-09-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484288#comment-15484288
 ] 

ASF subversion and git services commented on LUCENE-7442:
-

Commit 6fb22fcf55ce2883f45da285ee97a05e7a832579 in lucene-solr's branch 
refs/heads/branch_6x from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6fb22fc ]

LUCENE-7442: MinHashFilter's ctor should validate its args


> MinHashFilter's ctor should validate its args
> -
>
> Key: LUCENE-7442
> URL: https://issues.apache.org/jira/browse/LUCENE-7442
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Reporter: Steve Rowe
> Attachments: LUCENE-7442.patch, LUCENE-7442.patch
>
>
> My Jenkins found this reproducing branch_6x seed:
> {noformat}
>[junit4] Suite: org.apache.lucene.analysis.core.TestRandomChains
>[junit4]   2> Exception from random analyzer: 
>[junit4]   2> charfilters=
>[junit4]   2> tokenizer=
>[junit4]   2>   org.apache.lucene.analysis.standard.StandardTokenizer()
>[junit4]   2> filters=
>[junit4]   2>   
> org.apache.lucene.analysis.minhash.MinHashFilter(ValidatingTokenFilter@6ae99167
>  
> term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,
>  5, 5, -3, true)
>[junit4]   2>   
> org.apache.lucene.analysis.bg.BulgarianStemFilter(ValidatingTokenFilter@40844352
>  
> term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,keyword=false)
>[junit4]   2> offsetsAreCorrect=true
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestRandomChains 
> -Dtests.method=testRandomChainsWithLargeStrings -Dtests.seed=4733E677EBDC28FC 
> -Dtests.slow=true -Dtests.locale=ar-OM 
> -Dtests.timezone=Atlantic/South_Georgia -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] ERROR   3.18s J4 | 
> TestRandomChains.testRandomChainsWithLargeStrings <<<
>[junit4]> Throwable #1: java.util.NoSuchElementException
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([4733E677EBDC28FC:2D685966B292080F]:0)
>[junit4]>  at java.util.TreeMap.key(TreeMap.java:1323)
>[junit4]>  at java.util.TreeMap.lastKey(TreeMap.java:297)
>[junit4]>  at java.util.TreeSet.last(TreeSet.java:401)
>[junit4]>  at 
> org.apache.lucene.analysis.minhash.MinHashFilter$FixedSizeTreeSet.add(MinHashFilter.java:325)
>[junit4]>  at 
> org.apache.lucene.analysis.minhash.MinHashFilter.incrementToken(MinHashFilter.java:159)
>[junit4]>  at 
> org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:67)
>[junit4]>  at 
> org.apache.lucene.analysis.bg.BulgarianStemFilter.incrementToken(BulgarianStemFilter.java:48)
>[junit4]>  at 
> org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:67)
>[junit4]>  at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkResetException(BaseTokenStreamTestCase.java:405)
>[junit4]>  at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:510)
>[junit4]>  at 
> org.apache.lucene.analysis.core.TestRandomChains.testRandomChainsWithLargeStrings(TestRandomChains.java:959)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): 
> {dummy=Lucene50(blocksize=128)}, docValues:{}, maxPointsInLeafNode=252, 
> maxMBSortInHeap=5.297834377897023, sim=ClassicSimilarity, locale=ar-OM, 
> timezone=Atlantic/South_Georgia
>[junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
> 1.8.0_77 (64-bit)/cpus=16,threads=1,free=395080152,total=465567744
>[junit4]   2> NOTE: All tests run in this JVM: 
> [TestDecimalDigitFilterFactory, TestMultiWordSynonyms, 
> TestReversePathHierarchyTokenizer, TestDoubleEscape, 
> TestHunspellStemFilterFactory, TestArabicNormalizationFilter, 
> TestUAX29URLEmailAnalyzer, TestSwedishLightStemFilterFactory, 
> TestBulgarianStemmer, TestASCIIFoldingFilter, 
> TestDelimitedPayloadTokenFilterFactory, TestIndonesianStemmer, TestCircumfix, 
> EdgeNGramTokenFilterTest, TestPatternTokenizer, 
> TestScandinavianFoldingFilter, TestIgnore, TestRandomChains]
>[junit4] Completed [130/272 (1!)] on J4 in 9.85s, 2 tests, 1 error <<< 
> FAILURES!
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7442) MinHashFilter's ctor should validate its args

2016-09-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484286#comment-15484286
 ] 

ASF subversion and git services commented on LUCENE-7442:
-

Commit 109ec23426d6d42c7cefd10ad96a56ca504e6a9a in lucene-solr's branch 
refs/heads/branch_6_2 from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=109ec23 ]

LUCENE-7442: MinHashFilter's ctor should validate its args


> MinHashFilter's ctor should validate its args
> -
>
> Key: LUCENE-7442
> URL: https://issues.apache.org/jira/browse/LUCENE-7442
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Reporter: Steve Rowe
> Attachments: LUCENE-7442.patch, LUCENE-7442.patch
>
>
> My Jenkins found this reproducing branch_6x seed:
> {noformat}
>[junit4] Suite: org.apache.lucene.analysis.core.TestRandomChains
>[junit4]   2> Exception from random analyzer: 
>[junit4]   2> charfilters=
>[junit4]   2> tokenizer=
>[junit4]   2>   org.apache.lucene.analysis.standard.StandardTokenizer()
>[junit4]   2> filters=
>[junit4]   2>   
> org.apache.lucene.analysis.minhash.MinHashFilter(ValidatingTokenFilter@6ae99167
>  
> term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,
>  5, 5, -3, true)
>[junit4]   2>   
> org.apache.lucene.analysis.bg.BulgarianStemFilter(ValidatingTokenFilter@40844352
>  
> term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,keyword=false)
>[junit4]   2> offsetsAreCorrect=true
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestRandomChains 
> -Dtests.method=testRandomChainsWithLargeStrings -Dtests.seed=4733E677EBDC28FC 
> -Dtests.slow=true -Dtests.locale=ar-OM 
> -Dtests.timezone=Atlantic/South_Georgia -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] ERROR   3.18s J4 | 
> TestRandomChains.testRandomChainsWithLargeStrings <<<
>[junit4]> Throwable #1: java.util.NoSuchElementException
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([4733E677EBDC28FC:2D685966B292080F]:0)
>[junit4]>  at java.util.TreeMap.key(TreeMap.java:1323)
>[junit4]>  at java.util.TreeMap.lastKey(TreeMap.java:297)
>[junit4]>  at java.util.TreeSet.last(TreeSet.java:401)
>[junit4]>  at 
> org.apache.lucene.analysis.minhash.MinHashFilter$FixedSizeTreeSet.add(MinHashFilter.java:325)
>[junit4]>  at 
> org.apache.lucene.analysis.minhash.MinHashFilter.incrementToken(MinHashFilter.java:159)
>[junit4]>  at 
> org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:67)
>[junit4]>  at 
> org.apache.lucene.analysis.bg.BulgarianStemFilter.incrementToken(BulgarianStemFilter.java:48)
>[junit4]>  at 
> org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:67)
>[junit4]>  at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkResetException(BaseTokenStreamTestCase.java:405)
>[junit4]>  at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:510)
>[junit4]>  at 
> org.apache.lucene.analysis.core.TestRandomChains.testRandomChainsWithLargeStrings(TestRandomChains.java:959)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): 
> {dummy=Lucene50(blocksize=128)}, docValues:{}, maxPointsInLeafNode=252, 
> maxMBSortInHeap=5.297834377897023, sim=ClassicSimilarity, locale=ar-OM, 
> timezone=Atlantic/South_Georgia
>[junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
> 1.8.0_77 (64-bit)/cpus=16,threads=1,free=395080152,total=465567744
>[junit4]   2> NOTE: All tests run in this JVM: 
> [TestDecimalDigitFilterFactory, TestMultiWordSynonyms, 
> TestReversePathHierarchyTokenizer, TestDoubleEscape, 
> TestHunspellStemFilterFactory, TestArabicNormalizationFilter, 
> TestUAX29URLEmailAnalyzer, TestSwedishLightStemFilterFactory, 
> TestBulgarianStemmer, TestASCIIFoldingFilter, 
> TestDelimitedPayloadTokenFilterFactory, TestIndonesianStemmer, TestCircumfix, 
> EdgeNGramTokenFilterTest, TestPatternTokenizer, 
> TestScandinavianFoldingFilter, TestIgnore, TestRandomChains]
>[junit4] Completed [130/272 (1!)] on J4 in 9.85s, 2 tests, 1 error <<< 
> FAILURES!
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7442) MinHashFilter's ctor should validate its args

2016-09-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484289#comment-15484289
 ] 

ASF subversion and git services commented on LUCENE-7442:
-

Commit 36362a2a69a30918d1f6670af208a0801909304f in lucene-solr's branch 
refs/heads/master from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=36362a2 ]

LUCENE-7442: MinHashFilter's ctor should validate its args


> MinHashFilter's ctor should validate its args
> -
>
> Key: LUCENE-7442
> URL: https://issues.apache.org/jira/browse/LUCENE-7442
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Reporter: Steve Rowe
> Attachments: LUCENE-7442.patch, LUCENE-7442.patch
>
>
> My Jenkins found this reproducing branch_6x seed:
> {noformat}
>[junit4] Suite: org.apache.lucene.analysis.core.TestRandomChains
>[junit4]   2> Exception from random analyzer: 
>[junit4]   2> charfilters=
>[junit4]   2> tokenizer=
>[junit4]   2>   org.apache.lucene.analysis.standard.StandardTokenizer()
>[junit4]   2> filters=
>[junit4]   2>   
> org.apache.lucene.analysis.minhash.MinHashFilter(ValidatingTokenFilter@6ae99167
>  
> term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,
>  5, 5, -3, true)
>[junit4]   2>   
> org.apache.lucene.analysis.bg.BulgarianStemFilter(ValidatingTokenFilter@40844352
>  
> term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,keyword=false)
>[junit4]   2> offsetsAreCorrect=true
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestRandomChains 
> -Dtests.method=testRandomChainsWithLargeStrings -Dtests.seed=4733E677EBDC28FC 
> -Dtests.slow=true -Dtests.locale=ar-OM 
> -Dtests.timezone=Atlantic/South_Georgia -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] ERROR   3.18s J4 | 
> TestRandomChains.testRandomChainsWithLargeStrings <<<
>[junit4]> Throwable #1: java.util.NoSuchElementException
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([4733E677EBDC28FC:2D685966B292080F]:0)
>[junit4]>  at java.util.TreeMap.key(TreeMap.java:1323)
>[junit4]>  at java.util.TreeMap.lastKey(TreeMap.java:297)
>[junit4]>  at java.util.TreeSet.last(TreeSet.java:401)
>[junit4]>  at 
> org.apache.lucene.analysis.minhash.MinHashFilter$FixedSizeTreeSet.add(MinHashFilter.java:325)
>[junit4]>  at 
> org.apache.lucene.analysis.minhash.MinHashFilter.incrementToken(MinHashFilter.java:159)
>[junit4]>  at 
> org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:67)
>[junit4]>  at 
> org.apache.lucene.analysis.bg.BulgarianStemFilter.incrementToken(BulgarianStemFilter.java:48)
>[junit4]>  at 
> org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:67)
>[junit4]>  at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkResetException(BaseTokenStreamTestCase.java:405)
>[junit4]>  at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:510)
>[junit4]>  at 
> org.apache.lucene.analysis.core.TestRandomChains.testRandomChainsWithLargeStrings(TestRandomChains.java:959)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): 
> {dummy=Lucene50(blocksize=128)}, docValues:{}, maxPointsInLeafNode=252, 
> maxMBSortInHeap=5.297834377897023, sim=ClassicSimilarity, locale=ar-OM, 
> timezone=Atlantic/South_Georgia
>[junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
> 1.8.0_77 (64-bit)/cpus=16,threads=1,free=395080152,total=465567744
>[junit4]   2> NOTE: All tests run in this JVM: 
> [TestDecimalDigitFilterFactory, TestMultiWordSynonyms, 
> TestReversePathHierarchyTokenizer, TestDoubleEscape, 
> TestHunspellStemFilterFactory, TestArabicNormalizationFilter, 
> TestUAX29URLEmailAnalyzer, TestSwedishLightStemFilterFactory, 
> TestBulgarianStemmer, TestASCIIFoldingFilter, 
> TestDelimitedPayloadTokenFilterFactory, TestIndonesianStemmer, TestCircumfix, 
> EdgeNGramTokenFilterTest, TestPatternTokenizer, 
> TestScandinavianFoldingFilter, TestIgnore, TestRandomChains]
>[junit4] Completed [130/272 (1!)] on J4 in 9.85s, 2 tests, 1 error <<< 
> FAILURES!
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7442) MinHashFilter's ctor should validate its args

2016-09-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484287#comment-15484287
 ] 

ASF subversion and git services commented on LUCENE-7442:
-

Commit 8066a3605ccf4b91ece20810fd435f1b5c6da44f in lucene-solr's branch 
refs/heads/branch_6_2 from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8066a36 ]

LUCENE-7442: add changes entry for 6.2.1


> MinHashFilter's ctor should validate its args
> -
>
> Key: LUCENE-7442
> URL: https://issues.apache.org/jira/browse/LUCENE-7442
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Reporter: Steve Rowe
> Attachments: LUCENE-7442.patch, LUCENE-7442.patch
>
>
> My Jenkins found this reproducing branch_6x seed:
> {noformat}
>[junit4] Suite: org.apache.lucene.analysis.core.TestRandomChains
>[junit4]   2> Exception from random analyzer: 
>[junit4]   2> charfilters=
>[junit4]   2> tokenizer=
>[junit4]   2>   org.apache.lucene.analysis.standard.StandardTokenizer()
>[junit4]   2> filters=
>[junit4]   2>   
> org.apache.lucene.analysis.minhash.MinHashFilter(ValidatingTokenFilter@6ae99167
>  
> term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,
>  5, 5, -3, true)
>[junit4]   2>   
> org.apache.lucene.analysis.bg.BulgarianStemFilter(ValidatingTokenFilter@40844352
>  
> term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,keyword=false)
>[junit4]   2> offsetsAreCorrect=true
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestRandomChains 
> -Dtests.method=testRandomChainsWithLargeStrings -Dtests.seed=4733E677EBDC28FC 
> -Dtests.slow=true -Dtests.locale=ar-OM 
> -Dtests.timezone=Atlantic/South_Georgia -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] ERROR   3.18s J4 | 
> TestRandomChains.testRandomChainsWithLargeStrings <<<
>[junit4]> Throwable #1: java.util.NoSuchElementException
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([4733E677EBDC28FC:2D685966B292080F]:0)
>[junit4]>  at java.util.TreeMap.key(TreeMap.java:1323)
>[junit4]>  at java.util.TreeMap.lastKey(TreeMap.java:297)
>[junit4]>  at java.util.TreeSet.last(TreeSet.java:401)
>[junit4]>  at 
> org.apache.lucene.analysis.minhash.MinHashFilter$FixedSizeTreeSet.add(MinHashFilter.java:325)
>[junit4]>  at 
> org.apache.lucene.analysis.minhash.MinHashFilter.incrementToken(MinHashFilter.java:159)
>[junit4]>  at 
> org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:67)
>[junit4]>  at 
> org.apache.lucene.analysis.bg.BulgarianStemFilter.incrementToken(BulgarianStemFilter.java:48)
>[junit4]>  at 
> org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:67)
>[junit4]>  at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkResetException(BaseTokenStreamTestCase.java:405)
>[junit4]>  at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:510)
>[junit4]>  at 
> org.apache.lucene.analysis.core.TestRandomChains.testRandomChainsWithLargeStrings(TestRandomChains.java:959)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): 
> {dummy=Lucene50(blocksize=128)}, docValues:{}, maxPointsInLeafNode=252, 
> maxMBSortInHeap=5.297834377897023, sim=ClassicSimilarity, locale=ar-OM, 
> timezone=Atlantic/South_Georgia
>[junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
> 1.8.0_77 (64-bit)/cpus=16,threads=1,free=395080152,total=465567744
>[junit4]   2> NOTE: All tests run in this JVM: 
> [TestDecimalDigitFilterFactory, TestMultiWordSynonyms, 
> TestReversePathHierarchyTokenizer, TestDoubleEscape, 
> TestHunspellStemFilterFactory, TestArabicNormalizationFilter, 
> TestUAX29URLEmailAnalyzer, TestSwedishLightStemFilterFactory, 
> TestBulgarianStemmer, TestASCIIFoldingFilter, 
> TestDelimitedPayloadTokenFilterFactory, TestIndonesianStemmer, TestCircumfix, 
> EdgeNGramTokenFilterTest, TestPatternTokenizer, 
> TestScandinavianFoldingFilter, TestIgnore, TestRandomChains]
>[junit4] Completed [130/272 (1!)] on J4 in 9.85s, 2 tests, 1 error <<< 
> FAILURES!
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9503) NPE in Replica Placement Rules when using Overseer Role with other rules

2016-09-12 Thread Tim Owen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484252#comment-15484252
 ] 

Tim Owen commented on SOLR-9503:


As an aside, I noticed that `Rule.Operand.GREATER_THAN` seems to be missing an 
override for `public int compare(Object n1Val, Object n2Val)` .. but compare 
only appears to be used when sorting the live nodes, so maybe it's not a big 
deal?
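
For context, a minimal sketch of the kind of override being discussed is below. This is an
illustrative enum only, not the real {{Rule.Operand}} code; the shared default ordering and the
parsing helper are assumptions made for the example.

{code}
// Illustrative only -- not the real Rule.Operand. It shows an enum where one
// constant overrides a shared compare(), which is the kind of override that
// appears to be missing for GREATER_THAN.
enum Operand {
  LESS_THAN,
  GREATER_THAN {
    @Override
    public int compare(Object n1Val, Object n2Val) {
      // reverse the default ordering for "greater is better" comparisons
      return -super.compare(n1Val, n2Val);
    }
  };

  // default ordering, shared by constants that do not override it
  public int compare(Object n1Val, Object n2Val) {
    return Long.compare(parse(n1Val), parse(n2Val));
  }

  private static long parse(Object o) {
    // assumption: tag values are numeric strings; the real code is more lenient
    return o == null ? Long.MIN_VALUE : Long.parseLong(String.valueOf(o));
  }
}
{code}

If compare is only ever used to sort live nodes, a missing override would at worst sort them in
the wrong direction rather than break placement outright, which matches the "maybe not a big
deal" reading above.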

> NPE in Replica Placement Rules when using Overseer Role with other rules
> 
>
> Key: SOLR-9503
> URL: https://issues.apache.org/jira/browse/SOLR-9503
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Rules, SolrCloud
>Affects Versions: 6.2, master (7.0)
>Reporter: Tim Owen
> Attachments: SOLR-9503.patch
>
>
> The overseer role introduced in SOLR-9251 works well if there's only a single 
> Rule for replica placement e.g. {code}rule=role:!overseer{code} but when 
> combined with another rule, e.g. 
> {code}rule=role:!overseer=host:*,shard:*,replica:<2{code} it can result 
> in a NullPointerException (in Rule.tryAssignNodeToShard)
> This happens because the code builds up a nodeVsTags map, but it only has 
> entries for nodes that have values for *all* tags used among the rules. This 
> means not enough information is available to other rules when they are being 
> checked during replica assignment. In the example rules above, if we have a 
> cluster of 12 nodes and only 3 are given the Overseer role, the others do not 
> have any entry in the nodeVsTags map because they only have the host tag 
> value and not the role tag value.
> Looking at the code in ReplicaAssigner.getTagsForNodes, it is explicitly only 
> keeping entries that fulfil the constraint of having values for all tags used 
> in the rules. Possibly this constraint was suitable when rules were 
> originally introduced, but the Role tag (used for Overseers) is unlikely to 
> be present for all nodes in the cluster, and similarly for sysprop tags which 
> may or may not be set for a node.
> My patch removes this constraint, so the nodeVsTags map contains everything 
> known about all nodes, even if they have no value for a given tag. This 
> allows the rule combination above to work, and doesn't appear to cause any 
> problems with the code paths that use the nodeVsTags map. They handle null 
> values quite well, and the tests pass.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7446) Fix back-compat version check in addVersion helper script

2016-09-12 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484243#comment-15484243
 ] 

Steve Rowe commented on LUCENE-7446:


I'll take a look.

> Fix back-compat version check in addVersion helper script
> -
>
> Key: LUCENE-7446
> URL: https://issues.apache.org/jira/browse/LUCENE-7446
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Minor
> Attachments: LUCENE-7446.patch
>
>
> As part of the 5.5.3 post-release process, I was trying to bump up the number 
> to 5.5.4 on the release branch but ran into the following error:
> {code}
> Traceback (most recent call last):
>   File "dev-tools/scripts/addVersion.py", line 246, in 
> main()
>   File "dev-tools/scripts/addVersion.py", line 221, in main
> if current_version.is_back_compat_with(c.version):
>   File 
> "/Users/anshumgupta/workspace/lucene-solr/dev-tools/scripts/scriptutil.py", 
> line 75, in is_back_compat_with
> raise Exception('Back compat check disallowed for newer version: %s < %s' 
> % (self, other))
> Exception: Back compat check disallowed for newer version: 5.5.3 < 5.5.4
> {code}
> I think the check is wrong and should be reversed. I'll post the patch that I 
> used to work around this, but it would be good to have more eyes on it before I 
> commit.
> [~steve_rowe]: Can you take a look at the patch, since I guess you added this 
> check recently?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9485) {{Indexfingerprint.fromObject()}} returns wrong values if object passed was itself of type IndexFingerprint.

2016-09-12 Thread Pushkar Raste (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484231#comment-15484231
 ] 

Pushkar Raste commented on SOLR-9485:
-

Sounds reasonable

> {{Indexfingerprint.fromObject()}} returns wrong values if object passed was 
> itself of type IndexFingerprint.
> 
>
> Key: SOLR-9485
> URL: https://issues.apache.org/jira/browse/SOLR-9485
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Pushkar Raste
>Assignee: Noble Paul
>Priority: Minor
> Attachments: SOLR-9485.patch
>
>
> {{IndexFingerprint.fromObject()}} assumes the object passed to it is of type 
> {{Map}}. If it is of any other type, it simply sets {{maxVersionSpecified}} to 
> {{Long.MAX_VALUE}} and all the other attributes to {{1}}.
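
A rough sketch of the type check being suggested is below. It uses a simplified stand-in class
rather than the real {{IndexFingerprint}}, and {{fromMap}} is a hypothetical Map-based factory;
this is not the attached patch.

{code}
// Sketch only -- a simplified stand-in for IndexFingerprint, not the real
// class or the attached patch; fromMap() is a hypothetical Map-based factory.
import java.util.Map;

class FingerprintSketch {
  long maxVersionSpecified;

  static FingerprintSketch fromObject(Object o) {
    if (o instanceof FingerprintSketch) {
      // Without this branch the value falls through to the "unknown type"
      // case: maxVersionSpecified = Long.MAX_VALUE, everything else = 1.
      return (FingerprintSketch) o;
    }
    if (o instanceof Map) {
      return fromMap((Map<?, ?>) o);
    }
    throw new IllegalArgumentException("Unexpected fingerprint payload: " + o);
  }

  static FingerprintSketch fromMap(Map<?, ?> m) {
    FingerprintSketch f = new FingerprintSketch();
    f.maxVersionSpecified = ((Number) m.get("maxVersionSpecified")).longValue();
    return f;
  }
}
{code}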



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7442) MinHashFilter's ctor should validate its args

2016-09-12 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-7442:
---
Summary: MinHashFilter's ctor should validate its args  (was: 
MinHashFilter.FixedSizeTreeSet.add() calls TreeSet.last() without first testing 
for emptiness, under which condition NoSuchElementException is thrown)

> MinHashFilter's ctor should validate its args
> -
>
> Key: LUCENE-7442
> URL: https://issues.apache.org/jira/browse/LUCENE-7442
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Reporter: Steve Rowe
> Attachments: LUCENE-7442.patch, LUCENE-7442.patch
>
>
> My Jenkins found this reproducing branch_6x seed:
> {noformat}
>[junit4] Suite: org.apache.lucene.analysis.core.TestRandomChains
>[junit4]   2> Exception from random analyzer: 
>[junit4]   2> charfilters=
>[junit4]   2> tokenizer=
>[junit4]   2>   org.apache.lucene.analysis.standard.StandardTokenizer()
>[junit4]   2> filters=
>[junit4]   2>   
> org.apache.lucene.analysis.minhash.MinHashFilter(ValidatingTokenFilter@6ae99167
>  
> term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,
>  5, 5, -3, true)
>[junit4]   2>   
> org.apache.lucene.analysis.bg.BulgarianStemFilter(ValidatingTokenFilter@40844352
>  
> term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,keyword=false)
>[junit4]   2> offsetsAreCorrect=true
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestRandomChains 
> -Dtests.method=testRandomChainsWithLargeStrings -Dtests.seed=4733E677EBDC28FC 
> -Dtests.slow=true -Dtests.locale=ar-OM 
> -Dtests.timezone=Atlantic/South_Georgia -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] ERROR   3.18s J4 | 
> TestRandomChains.testRandomChainsWithLargeStrings <<<
>[junit4]> Throwable #1: java.util.NoSuchElementException
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([4733E677EBDC28FC:2D685966B292080F]:0)
>[junit4]>  at java.util.TreeMap.key(TreeMap.java:1323)
>[junit4]>  at java.util.TreeMap.lastKey(TreeMap.java:297)
>[junit4]>  at java.util.TreeSet.last(TreeSet.java:401)
>[junit4]>  at 
> org.apache.lucene.analysis.minhash.MinHashFilter$FixedSizeTreeSet.add(MinHashFilter.java:325)
>[junit4]>  at 
> org.apache.lucene.analysis.minhash.MinHashFilter.incrementToken(MinHashFilter.java:159)
>[junit4]>  at 
> org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:67)
>[junit4]>  at 
> org.apache.lucene.analysis.bg.BulgarianStemFilter.incrementToken(BulgarianStemFilter.java:48)
>[junit4]>  at 
> org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:67)
>[junit4]>  at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkResetException(BaseTokenStreamTestCase.java:405)
>[junit4]>  at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:510)
>[junit4]>  at 
> org.apache.lucene.analysis.core.TestRandomChains.testRandomChainsWithLargeStrings(TestRandomChains.java:959)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): 
> {dummy=Lucene50(blocksize=128)}, docValues:{}, maxPointsInLeafNode=252, 
> maxMBSortInHeap=5.297834377897023, sim=ClassicSimilarity, locale=ar-OM, 
> timezone=Atlantic/South_Georgia
>[junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
> 1.8.0_77 (64-bit)/cpus=16,threads=1,free=395080152,total=465567744
>[junit4]   2> NOTE: All tests run in this JVM: 
> [TestDecimalDigitFilterFactory, TestMultiWordSynonyms, 
> TestReversePathHierarchyTokenizer, TestDoubleEscape, 
> TestHunspellStemFilterFactory, TestArabicNormalizationFilter, 
> TestUAX29URLEmailAnalyzer, TestSwedishLightStemFilterFactory, 
> TestBulgarianStemmer, TestASCIIFoldingFilter, 
> TestDelimitedPayloadTokenFilterFactory, TestIndonesianStemmer, TestCircumfix, 
> EdgeNGramTokenFilterTest, TestPatternTokenizer, 
> TestScandinavianFoldingFilter, TestIgnore, TestRandomChains]
>[junit4] Completed [130/272 (1!)] on J4 in 9.85s, 2 tests, 1 error <<< 
> FAILURES!
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9446) Just replicated index goes into replication recovery on leader failure even if index was not changed

2016-09-12 Thread Pushkar Raste (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484221#comment-15484221
 ] 

Pushkar Raste commented on SOLR-9446:
-

When I ran the test a couple of times, I did see that even a freshly replicated 
index could become the leader. I do think that check is unnecessary for the 
test; irrespective of which node becomes the leader, we should never go into 
replication if the index was unchanged.

> Just replicated index goes into replication recovery on leader failure even 
> if index was not changed
> 
>
> Key: SOLR-9446
> URL: https://issues.apache.org/jira/browse/SOLR-9446
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: replication (java)
>Reporter: Pushkar Raste
>Assignee: Noble Paul
>Priority: Minor
>
>  We noticed this issue while migrating a Solr index from machines {{A1, A2 and 
> A3}} to {{B1, B2, B3}}. We followed these steps (and there were no 
> updates during the migration process).
> * Index had replicas on machines {{A1, A2, A3}}. Let's say {{A1}} was the 
> leader at the time
> * We added 3 more replicas {{B1, B2 and B3}}. These nodes synced with the leader by 
> replication. These fresh nodes do not have tlogs.
> * We shut down one of the old nodes ({{A3}}). 
> * We then shut down the leader ({{A1}})
> * New leader got elected (let's say {{A2}}) became the new leader
> * Leader asked all the replicas to sync with it
> * Fresh nodes (the ones without tlogs) first tried PeerSync, but since there was 
> no frame of reference, PeerSync failed and the fresh nodes fell back to 
> replication.
> Although replication would not copy all the segments again, it seems like we 
> can short-circuit the sync to put nodes back into the active state as soon as possible. 
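
As an editorial aside, the suggested short circuit could, in spirit, boil down to comparing commit
metadata on both sides before deciding to replicate. The sketch below is illustrative only: it uses
plain Lucene calls and the {{commitTimeMsec}} key mentioned in related issues, and it assumes the
leader's value is fetched by some unspecified means; it is not the actual Solr recovery code.

{code}
// Illustrative sketch of the suggested short circuit, not the actual Solr
// recovery code. The leaderCommitTime argument stands in for a hypothetical
// call that asks the leader for the commitTimeMsec in its latest commit point.
import java.io.IOException;
import java.util.List;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexCommit;
import org.apache.lucene.store.Directory;

final class ReplicationShortCircuit {
  static boolean indexUnchanged(Directory localIndex, String leaderCommitTime)
      throws IOException {
    List<IndexCommit> commits = DirectoryReader.listCommits(localIndex);
    IndexCommit latest = commits.get(commits.size() - 1);
    String localCommitTime = latest.getUserData().get("commitTimeMsec");
    // If both sides carry the same commit timestamp, a full index fetch is
    // unnecessary and the replica could go active immediately.
    return localCommitTime != null && localCommitTime.equals(leaderCommitTime);
  }
}
{code}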



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9502) All writers should automatically write MapSerializable as Map

2016-09-12 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-9502:
-
Attachment: SOLR-9502.patch

> All writers should automatically write MapSerializable as Map
> -
>
> Key: SOLR-9502
> URL: https://issues.apache.org/jira/browse/SOLR-9502
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
> Attachments: SOLR-9502.patch
>
>
> Move the MapSerializable class to {{o.a.s.common}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7442) MinHashFilter.FixedSizeTreeSet.add() calls TreeSet.last() without first testing for emptiness, under which condition NoSuchElementException is thrown

2016-09-12 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484199#comment-15484199
 ] 

Steve Rowe commented on LUCENE-7442:


Thanks [~caomanhdat], both seeds pass with your latest patch.  I'll commit now.

> MinHashFilter.FixedSizeTreeSet.add() calls TreeSet.last() without first 
> testing for emptiness, under which condition NoSuchElementException is thrown
> -
>
> Key: LUCENE-7442
> URL: https://issues.apache.org/jira/browse/LUCENE-7442
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Reporter: Steve Rowe
> Attachments: LUCENE-7442.patch, LUCENE-7442.patch
>
>
> My Jenkins found this reproducing branch_6x seed:
> {noformat}
>[junit4] Suite: org.apache.lucene.analysis.core.TestRandomChains
>[junit4]   2> Exception from random analyzer: 
>[junit4]   2> charfilters=
>[junit4]   2> tokenizer=
>[junit4]   2>   org.apache.lucene.analysis.standard.StandardTokenizer()
>[junit4]   2> filters=
>[junit4]   2>   
> org.apache.lucene.analysis.minhash.MinHashFilter(ValidatingTokenFilter@6ae99167
>  
> term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,
>  5, 5, -3, true)
>[junit4]   2>   
> org.apache.lucene.analysis.bg.BulgarianStemFilter(ValidatingTokenFilter@40844352
>  
> term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,keyword=false)
>[junit4]   2> offsetsAreCorrect=true
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestRandomChains 
> -Dtests.method=testRandomChainsWithLargeStrings -Dtests.seed=4733E677EBDC28FC 
> -Dtests.slow=true -Dtests.locale=ar-OM 
> -Dtests.timezone=Atlantic/South_Georgia -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] ERROR   3.18s J4 | 
> TestRandomChains.testRandomChainsWithLargeStrings <<<
>[junit4]> Throwable #1: java.util.NoSuchElementException
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([4733E677EBDC28FC:2D685966B292080F]:0)
>[junit4]>  at java.util.TreeMap.key(TreeMap.java:1323)
>[junit4]>  at java.util.TreeMap.lastKey(TreeMap.java:297)
>[junit4]>  at java.util.TreeSet.last(TreeSet.java:401)
>[junit4]>  at 
> org.apache.lucene.analysis.minhash.MinHashFilter$FixedSizeTreeSet.add(MinHashFilter.java:325)
>[junit4]>  at 
> org.apache.lucene.analysis.minhash.MinHashFilter.incrementToken(MinHashFilter.java:159)
>[junit4]>  at 
> org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:67)
>[junit4]>  at 
> org.apache.lucene.analysis.bg.BulgarianStemFilter.incrementToken(BulgarianStemFilter.java:48)
>[junit4]>  at 
> org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:67)
>[junit4]>  at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkResetException(BaseTokenStreamTestCase.java:405)
>[junit4]>  at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:510)
>[junit4]>  at 
> org.apache.lucene.analysis.core.TestRandomChains.testRandomChainsWithLargeStrings(TestRandomChains.java:959)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): 
> {dummy=Lucene50(blocksize=128)}, docValues:{}, maxPointsInLeafNode=252, 
> maxMBSortInHeap=5.297834377897023, sim=ClassicSimilarity, locale=ar-OM, 
> timezone=Atlantic/South_Georgia
>[junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
> 1.8.0_77 (64-bit)/cpus=16,threads=1,free=395080152,total=465567744
>[junit4]   2> NOTE: All tests run in this JVM: 
> [TestDecimalDigitFilterFactory, TestMultiWordSynonyms, 
> TestReversePathHierarchyTokenizer, TestDoubleEscape, 
> TestHunspellStemFilterFactory, TestArabicNormalizationFilter, 
> TestUAX29URLEmailAnalyzer, TestSwedishLightStemFilterFactory, 
> TestBulgarianStemmer, TestASCIIFoldingFilter, 
> TestDelimitedPayloadTokenFilterFactory, TestIndonesianStemmer, TestCircumfix, 
> EdgeNGramTokenFilterTest, TestPatternTokenizer, 
> TestScandinavianFoldingFilter, TestIgnore, TestRandomChains]
>[junit4] Completed [130/272 (1!)] on J4 in 9.85s, 2 tests, 1 error <<< 
> FAILURES!
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9242) Collection level backup/restore should provide a param for specifying the repository implementation it should use

2016-09-12 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484186#comment-15484186
 ] 

Hrishikesh Gadre commented on SOLR-9242:


[~varunthacker] Now that SOLR-9444 is resolved, should we close this JIRA? Or 
are there any recent test failures due to this functionality?

> Collection level backup/restore should provide a param for specifying the 
> repository implementation it should use
> -
>
> Key: SOLR-9242
> URL: https://issues.apache.org/jira/browse/SOLR-9242
> Project: Solr
>  Issue Type: Improvement
>Reporter: Hrishikesh Gadre
>Assignee: Varun Thacker
> Fix For: 6.2, master (7.0)
>
> Attachments: 7726.log.gz, SOLR-9242.patch, SOLR-9242.patch, 
> SOLR-9242.patch, SOLR-9242.patch, SOLR-9242.patch, SOLR-9242_followup.patch, 
> SOLR-9242_followup2.patch
>
>
> SOLR-7374 provides BackupRepository interface to enable storing Solr index 
> data to a configured file-system (e.g. HDFS, local file-system etc.). This 
> JIRA is to track the work required to extend this functionality at the 
> collection level.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-9344) BasicAuthIntegrationTest test failures on update

2016-09-12 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward resolved SOLR-9344.
-
Resolution: Fixed

Looks like this has finally been fixed!

> BasicAuthIntegrationTest test failures on update
> 
>
> Key: SOLR-9344
> URL: https://issues.apache.org/jira/browse/SOLR-9344
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security, Tests
>Affects Versions: trunk
>Reporter: Gregory Chanan
>Assignee: Alan Woodward
> Fix For: 6.3
>
> Attachments: SOLR-9344-httpconfigurer.patch, 
> SOLR-9344-httpconfigurer.patch, SOLR-9344.patch
>
>
> I've seen this a number of times while developing SOLR-9200 and SOLR-9324; 
> there's also a public failure here: 
> http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17372/
> {code}
> org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: IOException 
> occured when talking to server at: 
> http://127.0.0.1:45882/solr/testSolrCloudCollection_shard1_replica2
>   at 
> __randomizedtesting.SeedInfo.seed([99BB0D0378978FA8:A463A32F4079D1D8]:0)
>   at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:760)
>   at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1172)
>   at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1061)
>   at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:997)
>   at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
>   at 
> org.apache.solr.security.BasicAuthIntegrationTest.doExtraTests(BasicAuthIntegrationTest.java:193)
>   at 
> org.apache.solr.cloud.TestMiniSolrCloudClusterBase.testCollectionCreateSearchDelete(TestMiniSolrCloudClusterBase.java:196)
>   at 
> org.apache.solr.cloud.TestMiniSolrCloudClusterBase.testBasics(TestMiniSolrCloudClusterBase.java:79)
>   at 
> jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
> Method)
>   at 
> jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
>   at 
> jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:533)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
>   at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
>   at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
>   at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
>   at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
>   at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
>   at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
>   at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
>   at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
>   at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
>   at 
> 

[jira] [Updated] (SOLR-9484) The modify collection API should wait for the modified properties to show up in the cluster state

2016-09-12 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-9484:
---
Attachment: SOLR-9484.patch

Initial patch for this issue.

> The modify collection API should wait for the modified properties to show up 
> in the cluster state
> -
>
> Key: SOLR-9484
> URL: https://issues.apache.org/jira/browse/SOLR-9484
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.2
>Reporter: Shalin Shekhar Mangar
>Assignee: Cao Manh Dat
>  Labels: difficulty-easy, impact-medium
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9484.patch
>
>
> The modify collection API doesn't wait for the updated properties to show up 
> in the cluster state. Say you increase maxShardsPerNode for a collection 
> using this API; if you try to add a replica immediately after the modify 
> collection API returns, the overseer sometimes doesn't see the updated 
> property and refuses to add a new replica.
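
A minimal sketch of the kind of wait the fix needs is below; the attached patch presumably does
this on the server side. The {{Supplier}} stands in for however the caller reads the property back
from the cluster state, so no particular SolrJ or Overseer API is assumed, and the poll interval
and timeout are arbitrary.

{code}
// Illustrative sketch only -- not the attached patch. Poll until the modified
// property becomes visible, or give up after a timeout.
import java.util.Objects;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.function.Supplier;

final class ModifyCollectionWait {
  static void waitForPropertyValue(Supplier<Object> readProperty,
                                   Object expected,
                                   long timeoutMs)
      throws InterruptedException, TimeoutException {
    long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMs);
    while (System.nanoTime() < deadline) {
      if (Objects.equals(readProperty.get(), expected)) {
        return; // the modified property is now visible in the cluster state
      }
      Thread.sleep(100); // wait for the overseer to publish the change
    }
    throw new TimeoutException("Modified property did not show up within "
        + timeoutMs + " ms");
  }
}
{code}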



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_102) - Build # 6113 - Still Unstable!

2016-09-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6113/
Java: 64bit/jdk1.8.0_102 -XX:-UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader

Error Message:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[http://127.0.0.1:63542/forceleader_test_collection_shard1_replica3]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[http://127.0.0.1:63542/forceleader_test_collection_shard1_replica3]
at 
__randomizedtesting.SeedInfo.seed([6EFCE4A230C865B5:886BD062094A9CD4]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:769)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1161)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1050)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:992)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.sendDocsWithRetry(AbstractFullDistribZkTestBase.java:753)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.sendDocsWithRetry(AbstractFullDistribZkTestBase.java:741)
at 
org.apache.solr.cloud.ForceLeaderTest.sendDoc(ForceLeaderTest.java:424)
at 
org.apache.solr.cloud.ForceLeaderTest.assertSendDocFails(ForceLeaderTest.java:315)
at 
org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader(ForceLeaderTest.java:110)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Commented] (SOLR-9408) Add solr commit data in TreeMergeRecordWriter

2016-09-12 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484093#comment-15484093
 ] 

Shalin Shekhar Mangar commented on SOLR-9408:
-

This seems unrelated to the change here. I have seen this test fail on Jenkins 
as well with the message "soft wasn't fast enough". I think we can safely 
ignore it. The rest of the patch looks good and we should commit it for 6.2.1.

> Add solr commit data in TreeMergeRecordWriter
> -
>
> Key: SOLR-9408
> URL: https://issues.apache.org/jira/browse/SOLR-9408
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - MapReduce
>Reporter: Jessica Cheng Mallet
>Assignee: Varun Thacker
>  Labels: mapreduce, solrcloud
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9408.patch, SOLR-9408.patch
>
>
> The lucene index produced by TreeMergeRecordWriter when the segments are 
> merged doesn't contain Solr's commit data, specifically, commitTimeMsec.
> This means that when this index is subsequently loaded into SolrCloud and if 
> the index stays unchanged so no newer commits occurs, ADDREPLICA will appear 
> to succeed but will not actually do any full sync due to SOLR-9369, resulting 
> in adding an empty index as a replica.
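
A hedged sketch of how commit user data such as {{commitTimeMsec}} can be attached to a Lucene
commit is below. It shows only the plain Lucene 6.x calls involved ({{IndexWriter.setCommitData}}
followed by {{commit}}); it is not the committed TreeMergeOutputFormat change, and the key name is
taken from the description above.

{code}
// Rough sketch of attaching commit metadata to a merged index; not the actual
// patch, just the core Lucene 6.x calls involved.
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import org.apache.lucene.index.IndexWriter;

final class CommitDataExample {
  static void commitWithTimestamp(IndexWriter writer) throws IOException {
    Map<String, String> userData = new HashMap<>();
    // SolrCloud's sync checks look for this key; without it, ADDREPLICA can
    // skip the full sync described in SOLR-9369 and add an empty replica.
    userData.put("commitTimeMsec", String.valueOf(System.currentTimeMillis()));
    writer.setCommitData(userData);
    writer.commit(); // the user data is stored with this commit point
  }
}
{code}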



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9408) Add solr commit data in TreeMergeRecordWriter

2016-09-12 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484074#comment-15484074
 ] 

Varun Thacker commented on SOLR-9408:
-

I ran into this failure a couple of times in my testing, but it doesn't fail all 
the time. I'll dig into it this week to see what the issue is:

{{ant test  -Dtestcase=SoftAutoCommitTest 
-Dtests.method=testSoftAndHardCommitMaxTimeMixedAdds 
-Dtests.seed=F1E9CC578C23E178 -Dtests.slow=true -Dtests.locale=sr-Latn-RS 
-Dtests.timezone=America/Argentina/Salta -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8}}

> Add solr commit data in TreeMergeRecordWriter
> -
>
> Key: SOLR-9408
> URL: https://issues.apache.org/jira/browse/SOLR-9408
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - MapReduce
>Reporter: Jessica Cheng Mallet
>Assignee: Varun Thacker
>  Labels: mapreduce, solrcloud
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9408.patch, SOLR-9408.patch
>
>
> The lucene index produced by TreeMergeRecordWriter when the segments are 
> merged doesn't contain Solr's commit data, specifically, commitTimeMsec.
> This means that when this index is subsequently loaded into SolrCloud and if 
> the index stays unchanged so no newer commits occurs, ADDREPLICA will appear 
> to succeed but will not actually do any full sync due to SOLR-9369, resulting 
> in adding an empty index as a replica.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release PyLucene 6.2.0 (rc2)

2016-09-12 Thread Andi Vajda

> On Sep 12, 2016, at 14:56, Jan Høydahl  wrote:
> 
> I downloaded and installed the JDK from Oracle, and point to it with JAVA_HOME.

Not at my computer right now so I can't be too specific, but setting JAVA_HOME is 
not good enough. You need to run some script from Oracle to make this Java 
version the default for your OS.
A quick search found this for Java 7: 
http://docs.oracle.com/javase/7/docs/webnotes/install/mac/mac-jdk.html
I remember doing something similar for Java 8.

Andi..

> 
> Looks to me that in setup.py, LFLAGS get populated with correct -rpath to the 
> lib folder including libjava.dylib but I’m not that into make, so I’m afraid 
> I’m stuck and cannot vote on the release yet...
> 
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
> 
>> 12. sep. 2016 kl. 14.26 skrev Andi Vajda :
>> 
>> 
>> 
>> On Sep 12, 2016, at 13:25, Jan Høydahl  wrote:
 It looks like JCC can't find java (failing to find libjava.dylib). Which 
 java did you install ?
>>> Java 1.8.0_102
When building JCC, the java version (library or framework) is reported. 
 What did it say ?
>>> 
>>> found JAVAHOME = 
>>> /Library/Java/JavaVirtualMachines/jdk1.8.0_102.jdk/Contents/Home
>> 
>> Well, I haven't tried this combination of OS and Java before. You need to 
>> find where libjava.dylib actually is and see that the settings in JCC's 
>> setup.py file are correct and correspond to the path and name of that 
>> library.
>> 
>> Is this a java you installed or did it come with that macOS beta ?
>> 
>> Andi..
>> 
>>> 
You also need to make sure JCC is built in shared mode and use a modern 
 setuptools (version >= 8).
>>> pip install --upgrade setuptools
>>> 
>>> Successfully uninstalled setuptools-23.1.0
>>> Successfully installed setuptools-27.1.2
>>> 
>>> Tried again with updated setuptools, same result.
>>> /usr/local/opt/python/bin/python2.7: 
>>> dlopen(/usr/local/lib/python2.7/site-packages/JCC-2.22-py2.7-macosx-10.12-x86_64.egg/jcc/_jcc.so,
>>>  2): Library not loaded: @rpath/libjava.dylib
>>> 
>>> Makefile settings:
>>> PREFIX_PYTHON=/usr/local/Cellar/python/2.7.12
>>> ANT=ant
>>> PYTHON=$(PREFIX_PYTHON)/bin/python
>>> JCC=$(PYTHON) -m jcc.__main__ --shared --arch x86_64
>>> NUM_FILES=8
>>> 
>>> GNU Make 3.81
>>> 
>>> --
>>> Jan Høydahl, search solution architect
>>> Cominvent AS - www.cominvent.com
>>> 
> 12. sep. 2016 kl. 12.04 skrev Andi Vajda :
> 
> 
> On Sep 12, 2016, at 09:54, Jan Høydahl  wrote:
> 
> I’m trying to test on my mac.
> 
> Successfully built and installed JCC.
> Trying to build pylucene, “make” fails with this error:
> 
>> BUILD SUCCESSFUL
>> Total time: 1 second
>> ICU not installed
>> /usr/local/Cellar/python/2.7.12/bin/python -m jcc.__main__ --shared 
>> --arch x86_64 --jar 
>> lucene-java-6.2.0/lucene/build/core/lucene-core-6.2.0.jar --jar 
>> lucene-java-6.2.0/lucene/build/analysis/common/lucene-analyzers-common-6.2.0.jar
>>  --jar lucene-java-6.2.0/lucene/build/memory/lucene-memory-6.2.0.jar 
>> --jar 
>> lucene-java-6.2.0/lucene/build/highlighter/lucene-highlighter-6.2.0.jar 
>> --jar build/jar/extensions.jar --jar 
>> lucene-java-6.2.0/lucene/build/queries/lucene-queries-6.2.0.jar --jar 
>> lucene-java-6.2.0/lucene/build/queryparser/lucene-queryparser-6.2.0.jar 
>> --jar lucene-java-6.2.0/lucene/build/sandbox/lucene-sandbox-6.2.0.jar 
>> --jar 
>> lucene-java-6.2.0/lucene/build/analysis/stempel/lucene-analyzers-stempel-6.2.0.jar
>>  --jar lucene-java-6.2.0/lucene/build/grouping/lucene-grouping-6.2.0.jar 
>> --jar lucene-java-6.2.0/lucene/build/join/lucene-join-6.2.0.jar --jar 
>> lucene-java-6.2.0/lucene/build/facet/lucene-facet-6.2.0.jar --jar 
>> lucene-java-6.2.0/lucene/build/suggest/lucene-suggest-6.2.0.jar --jar 
>> lucene-java-6.2.0/lucene/build/expressions/lucene-expressions-6.2.0.jar 
>> --jar 
>> lucene-java-6.2.0/lucene/build/analysis/kuromoji/lucene-analyzers-kuromoji-6.2.0.jar
>>  --jar lucene-java-6.2.0/lucene/build/misc/lucene-misc-6.2.0.jar  
>> --use_full_names --include 
>> lucene-java-6.2.0/lucene/expressions/lib/antlr4-runtime-4.5.1-1.jar 
>> --include lucene-java-6.2.0/lucene/expressions/lib/asm-5.1.jar --include 
>> lucene-java-6.2.0/lucene/expressions/lib/asm-commons-5.1.jar --package 
>> java.lang java.lang.System java.lang.Runtime --package java.util 
>> java.util.Arrays java.util.Collections java.util.HashMap 
>> java.util.HashSet java.util.TreeSet java.lang.IllegalStateException 
>> java.lang.IndexOutOfBoundsException java.util.NoSuchElementException 
>> java.text.SimpleDateFormat java.text.DecimalFormat java.text.Collator 
>> --package java.util.concurrent java.util.concurrent.Executors --package 
>> java.util.regex --package java.io 

[jira] [Updated] (SOLR-9503) NPE in Replica Placement Rules when using Overseer Role with other rules

2016-09-12 Thread Tim Owen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Owen updated SOLR-9503:
---
Attachment: SOLR-9503.patch

> NPE in Replica Placement Rules when using Overseer Role with other rules
> 
>
> Key: SOLR-9503
> URL: https://issues.apache.org/jira/browse/SOLR-9503
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Rules, SolrCloud
>Affects Versions: 6.2, master (7.0)
>Reporter: Tim Owen
> Attachments: SOLR-9503.patch
>
>
> The overseer role introduced in SOLR-9251 works well if there's only a single 
> Rule for replica placement e.g. {code}rule=role:!overseer{code} but when 
> combined with another rule, e.g. 
> {code}rule=role:!overseer=host:*,shard:*,replica:<2{code} it can result 
> in a NullPointerException (in Rule.tryAssignNodeToShard)
> This happens because the code builds up a nodeVsTags map, but it only has 
> entries for nodes that have values for *all* tags used among the rules. This 
> means not enough information is available to other rules when they are being 
> checked during replica assignment. In the example rules above, if we have a 
> cluster of 12 nodes and only 3 are given the Overseer role, the others do not 
> have any entry in the nodeVsTags map because they only have the host tag 
> value and not the role tag value.
> Looking at the code in ReplicaAssigner.getTagsForNodes, it is explicitly only 
> keeping entries that fulfil the constraint of having values for all tags used 
> in the rules. Possibly this constraint was suitable when rules were 
> originally introduced, but the Role tag (used for Overseers) is unlikely to 
> be present for all nodes in the cluster, and similarly for sysprop tags which 
> may or may not be set for a node.
> My patch removes this constraint, so the nodeVsTags map contains everything 
> known about all nodes, even if they have no value for a given tag. This 
> allows the rule combination above to work, and doesn't appear to cause any 
> problems with the code paths that use the nodeVsTags map. They handle null 
> values quite well, and the tests pass.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9503) NPE in Replica Placement Rules when using Overseer Role with other rules

2016-09-12 Thread Tim Owen (JIRA)
Tim Owen created SOLR-9503:
--

 Summary: NPE in Replica Placement Rules when using Overseer Role 
with other rules
 Key: SOLR-9503
 URL: https://issues.apache.org/jira/browse/SOLR-9503
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Rules, SolrCloud
Affects Versions: 6.2, master (7.0)
Reporter: Tim Owen


The overseer role introduced in SOLR-9251 works well if there's only a single 
Rule for replica placement e.g. {code}rule=role:!overseer{code} but when 
combined with another rule, e.g. 
{code}rule=role:!overseer=host:*,shard:*,replica:<2{code} it can result in 
a NullPointerException (in Rule.tryAssignNodeToShard)

This happens because the code builds up a nodeVsTags map, but it only has 
entries for nodes that have values for *all* tags used among the rules. This 
means not enough information is available to other rules when they are being 
checked during replica assignment. In the example rules above, if we have a 
cluster of 12 nodes and only 3 are given the Overseer role, the others do not 
have any entry in the nodeVsTags map because they only have the host tag value 
and not the role tag value.

Looking at the code in ReplicaAssigner.getTagsForNodes, it is explicitly only 
keeping entries that fulfil the constraint of having values for all tags used 
in the rules. Possibly this constraint was suitable when rules were originally 
introduced, but the Role tag (used for Overseers) is unlikely to be present for 
all nodes in the cluster, and similarly for sysprop tags which may or may not be 
set for a node.

My patch removes this constraint, so the nodeVsTags map contains everything 
known about all nodes, even if they have no value for a given tag. This allows 
the rule combination above to work, and doesn't appear to cause any problems 
with the code paths that use the nodeVsTags map. They handle null values quite 
well, and the tests pass.
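
To make the described change concrete, here is an illustrative before/after sketch; the names and
map shapes are simplified and do not match {{ReplicaAssigner.getTagsForNodes}} exactly.

{code}
// Illustrative sketch only. It contrasts the constraint being removed (keep a
// node only if it has values for *all* tags) with the patched behaviour (keep
// whatever is known about every node, and let rules tolerate null values).
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

final class TagsForNodesSketch {
  // before: nodes missing any tag (e.g. "role" on non-overseer nodes) are dropped
  static Map<String, Map<String, Object>> strict(
      Map<String, Map<String, Object>> fetched, Set<String> tagNames) {
    Map<String, Map<String, Object>> result = new HashMap<>();
    fetched.forEach((node, tags) -> {
      if (tags.keySet().containsAll(tagNames)) {
        result.put(node, tags);
      }
    });
    return result;
  }

  // after: every node keeps its partial tag map
  static Map<String, Map<String, Object>> lenient(
      Map<String, Map<String, Object>> fetched) {
    return new HashMap<>(fetched);
  }
}
{code}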




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7318) Graduate StandardAnalyzer out of analyzers module into core

2016-09-12 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484049#comment-15484049
 ] 

Uwe Schindler commented on LUCENE-7318:
---

I fixed the test comment locally, won't upload new patch!

> Graduate StandardAnalyzer out of analyzers module into core
> ---
>
> Key: LUCENE-7318
> URL: https://issues.apache.org/jira/browse/LUCENE-7318
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Blocker
> Fix For: master (7.0), 6.2, 6.2.1
>
> Attachments: LUCENE-7318-backwards.patch, 
> LUCENE-7318-backwards.patch, LUCENE-7318-backwards.patch, LUCENE-7318.patch
>
>
> Spinoff from LUCENE-7314:
> {{StandardAnalyzer}} has progressed substantially since we broke out the 
> analyzers module ... it now follows a real Unicode standard (UAX #29 Unicode 
> Text Segmentation).  It's also much faster than it used to be, since it 
> switched to JFlex a while back.  Many bug fixes, etc.
> I think it would make a good default for most Lucene users, and we should 
> graduate it from the analyzers module into core, and make it the default for 
> {{IndexWriter}}.
> It's really quite crazy that users must go digging in the analyzers module to 
> get started with Lucene ... we don't make them dig through the codecs module 
> to find a good default codec ...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7318) Graduate StandardAnalyzer out of analyzers module into core

2016-09-12 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-7318:
--
Attachment: LUCENE-7318-backwards.patch

New patch with all classes deprecated, except StopFilter and LowerCaseFilter, 
which stay at their original location for better documentation and consistency 
with the filter factories. Documentation for this was added.

I will commit this later to the 6.x and 6.2 branches, and forward-port the 2 filters 
to master, so we also get good documentation in master.

The remaining stuff can be discussed in the other issue, which is now unrelated.
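
For anyone unfamiliar with this kind of backwards-compatibility patch, a generic sketch of its
shape is below; the class names are invented and this is not the actual LUCENE-7318 patch.

{code}
// Purely illustrative; the names are invented. It only shows the usual shape
// of such a change: the old location survives as a thin, deprecated subclass
// of the class that moved, so existing code keeps compiling and gets a warning.
class CoreFooAnalyzer {            // stand-in for the relocated class in core
  // ... the real analysis logic lives here ...
}

/** @deprecated relocated; use the core class directly. */
@Deprecated
class LegacyFooAnalyzer extends CoreFooAnalyzer {
  // no new behaviour; only the old (now deprecated) entry point is preserved
}
{code}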

> Graduate StandardAnalyzer out of analyzers module into core
> ---
>
> Key: LUCENE-7318
> URL: https://issues.apache.org/jira/browse/LUCENE-7318
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>Priority: Blocker
> Fix For: master (7.0), 6.2, 6.2.1
>
> Attachments: LUCENE-7318-backwards.patch, 
> LUCENE-7318-backwards.patch, LUCENE-7318-backwards.patch, LUCENE-7318.patch
>
>
> Spinoff from LUCENE-7314:
> {{StandardAnalyzer}} has progressed substantially since we broke out the 
> analyzers module ... it now follows a real Unicode standard (UAX #29 Unicode 
> Text Segmentation).  It's also much faster than it used to be, since it 
> switched to JFlex a while back.  Many bug fixes, etc.
> I think it would make a good default for most Lucene users, and we should 
> graduate it from the analyzers module into core, and make it the default for 
> {{IndexWriter}}.
> It's really quite crazy that users must go digging in the analyzers module to 
> get started with Lucene ... we don't make them dig through the codecs module 
> to find a good default codec ...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release PyLucene 6.2.0 (rc2)

2016-09-12 Thread Jan Høydahl
I downloaded and installed the JDK from Oracle, and point to it with JAVA_HOME.

Looks to me that in setup.py, LFLAGS get populated with correct -rpath to the 
lib folder including libjava.dylib but I’m not that into make, so I’m afraid 
I’m stuck and cannot vote on the release yet...

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> 12. sep. 2016 kl. 14.26 skrev Andi Vajda :
> 
> 
> 
> On Sep 12, 2016, at 13:25, Jan Høydahl  wrote:
>>> It looks like JCC can't find java (failing to find libjava.dylib). Which 
>>> java did you install ?
>> Java 1.8.0_102
>> When building JCC, the java version (library or framework) is reported. 
>>> What did it say ?
>> 
>> found JAVAHOME = 
>> /Library/Java/JavaVirtualMachines/jdk1.8.0_102.jdk/Contents/Home
> 
> Well, I haven't tried this combination of OS and Java before. You need to 
> find where libjava.dylib actually is and see that the settings in JCC's 
> setup.py file are correct and correspond to the path and name of that library.
> 
> Is this a java you installed or did it come with that macOS beta ?
> 
> Andi..
> 
>> 
>>> You also need to make sure JCC is built in shared mode and use a modern 
>>> setuptools (version >= 8).
>> pip install --upgrade setuptools
>> 
>> Successfully uninstalled setuptools-23.1.0
>> Successfully installed setuptools-27.1.2
>> 
>> Tried again with updated setuptools, same result.
>> /usr/local/opt/python/bin/python2.7: 
>> dlopen(/usr/local/lib/python2.7/site-packages/JCC-2.22-py2.7-macosx-10.12-x86_64.egg/jcc/_jcc.so,
>>  2): Library not loaded: @rpath/libjava.dylib
>> 
>> Makefile settings:
>> PREFIX_PYTHON=/usr/local/Cellar/python/2.7.12
>> ANT=ant
>> PYTHON=$(PREFIX_PYTHON)/bin/python
>> JCC=$(PYTHON) -m jcc.__main__ --shared --arch x86_64
>> NUM_FILES=8
>> 
>> GNU Make 3.81
>> 
>> --
>> Jan Høydahl, search solution architect
>> Cominvent AS - www.cominvent.com
>> 
 12. sep. 2016 kl. 12.04 skrev Andi Vajda :
 
 
 On Sep 12, 2016, at 09:54, Jan Høydahl  wrote:
 
 I’m trying to test on my mac.
 
 Successfully built and installed JCC.
 Trying to build pylucene, “make” fails with this error:
 
> BUILD SUCCESSFUL
> Total time: 1 second
> ICU not installed
> /usr/local/Cellar/python/2.7.12/bin/python -m jcc.__main__ --shared 
> --arch x86_64 --jar 
> lucene-java-6.2.0/lucene/build/core/lucene-core-6.2.0.jar --jar 
> lucene-java-6.2.0/lucene/build/analysis/common/lucene-analyzers-common-6.2.0.jar
>  --jar lucene-java-6.2.0/lucene/build/memory/lucene-memory-6.2.0.jar 
> --jar 
> lucene-java-6.2.0/lucene/build/highlighter/lucene-highlighter-6.2.0.jar 
> --jar build/jar/extensions.jar --jar 
> lucene-java-6.2.0/lucene/build/queries/lucene-queries-6.2.0.jar --jar 
> lucene-java-6.2.0/lucene/build/queryparser/lucene-queryparser-6.2.0.jar 
> --jar lucene-java-6.2.0/lucene/build/sandbox/lucene-sandbox-6.2.0.jar 
> --jar 
> lucene-java-6.2.0/lucene/build/analysis/stempel/lucene-analyzers-stempel-6.2.0.jar
>  --jar lucene-java-6.2.0/lucene/build/grouping/lucene-grouping-6.2.0.jar 
> --jar lucene-java-6.2.0/lucene/build/join/lucene-join-6.2.0.jar --jar 
> lucene-java-6.2.0/lucene/build/facet/lucene-facet-6.2.0.jar --jar 
> lucene-java-6.2.0/lucene/build/suggest/lucene-suggest-6.2.0.jar --jar 
> lucene-java-6.2.0/lucene/build/expressions/lucene-expressions-6.2.0.jar 
> --jar 
> lucene-java-6.2.0/lucene/build/analysis/kuromoji/lucene-analyzers-kuromoji-6.2.0.jar
>  --jar lucene-java-6.2.0/lucene/build/misc/lucene-misc-6.2.0.jar  
> --use_full_names --include 
> lucene-java-6.2.0/lucene/expressions/lib/antlr4-runtime-4.5.1-1.jar 
> --include lucene-java-6.2.0/lucene/expressions/lib/asm-5.1.jar --include 
> lucene-java-6.2.0/lucene/expressions/lib/asm-commons-5.1.jar --package 
> java.lang java.lang.System java.lang.Runtime --package java.util 
> java.util.Arrays java.util.Collections java.util.HashMap 
> java.util.HashSet java.util.TreeSet java.lang.IllegalStateException 
> java.lang.IndexOutOfBoundsException java.util.NoSuchElementException 
> java.text.SimpleDateFormat java.text.DecimalFormat java.text.Collator 
> --package java.util.concurrent java.util.concurrent.Executors --package 
> java.util.regex --package java.io java.io.StringReader --package 
> java.nio.file java.nio.file.Path java.nio.file.Files java.nio.file.Paths 
> --exclude 
> org.apache.lucene.sandbox.queries.regex.JakartaRegexpCapabilities 
> --exclude org.apache.regexp.RegexpTunnel --exclude 
> org.apache.lucene.store.WindowsDirectory --exclude 
> org.apache.lucene.store.NativePosixUtil --python lucene --mapping 
> org.apache.lucene.document.Document 
> 'get:(Ljava/lang/String;)Ljava/lang/String;' --mapping 
> java.util.Properties 

[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_102) - Build # 17806 - Unstable!

2016-09-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17806/
Java: 32bit/jdk1.8.0_102 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.util.TestSolrCLIRunExample.testInteractiveSolrCloudExample

Error Message:
Expected 10 to be found in the testCloudExamplePrompt collection but only found 
9

Stack Trace:
java.lang.AssertionError: Expected 10 to be found in the testCloudExamplePrompt 
collection but only found 9
at 
__randomizedtesting.SeedInfo.seed([35386553C80D07AC:EE498599FF78C2CA]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.apache.solr.util.TestSolrCLIRunExample.testInteractiveSolrCloudExample(TestSolrCLIRunExample.java:457)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11255 lines...]
   [junit4] Suite: org.apache.solr.util.TestSolrCLIRunExample
   [junit4]   2> Creating dataDir: 

Re: VOTE: Solr Ref Guide for 6.2, RC1

2016-09-12 Thread Cassandra Targett
Thanks everyone! This vote has passed & I'll finish up the release
process today.

On Mon, Sep 12, 2016 at 7:10 AM, Mikhail Khludnev  wrote:
> +1
>
> On Mon, Sep 12, 2016 at 3:59 AM, Shalin Shekhar Mangar
>  wrote:
>>
>> +1
>>
>> On Thu, Sep 8, 2016 at 9:04 PM, Cassandra Targett 
>> wrote:
>>>
>>> After a respin, please VOTE to release the Apache Solr Reference Guide
>>> for 6.2.
>>>
>>> The artifacts are available at:
>>> https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-6.2-RC1/.
>>>
>>> $ more apache-solr-ref-guide-6.2.pdf.sha1
>>> b070de9fb7806795cd1d55f1dd15d0a5a374d0b2  apache-solr-ref-guide-6.2.pdf
>>>
>>> Here's my +1.
>>>
>>> Thanks,
>>> Cassandra
>>
>>
>>
>>
>> --
>> Regards,
>> Shalin Shekhar Mangar.
>
>
>
>
> --
> Sincerely yours
> Mikhail Khludnev

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release PyLucene 6.2.0 (rc2)

2016-09-12 Thread Andi Vajda


On Sep 12, 2016, at 13:25, Jan Høydahl  wrote:
>> It looks like JCC can't find java (failing to find libjava.dylib). Which 
>> java did you install ?
> Java 1.8.0_102
>> When building JCC, the java version (library or framework) is reported. What 
>> did it say ?
> 
> found JAVAHOME = 
> /Library/Java/JavaVirtualMachines/jdk1.8.0_102.jdk/Contents/Home

Well, I haven't tried this combination of OS and Java before. You need to find 
where libjava.dylib actually is and see that the settings in JCC's setup.py file 
are correct and correspond to the path and name of that library.

Is this a java you installed or did it come with that macOS beta ?

Andi..

> 
>> You also need to make sure JCC is built in shared mode and use a modern 
>> setuptools (version >= 8).
> pip install --upgrade setuptools
> 
> Successfully uninstalled setuptools-23.1.0
> Successfully installed setuptools-27.1.2
> 
> Tried again with updated setuptools, same result.
> /usr/local/opt/python/bin/python2.7: 
> dlopen(/usr/local/lib/python2.7/site-packages/JCC-2.22-py2.7-macosx-10.12-x86_64.egg/jcc/_jcc.so,
>  2): Library not loaded: @rpath/libjava.dylib
> 
> Makefile settings:
> PREFIX_PYTHON=/usr/local/Cellar/python/2.7.12
> ANT=ant
> PYTHON=$(PREFIX_PYTHON)/bin/python
> JCC=$(PYTHON) -m jcc.__main__ --shared --arch x86_64
> NUM_FILES=8
> 
> GNU Make 3.81
> 
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
> 
>>> 12. sep. 2016 kl. 12.04 skrev Andi Vajda :
>>> 
>>> 
>>> On Sep 12, 2016, at 09:54, Jan Høydahl  wrote:
>>> 
>>> I’m trying to test on my mac.
>>> 
>>> Successfully built and installed JCC.
>>> Trying to build pylucene, “make” fails with this error:
>>> 
 BUILD SUCCESSFUL
 Total time: 1 second
 ICU not installed
 /usr/local/Cellar/python/2.7.12/bin/python -m jcc.__main__ --shared --arch 
 x86_64 --jar lucene-java-6.2.0/lucene/build/core/lucene-core-6.2.0.jar 
 --jar 
 lucene-java-6.2.0/lucene/build/analysis/common/lucene-analyzers-common-6.2.0.jar
  --jar lucene-java-6.2.0/lucene/build/memory/lucene-memory-6.2.0.jar --jar 
 lucene-java-6.2.0/lucene/build/highlighter/lucene-highlighter-6.2.0.jar 
 --jar build/jar/extensions.jar --jar 
 lucene-java-6.2.0/lucene/build/queries/lucene-queries-6.2.0.jar --jar 
 lucene-java-6.2.0/lucene/build/queryparser/lucene-queryparser-6.2.0.jar 
 --jar lucene-java-6.2.0/lucene/build/sandbox/lucene-sandbox-6.2.0.jar 
 --jar 
 lucene-java-6.2.0/lucene/build/analysis/stempel/lucene-analyzers-stempel-6.2.0.jar
  --jar lucene-java-6.2.0/lucene/build/grouping/lucene-grouping-6.2.0.jar 
 --jar lucene-java-6.2.0/lucene/build/join/lucene-join-6.2.0.jar --jar 
 lucene-java-6.2.0/lucene/build/facet/lucene-facet-6.2.0.jar --jar 
 lucene-java-6.2.0/lucene/build/suggest/lucene-suggest-6.2.0.jar --jar 
 lucene-java-6.2.0/lucene/build/expressions/lucene-expressions-6.2.0.jar 
 --jar 
 lucene-java-6.2.0/lucene/build/analysis/kuromoji/lucene-analyzers-kuromoji-6.2.0.jar
  --jar lucene-java-6.2.0/lucene/build/misc/lucene-misc-6.2.0.jar  
 --use_full_names --include 
 lucene-java-6.2.0/lucene/expressions/lib/antlr4-runtime-4.5.1-1.jar 
 --include lucene-java-6.2.0/lucene/expressions/lib/asm-5.1.jar --include 
 lucene-java-6.2.0/lucene/expressions/lib/asm-commons-5.1.jar --package 
 java.lang java.lang.System java.lang.Runtime --package java.util 
 java.util.Arrays java.util.Collections java.util.HashMap java.util.HashSet 
 java.util.TreeSet java.lang.IllegalStateException 
 java.lang.IndexOutOfBoundsException java.util.NoSuchElementException 
 java.text.SimpleDateFormat java.text.DecimalFormat java.text.Collator 
 --package java.util.concurrent java.util.concurrent.Executors --package 
 java.util.regex --package java.io java.io.StringReader --package 
 java.nio.file java.nio.file.Path java.nio.file.Files java.nio.file.Paths 
 --exclude 
 org.apache.lucene.sandbox.queries.regex.JakartaRegexpCapabilities 
 --exclude org.apache.regexp.RegexpTunnel --exclude 
 org.apache.lucene.store.WindowsDirectory --exclude 
 org.apache.lucene.store.NativePosixUtil --python lucene --mapping 
 org.apache.lucene.document.Document 
 'get:(Ljava/lang/String;)Ljava/lang/String;' --mapping 
 java.util.Properties 'getProperty:(Ljava/lang/String;)Ljava/lang/String;' 
 --sequence java.util.AbstractList 'size:()I' 'get:(I)Ljava/lang/Object;' 
 org.apache.lucene.index.IndexWriter:getReader --version 6.2.0 --module 
 python/collections.py --module python/ICUNormalizer2Filter.py --module 
 python/ICUFoldingFilter.py --module python/ICUTransformFilter.py  --files 
 8 --build 
 /usr/local/opt/python/bin/python2.7: 
 dlopen(/usr/local/lib/python2.7/site-packages/JCC-2.22-py2.7-macosx-10.12-x86_64.egg/jcc/_jcc.so,
  2): Library not loaded: 

Re: VOTE: Solr Ref Guide for 6.2, RC1

2016-09-12 Thread Mikhail Khludnev
+1

On Mon, Sep 12, 2016 at 3:59 AM, Shalin Shekhar Mangar <
shalinman...@gmail.com> wrote:

> +1
>
> On Thu, Sep 8, 2016 at 9:04 PM, Cassandra Targett 
> wrote:
>
>> After a respin, please VOTE to release the Apache Solr Reference Guide
>> for 6.2.
>>
>> The artifacts are available at: https://dist.apache.org/repos/
>> dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-6.2-RC1/.
>>
>> $ more apache-solr-ref-guide-6.2.pdf.sha1
>> b070de9fb7806795cd1d55f1dd15d0a5a374d0b2  apache-solr-ref-guide-6.2.pdf
>>
>> Here's my +1.
>>
>> Thanks,
>> Cassandra
>>
>
>
>
> --
> Regards,
> Shalin Shekhar Mangar.
>



-- 
Sincerely yours
Mikhail Khludnev


[jira] [Resolved] (LUCENE-7440) Document skipping on large indexes is broken

2016-09-12 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved LUCENE-7440.
---
Resolution: Fixed
  Assignee: Yonik Seeley

> Document skipping on large indexes is broken
> 
>
> Key: LUCENE-7440
> URL: https://issues.apache.org/jira/browse/LUCENE-7440
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 2.2
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Critical
> Fix For: master (7.0), 6.3, 6.2.1
>
> Attachments: LUCENE-7440.patch, LUCENE-7440.patch
>
>
> Large skips on large indexes fail.
> Anything that uses skips (such as a boolean query, filtered queries, faceted 
> queries, join queries, etc) can trigger this bug on a sufficiently large 
> index.
> The bug is a numeric overflow in MultiLevelSkipList that has been present 
> since inception (Lucene 2.2).  It may not manifest until one has a single 
> segment with more than ~1.8B documents, and a large skip is performed on that 
> segment.
> Typical stack trace on Lucene7-dev:
> {code}
> java.lang.ArrayIndexOutOfBoundsException: 110
>   at 
> org.apache.lucene.codecs.MultiLevelSkipListReader$SkipBuffer.readByte(MultiLevelSkipListReader.java:297)
>   at org.apache.lucene.store.DataInput.readVInt(DataInput.java:125)
>   at 
> org.apache.lucene.codecs.lucene50.Lucene50SkipReader.readSkipData(Lucene50SkipReader.java:180)
>   at 
> org.apache.lucene.codecs.MultiLevelSkipListReader.loadNextSkip(MultiLevelSkipListReader.java:163)
>   at 
> org.apache.lucene.codecs.MultiLevelSkipListReader.skipTo(MultiLevelSkipListReader.java:133)
>   at 
> org.apache.lucene.codecs.lucene50.Lucene50PostingsReader$BlockDocsEnum.advance(Lucene50PostingsReader.java:421)
>   at YCS_skip7$1.testSkip(YCS_skip7.java:307)
> {code}
> Typical stack trace on Lucene4.10.3:
> {code}
> 2016-08-31 18:57:17,460 ERROR org.apache.solr.servlet.SolrDispatchFilter: 
> null:java.lang.ArrayIndexOutOfBoundsException: 75
>  at 
> org.apache.lucene.codecs.MultiLevelSkipListReader$SkipBuffer.readByte(MultiLevelSkipListReader.java:301)
>  at org.apache.lucene.store.DataInput.readVInt(DataInput.java:122)
>  at 
> org.apache.lucene.codecs.lucene41.Lucene41SkipReader.readSkipData(Lucene41SkipReader.java:194)
>  at 
> org.apache.lucene.codecs.MultiLevelSkipListReader.loadNextSkip(MultiLevelSkipListReader.java:168)
>  at 
> org.apache.lucene.codecs.MultiLevelSkipListReader.skipTo(MultiLevelSkipListReader.java:138)
>  at 
> org.apache.lucene.codecs.lucene41.Lucene41PostingsReader$BlockDocsEnum.advance(Lucene41PostingsReader.java:506)
>  at org.apache.lucene.search.TermScorer.advance(TermScorer.java:85)
> [...]
>  at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:621)
> [...]
>  at org.apache.solr.core.SolrCore.execute(SolrCore.java:2004)
> {code}
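
Not part of the original report, but for readers unfamiliar with this failure mode, here is a minimal, self-contained Java sketch of the 32-bit overflow pattern described above. It is illustrative only: the class, fields and numbers are invented and this is not the actual MultiLevelSkipListReader code. The point is that multiplying two int-sized quantities before widening to long wraps around once a single segment approaches ~2B documents.

{code}
// Hypothetical demo of the overflow class described in LUCENE-7440; names are invented.
public class SkipPointerOverflowDemo {
  public static void main(String[] args) {
    long docCount = 1_900_000_000L; // a single segment with ~1.9B documents
    int skipInterval = 128;         // hypothetical skip interval

    // Buggy pattern: the multiplication happens in 32-bit arithmetic and wraps
    // around before the result is widened to long, yielding a negative offset.
    long broken = skipInterval * (int) docCount;

    // Fixed pattern: widen to long before multiplying.
    long fixed = (long) skipInterval * docCount;

    System.out.println("broken offset = " + broken); // negative, nonsensical
    System.out.println("fixed offset  = " + fixed);  // 243,200,000,000
  }
}
{code}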



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7417) Highlighting fails for MultiPhraseQuery's with one clause

2016-09-12 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved LUCENE-7417.
---
Resolution: Fixed

> Highlighting fails for MultiPhraseQuery's with one clause
> -
>
> Key: LUCENE-7417
> URL: https://issues.apache.org/jira/browse/LUCENE-7417
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Affects Versions: 5.2.1, 5.x, 5.5.2
>Reporter: Thomas Kappler
>Assignee: David Smiley
> Fix For: 6.3, 5.5.4, 6.2.1
>
> Attachments: multiphrasequery_singleclause_highlighting.patch
>
>
> This bug is the same issue as LUCENE-7231, just for MultiPhraseQuery instead 
> of PhraseQuery. The fix is the same as well. To reproduce, change the test 
> that was committed for LUCENE-7231 to use a MultiPhraseQuery. It results in 
> the same error
> {{java.lang.IllegalArgumentException: Less than 2 subSpans.size():1}}
> I have a patch including a test against branch_5.x, it just needs to go 
> through legal before I can post it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7417) Highlighting fails for MultiPhraseQuery's with one clause

2016-09-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15483924#comment-15483924
 ] 

ASF subversion and git services commented on LUCENE-7417:
-

Commit cddeb9dc3c8322b4149b910f509a93be37f5c17b in lucene-solr's branch 
refs/heads/branch_6_2 from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=cddeb9d ]

LUCENE-7417: Highlighter WSTE didn't handle single-term MultiPhraseQuery.
Also updated to Java 5 for-each in this method.

(cherry picked from commit 3966f99)

(cherry picked from commit 514bb1b)


> Highlighting fails for MultiPhraseQuery's with one clause
> -
>
> Key: LUCENE-7417
> URL: https://issues.apache.org/jira/browse/LUCENE-7417
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Affects Versions: 5.2.1, 5.x, 5.5.2
>Reporter: Thomas Kappler
>Assignee: David Smiley
> Fix For: 6.3, 5.5.4, 6.2.1
>
> Attachments: multiphrasequery_singleclause_highlighting.patch
>
>
> This bug is the same issue as LUCENE-7231, just for MultiPhraseQuery instead 
> of PhraseQuery. The fix is the same as well. To reproduce, change the test 
> that was committed for LUCENE-7231 to use a MultiPhraseQuery. It results in 
> the same error
> {{java.lang.IllegalArgumentException: Less than 2 subSpans.size():1}}
> I have a patch including a test against branch_5.x, it just needs to go 
> through legal before I can post it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: VOTE: Solr Ref Guide for 6.2, RC1

2016-09-12 Thread Varun Thacker
+1

On Mon, Sep 12, 2016 at 6:29 AM, Shalin Shekhar Mangar <
shalinman...@gmail.com> wrote:

> +1
>
> On Thu, Sep 8, 2016 at 9:04 PM, Cassandra Targett 
> wrote:
>
>> After a respin, please VOTE to release the Apache Solr Reference Guide
>> for 6.2.
>>
>> The artifacts are available at: https://dist.apache.org/repos/
>> dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-6.2-RC1/.
>>
>> $ more apache-solr-ref-guide-6.2.pdf.sha1
>> b070de9fb7806795cd1d55f1dd15d0a5a374d0b2  apache-solr-ref-guide-6.2.pdf
>>
>> Here's my +1.
>>
>> Thanks,
>> Cassandra
>>
>
>
>
> --
> Regards,
> Shalin Shekhar Mangar.
>


[jira] [Commented] (LUCENE-7440) Document skipping on large indexes is broken

2016-09-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15483925#comment-15483925
 ] 

ASF subversion and git services commented on LUCENE-7440:
-

Commit c7b3e9ae3695a13dacb81312db0d470ada273808 in lucene-solr's branch 
refs/heads/branch_6_2 from [~yo...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c7b3e9a ]

LUCENE-7440: fix MultiLevelSkipListReader overflow

(cherry picked from commit cf72eeb)


> Document skipping on large indexes is broken
> 
>
> Key: LUCENE-7440
> URL: https://issues.apache.org/jira/browse/LUCENE-7440
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 2.2
>Reporter: Yonik Seeley
>Priority: Critical
> Fix For: master (7.0), 6.3, 6.2.1
>
> Attachments: LUCENE-7440.patch, LUCENE-7440.patch
>
>
> Large skips on large indexes fail.
> Anything that uses skips (such as a boolean query, filtered queries, faceted 
> queries, join queries, etc) can trigger this bug on a sufficiently large 
> index.
> The bug is a numeric overflow in MultiLevelSkipList that has been present 
> since inception (Lucene 2.2).  It may not manifest until one has a single 
> segment with more than ~1.8B documents, and a large skip is performed on that 
> segment.
> Typical stack trace on Lucene7-dev:
> {code}
> java.lang.ArrayIndexOutOfBoundsException: 110
>   at 
> org.apache.lucene.codecs.MultiLevelSkipListReader$SkipBuffer.readByte(MultiLevelSkipListReader.java:297)
>   at org.apache.lucene.store.DataInput.readVInt(DataInput.java:125)
>   at 
> org.apache.lucene.codecs.lucene50.Lucene50SkipReader.readSkipData(Lucene50SkipReader.java:180)
>   at 
> org.apache.lucene.codecs.MultiLevelSkipListReader.loadNextSkip(MultiLevelSkipListReader.java:163)
>   at 
> org.apache.lucene.codecs.MultiLevelSkipListReader.skipTo(MultiLevelSkipListReader.java:133)
>   at 
> org.apache.lucene.codecs.lucene50.Lucene50PostingsReader$BlockDocsEnum.advance(Lucene50PostingsReader.java:421)
>   at YCS_skip7$1.testSkip(YCS_skip7.java:307)
> {code}
> Typical stack trace on Lucene4.10.3:
> {code}
> 2016-08-31 18:57:17,460 ERROR org.apache.solr.servlet.SolrDispatchFilter: 
> null:java.lang.ArrayIndexOutOfBoundsException: 75
>  at 
> org.apache.lucene.codecs.MultiLevelSkipListReader$SkipBuffer.readByte(MultiLevelSkipListReader.java:301)
>  at org.apache.lucene.store.DataInput.readVInt(DataInput.java:122)
>  at 
> org.apache.lucene.codecs.lucene41.Lucene41SkipReader.readSkipData(Lucene41SkipReader.java:194)
>  at 
> org.apache.lucene.codecs.MultiLevelSkipListReader.loadNextSkip(MultiLevelSkipListReader.java:168)
>  at 
> org.apache.lucene.codecs.MultiLevelSkipListReader.skipTo(MultiLevelSkipListReader.java:138)
>  at 
> org.apache.lucene.codecs.lucene41.Lucene41PostingsReader$BlockDocsEnum.advance(Lucene41PostingsReader.java:506)
>  at org.apache.lucene.search.TermScorer.advance(TermScorer.java:85)
> [...]
>  at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:621)
> [...]
>  at org.apache.solr.core.SolrCore.execute(SolrCore.java:2004)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-9430) Locale in <propertyWriter/> (language tag "en-US" or legacy name "en_US" does not work, English works)

2016-09-12 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-9430.
-
Resolution: Fixed

> Locale in <propertyWriter/> (language tag "en-US" or legacy name "en_US" does not work, English works)
> 
>
> Key: SOLR-9430
> URL: https://issues.apache.org/jira/browse/SOLR-9430
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - DataImportHandler, update
>Affects Versions: 6.1
> Environment: all
>Reporter: Boris Steiner
>Assignee: Uwe Schindler
>Priority: Minor
>  Labels: DIH, SimpePropertiesWriter, locale, propertyWriter
> Fix For: 6.2.1, 6.3, 6.x, master (7.0)
>
> Attachments: SOLR-9430.patch, SOLR-9430.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> having a DIH with a DB datasource and a propertyWriter such as:
>   <propertyWriter type="SimplePropertiesWriter" locale="en_US" />
> does not work with a locale in the form en_US as mentioned in the documentation; 
> the locale is looked up via Locale.getDisplayName(), which returns a human-readable 
> representation, as opposed to Locale.toLanguageTag(), which returns a form such as en-US.
> A propertyWriter with the locale in this form works:
>   <propertyWriter type="SimplePropertiesWriter" locale="Slovak" />
> Problematic line code:
> https://github.com/apache/lucene-solr/blob/branch_6_1/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/SimplePropertiesWriter.java#L95
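
As an aside (not from the original report), here is a hedged Java sketch of the mismatch described above. The helper below is hypothetical and only mirrors the lookup style at the linked line; it is not the DIH code itself. Matching against Locale.getDisplayName() only finds human-readable names such as "English" or "Slovak" (assuming an English default locale), while Locale.forLanguageTag() accepts BCP 47 tags such as "en-US".

{code}
import java.util.Locale;

public class LocaleLookupDemo {
  // Hypothetical helper mirroring a display-name based lookup.
  static Locale findByDisplayName(String name) {
    for (Locale l : Locale.getAvailableLocales()) {
      if (l.getDisplayName().equals(name)) {
        return l;
      }
    }
    return null; // "en-US" and "en_US" never match a display name
  }

  public static void main(String[] args) {
    System.out.println(findByDisplayName("English"));    // en (on an English-locale JVM)
    System.out.println(findByDisplayName("en-US"));      // null -> the reported symptom
    System.out.println(Locale.forLanguageTag("en-US"));  // en_US, works for language tags
  }
}
{code}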



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6744) fl renaming / alias of uniqueKey field generates null pointer exception in SolrCloud configuration

2016-09-12 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-6744.
-
Resolution: Fixed

> fl renaming / alias of uniqueKey field generates null pointer exception in 
> SolrCloud configuration
> --
>
> Key: SOLR-6744
> URL: https://issues.apache.org/jira/browse/SOLR-6744
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10.1
> Environment: Multiple replicas on SolrCloud config.  This specific 
> example with 4 shard, 3 replica per shard config.  This bug does NOT exist 
> when query is handled by single core.
>Reporter: Garth Grimm
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Fix For: 6.2.1, 6.3, master (7.0)
>
> Attachments: SOLR-6744.patch, SOLR-6744.patch, SOLR-6744.patch
>
>
> If trying to rename the uniqueKey field using 'fl' in a distributed query 
> (ie: SolrCloud config), an NPE is thrown.
> The workaround is to redundantly request the uniqueKey field, once with the 
> desired alias, and once with the original name.
> Example...
> http://localhost:8983/solr/cloudcollection/select?q=*%3A*&wt=xml&indent=true&fl=key:id
> Work around:
> http://localhost:8983/solr/cloudcollection/select?q=*%3A*&wt=xml&indent=true&fl=key:id&fl=id
> Error w/o work around...
> {code}
> <int name="status">500</int><int name="QTime">11</int><lst name="params"><str name="q">*:*</str><str name="indent">true</str><str name="fl">key:id</str><str name="wt">xml</str></lst><str name="trace">java.lang.NullPointerException
>   at 
> org.apache.solr.handler.component.QueryComponent.returnFields(QueryComponent.java:1257)
>   at 
> org.apache.solr.handler.component.QueryComponent.handleRegularResponses(QueryComponent.java:720)
>   at 
> org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:695)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:324)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1967)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
>   at org.eclipse.jetty.server.Server.handle(Server.java:368)
>   at 
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
>   at 
> org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
>   at 
> org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
>   at 
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
>   at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
>   at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
>   at 
> org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
>   at 
> org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
>   at java.lang.Thread.run(Thread.java:745)
> 500
> {code}
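
Not part of the original report, but a hedged SolrJ sketch of the workaround described above (the Solr URL and collection name are placeholders): requesting the uniqueKey field both under the alias and under its original name avoids the NullPointerException on distributed queries.

{code}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class FlAliasWorkaroundDemo {
  public static void main(String[] args) throws Exception {
    // Placeholder base URL and collection, for illustration only.
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/cloudcollection").build()) {

      SolrQuery query = new SolrQuery("*:*");
      // Workaround: ask for the uniqueKey under the alias *and* its real name;
      // fl=key:id alone triggers the NPE above on a multi-shard collection.
      query.setFields("key:id", "id");

      QueryResponse rsp = client.query(query);
      System.out.println(rsp.getResults().getNumFound());
    }
  }
}
{code}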



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

