[JENKINS] Lucene-Solr-6.x-Windows (32bit/jdk1.8.0_102) - Build # 463 - Still Unstable!

2016-09-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/463/
Java: 32bit/jdk1.8.0_102 -server -XX:+UseParallelGC

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.schema.TestManagedSchemaAPI

Error Message:
ObjectTracker found 10 object(s) that were not released!!! 
[MockDirectoryWrapper, MDCAwareThreadPoolExecutor, TransactionLog, 
MockDirectoryWrapper, MockDirectoryWrapper, MockDirectoryWrapper, 
MockDirectoryWrapper, MDCAwareThreadPoolExecutor, MockDirectoryWrapper, 
TransactionLog]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 10 object(s) that were not 
released!!! [MockDirectoryWrapper, MDCAwareThreadPoolExecutor, TransactionLog, 
MockDirectoryWrapper, MockDirectoryWrapper, MockDirectoryWrapper, 
MockDirectoryWrapper, MDCAwareThreadPoolExecutor, MockDirectoryWrapper, 
TransactionLog]
at __randomizedtesting.SeedInfo.seed([89CE62CE0521968B]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:257)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.schema.TestManagedSchemaAPI

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.schema.TestManagedSchemaAPI_89CE62CE0521968B-001\tempDir-001\node1\testschemaapi_shard1_replica2\data\tlog\tlog.001:
 java.nio.file.FileSystemException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.schema.TestManagedSchemaAPI_89CE62CE0521968B-001\tempDir-001\node1\testschemaapi_shard1_replica2\data\tlog\tlog.001:
 The process cannot access the file because it is being used by another 
process. 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.schema.TestManagedSchemaAPI_89CE62CE0521968B-001\tempDir-001\node1\testschemaapi_shard1_replica2\data\tlog:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.schema.TestManagedSchemaAPI_89CE62CE0521968B-001\tempDir-001\node1\testschemaapi_shard1_replica2\data\tlog

C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.schema.TestManagedSchemaAPI_89CE62CE0521968B-001\tempDir-001\node1\testschemaapi_shard1_replica2\data:
 java.nio.file.DirectoryNotEmptyException: 
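For context on what this suite-level assertion means: the test framework registers long-lived resources (directories, executors, transaction logs) when they are opened and expects each to be released before teardown. A minimal, self-contained sketch of that accounting pattern follows; the class and method names are illustrative stand-ins, not Solr's actual ObjectReleaseTracker:

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative tracker: resources register on open and deregister on close;
// the suite teardown asserts that nothing is left behind.
public class ReleaseTracker {
  private static final Map<Object, String> OPEN = new ConcurrentHashMap<>();

  public static void track(Object resource) {
    OPEN.put(resource, resource.getClass().getSimpleName());
  }

  public static void release(Object resource) {
    OPEN.remove(resource);
  }

  // A non-empty map at teardown produces a message of the same shape as the
  // failure above: "ObjectTracker found N object(s) that were not released!!! [...]".
  public static void assertAllReleased() {
    if (!OPEN.isEmpty()) {
      throw new AssertionError("ObjectTracker found " + OPEN.size()
          + " object(s) that were not released!!! " + OPEN.values());
    }
  }
}
{code}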

[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 584 - Failure

2016-09-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/584/

No tests ran.

Build Log:
[...truncated 40561 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.02 sec (8.6 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-7.0.0-src.tgz...
   [smoker] 29.9 MB in 0.26 sec (114.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.tgz...
   [smoker] 64.3 MB in 0.08 sec (846.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.zip...
   [smoker] 74.8 MB in 0.09 sec (870.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-7.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6012 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6012 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 214 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] Releases that don't seem to be tested:
   [smoker]   6.2.1
   [smoker] Traceback (most recent call last):
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/dev-tools/scripts/smokeTestRelease.py",
 line 1438, in <module>
   [smoker] main()
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/dev-tools/scripts/smokeTestRelease.py",
 line 1382, in main
   [smoker] smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, 
c.is_signed, ' '.join(c.test_args))
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/dev-tools/scripts/smokeTestRelease.py",
 line 1420, in smokeTest
   [smoker] unpackAndVerify(java, 'lucene', tmpDir, 'lucene-%s-src.tgz' % 
version, gitRevision, version, testArgs, baseURL)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/dev-tools/scripts/smokeTestRelease.py",
 line 597, in unpackAndVerify
   [smoker] verifyUnpacked(java, project, artifact, unpackPath, 
gitRevision, version, testArgs, tmpDir, baseURL)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/dev-tools/scripts/smokeTestRelease.py",
 line 743, in verifyUnpacked
   [smoker] confirmAllReleasesAreTestedForBackCompat(version, unpackPath)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/dev-tools/scripts/smokeTestRelease.py",
 line 1358, in confirmAllReleasesAreTestedForBackCompat
   [smoker] raise RuntimeError('some releases are not tested by 
TestBackwardsCompatibility?')
   [smoker] RuntimeError: some releases are not tested by 
TestBackwardsCompatibility?

BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/build.xml:559:
 exec returned: 1

Total time: 69 minutes 38 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any





[jira] [Updated] (LUCENE-7438) UnifiedHighlighter

2016-09-19 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-7438:
-
Attachment: LUCENE_7438_UH_benchmark.patch

I developed a benchmark using Lucene's benchmark module; it's attached as a 
patch.  I made some changes to some existing classes there, and it's debatable 
whether those changes are readily committable.  The benchmark runs over 200k 
documents from the wikipedia/enwiki data set.  While poking through the data 
and running some queries through Luke, I developed a few lists of queries: 
terms, phrases, and wildcards.  There are some boolean operators in there, and 
both the phrase and wildcard query lists have occasional TermQuery clauses 
intermixed too.  I had planned to add another query list, but this takes a 
while.  Due to the differences in index data, I have two similar .alg files, 
one for full term vectors and the other for postings.  I used the postings one 
to test analysis as well, but it could have been either; the document data 
should be the same.  Since I have multiple query lists, I did a total of 6 
benchmark executions, tweaking the file.query.maker.file param each time and 
switching to the other .alg once.  In the table below, the first (search) row 
is the time it takes to search and retrieve the data to highlight, but not to 
do any highlighting -- it's the baseline.  The other numbers are over and above 
that time; in other words, I subtracted the baseline from the benchmark output 
for each highlighter mode so I could measure highlighting time alone.

I tested the standard Highlighter (SH), PostingsHighlighter (PH), 
FastVectorHighlighter (FVH), and UnifiedHighlighter (UH).  The suffix indicates 
the offset source: analysis (A), term vectors (V), postings (P), and postings 
with light term vectors (PV) -- a mode unique to the UH.  The code I wrote to 
test these tried, where possible, to configure them similarly.
||Impl||terms||phrases||wildcards||
|(search)| 1.08 | 1.22 | 1.46 |
|SH_A   |3.92   |4.53   |9.33|
|UH_A   |1.91   |1.70   |3.93|
|SH_V   |1.83   |1.59   |3.93|
|FVH_V  |0.85   |1.36   |2.40|
|UH_V   |0.80   |1.00   |1.94|
|PH_P   |0.91   |0.57   |4.02|
|UH_P   |0.61   |0.36   |4.03|
|UH_PV  |0.52   |0.35   |1.76|

I grouped the rows by offset source so you can see implementations working off 
the same offsets.  Judging from all the runs I did as I tweaked what was being 
measured, there seems to be a fairly large error margin on these numbers, maybe 
15%; I'm not sure.  Nevertheless, the numbers above seem about right after 
running them a bunch of times and tweaking the benchmark.

Conclusions:  The UH is faster in each offset mode than the others.  It is a 
*lot* faster in analysis mode than the standard Highlighter is.  In some runs 
I've also seen the FVH beat out the UH.  Note that months ago I ascertained 
that the FVH is not as sensitive to the performance of the underlying 
BreakIterator as the UH & PH are -- so "cheap" BIs like the char-separator one 
make for a UH that handily beats the FVH, while expensive BIs (like the 
JDK-provided default) make the two more competitive.
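For orientation, a minimal usage sketch of the UnifiedHighlighter under test; the constructor and {{highlight(field, query, topDocs)}} signature follow the class in the attached patch, but the index path, field name, and analyzer here are assumptions, not part of the benchmark:

{code}
import java.nio.file.Paths;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.search.uhighlight.UnifiedHighlighter;
import org.apache.lucene.store.FSDirectory;

public class UHDemo {
  public static void main(String[] args) throws Exception {
    try (DirectoryReader reader = DirectoryReader.open(FSDirectory.open(Paths.get("index")))) {
      IndexSearcher searcher = new IndexSearcher(reader);
      UnifiedHighlighter uh = new UnifiedHighlighter(searcher, new StandardAnalyzer());
      Query q = new TermQuery(new Term("body", "lucene"));
      TopDocs hits = searcher.search(q, 10);
      // One snippet per hit; the offset source (analysis, term vectors, postings,
      // or postings + light term vectors) follows how the "body" field was indexed.
      String[] snippets = uh.highlight("body", q, hits);
      for (String snippet : snippets) {
        System.out.println(snippet);
      }
    }
  }
}
{code}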

One cool observation that surprised me is the phrase-query difference between 
PH & UH.  Despite the accuracy mode of the UH (set to true for these 
benchmarks), it's still faster than the PH.  I temporarily disabled it and 
re-ran, and found that the UH _got slower_ when it treated phrases the way the 
PH does (as a bag of terms).  I believe that is because the term-position 
filtering the UH does, while it intrinsically has some cost, is cheaper than 
having the main highlighting loop see more term occurrences, which produce more 
Passages (and more invocations of the BreakIterator).  Accuracy & speed -- cool!

Of course this benchmark could be improved... it could be modified to measure 
highlighting shorter or longer text, to try the case of an optimized index with 
lots of terms in the query, or to benchmark queries with SpanMultiTermQuery in 
them, or ones mixing phrases & wildcards.  I was also going to measure memory 
allocation, but against this large matrix I changed my mind, as I've got other 
things to get to.  I had done that months ago and the results looked great.

> UnifiedHighlighter
> --
>
> Key: LUCENE-7438
> URL: https://issues.apache.org/jira/browse/LUCENE-7438
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/highlighter
>Affects Versions: 6.2
>Reporter: Timothy M. Rodriguez
>Assignee: David Smiley
> Attachments: LUCENE_7438_UH_benchmark.patch
>
>
> The UnifiedHighlighter is an evolution of the PostingsHighlighter that is 
> able to highlight using offsets in either postings, term vectors, or from 
> analysis (a TokenStream). Lucene’s existing highlighters are mostly 
> demarcated along offset source 

[jira] [Commented] (SOLR-8186) Solr start scripts -- only log to console when running in foreground

2016-09-19 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15505494#comment-15505494
 ] 

Shawn Heisey commented on SOLR-8186:


Cool, thanks.  I hadn't actually looked at the script; I was just thinking out 
loud.

> Solr start scripts -- only log to console when running in foreground
> 
>
> Key: SOLR-8186
> URL: https://issues.apache.org/jira/browse/SOLR-8186
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 5.3.1
>Reporter: Shawn Heisey
>Assignee: Jan Høydahl
>  Labels: logging
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-8186.patch
>
>
> Currently the log4j.properties file logs to the console, and the start 
> scripts capture console output to a logfile that never rotates.  This can 
> fill up the disk, and when the logfile is removed, the user might be alarmed 
> by the way their memory statistics behave -- the "cached" memory might have a 
> sudden and very large drop, making it appear to a novice that the huge 
> logfile was hogging their memory.
> The logfile created by log4j is rotated when it gets big enough, so that 
> logfile is unlikely to fill up the disk.
> I propose that we copy the current log4j.properties file to something like 
> log4j-foreground.properties, remove CONSOLE logging in the log4j.properties 
> file, and have the start script use the alternate config file when running in 
> the foreground.  This way users will see the logging output when running in 
> the foreground, but it will be absent when running normally.






[jira] [Commented] (SOLR-9536) OldBackupDirectory timestamp init bug causes NPEs from SnapShooter?

2016-09-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15505491#comment-15505491
 ] 

ASF GitHub Bot commented on SOLR-9536:
--

GitHub user hgadre opened a pull request:

https://github.com/apache/lucene-solr/pull/81

[SOLR-9536] Initialize timestamp field with Optional.empty() to avoid an NPE



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hgadre/lucene-solr SOLR-9536_fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/81.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #81


commit b3f52a7d4dd4823c2ba2e54ae75dc0f50533dcf8
Author: Hrishikesh Gadre 
Date:   2016-09-20T02:58:21Z

[SOLR-9536] Initialize timestamp field with Optional.empty() to avoid an NPE




> OldBackupDirectory timestamp init bug causes NPEs from SnapShooter?
> ---
>
> Key: SOLR-9536
> URL: https://issues.apache.org/jira/browse/SOLR-9536
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Varun Thacker
>
> On IRC, a 6.2.0 user reported getting an NPE from 
> SnapShooter.deleteOldBackups L244, with the only other frame of the 
> stacktrace being {{lambda$createSnapAsync$1}} L196 (it was a screenshot, not 
> text easily cut/paste here)
> The problematic L244 is...
> {code}
>   if (obd.getTimestamp().isPresent()) {
> {code}
> ...and I believe the root of the issue is that while {{getTimestamp()}} is 
> declared to return an {{Optional<Date>}}, there is no guarantee that the 
> {{Optional}} instance is itself non-null...
> {code}
>    private Optional<Date> timestamp;
>   public OldBackupDirectory(URI basePath, String dirName) {
> this.dirName = Preconditions.checkNotNull(dirName);
> this.basePath = Preconditions.checkNotNull(basePath);
> Matcher m = dirNamePattern.matcher(dirName);
> if (m.find()) {
>   try {
> this.timestamp = Optional.of(new 
> SimpleDateFormat(SnapShooter.DATE_FMT, Locale.ROOT).parse(m.group(1)));
>   } catch (ParseException e) {
> this.timestamp = Optional.empty();
>   }
> }
>   }
> {code}
> Although I'm not 100% certain, I believe the way the user was triggering 
> this bug was by configuring classic replication with something 
> like {{commit}} -- so that usage may be 
> necessary to trigger the exception?
> Alternatively: perhaps this exception gets logged the *first* time anyone 
> tries to use any code that involves SnapShooter -- and after that a timestamp 
> file *is* created and the problem never manifests itself again?






[GitHub] lucene-solr pull request #81: [SOLR-9536] Initialize timestamp field with Op...

2016-09-19 Thread hgadre
GitHub user hgadre opened a pull request:

https://github.com/apache/lucene-solr/pull/81

[SOLR-9536] Initialize timestamp field with Optional.empty() to avoid an NPE



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hgadre/lucene-solr SOLR-9536_fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/81.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #81


commit b3f52a7d4dd4823c2ba2e54ae75dc0f50533dcf8
Author: Hrishikesh Gadre 
Date:   2016-09-20T02:58:21Z

[SOLR-9536] Initialize timestamp field with Optional.empty() to avoid an NPE







[JENKINS] Lucene-Solr-NightlyTests-6.2 - Build # 7 - Still Unstable

2016-09-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.2/7/

5 tests failed.
FAILED:  
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testUpdateLogSynchronisation

Error Message:
Timeout waiting for CDCR replication to complete @source_collection:shard2

Stack Trace:
java.lang.RuntimeException: Timeout waiting for CDCR replication to complete 
@source_collection:shard2
at 
__randomizedtesting.SeedInfo.seed([96288BCA82A15EBA:6847D36940817DAB]:0)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.waitForReplicationToComplete(BaseCdcrDistributedZkTest.java:794)
at 
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testUpdateLogSynchronisation(CdcrReplicationDistributedZkTest.java:377)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_102) - Build # 6126 - Unstable!

2016-09-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6126/
Java: 32bit/jdk1.8.0_102 -server -XX:+UseSerialGC

4 tests failed.
FAILED:  org.apache.solr.TestDistributedSearch.test

Error Message:
Expected to find shardAddress in the up shard info: 
{error=org.apache.solr.client.solrj.SolrServerException: No live SolrServers 
available to handle this 
request,trace=org.apache.solr.client.solrj.SolrServerException: No live 
SolrServers available to handle this request  at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:392)
  at 
org.apache.solr.handler.component.HttpShardHandlerFactory.makeLoadBalancedRequest(HttpShardHandlerFactory.java:226)
  at 
org.apache.solr.handler.component.HttpShardHandler.lambda$submit$0(HttpShardHandler.java:198)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)  at 
java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:745) ,time=1}

Stack Trace:
java.lang.AssertionError: Expected to find shardAddress in the up shard info: 
{error=org.apache.solr.client.solrj.SolrServerException: No live SolrServers 
available to handle this 
request,trace=org.apache.solr.client.solrj.SolrServerException: No live 
SolrServers available to handle this request
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:392)
at 
org.apache.solr.handler.component.HttpShardHandlerFactory.makeLoadBalancedRequest(HttpShardHandlerFactory.java:226)
at 
org.apache.solr.handler.component.HttpShardHandler.lambda$submit$0(HttpShardHandler.java:198)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
,time=1}
at 
__randomizedtesting.SeedInfo.seed([90F70C4512B37D14:18A3339FBC4F10EC]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.TestDistributedSearch.comparePartialResponses(TestDistributedSearch.java:1172)
at 
org.apache.solr.TestDistributedSearch.queryPartialResults(TestDistributedSearch.java:1113)
at 
org.apache.solr.TestDistributedSearch.test(TestDistributedSearch.java:973)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsRepeatStatement.callStatement(BaseDistributedSearchTestCase.java:1011)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 

[jira] [Commented] (SOLR-9512) CloudSolrClient's cluster state cache can break direct updates to leaders

2016-09-19 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15505419#comment-15505419
 ] 

Noble Paul commented on SOLR-9512:
--

bq. How so?

You invalidate the cache if the first server did not serve the request. That's 
a problem. When the next request comes in, it gets fresh state that is exactly 
the same as the entry that was just invalidated, because the new leader has not 
been elected yet and state.json has not yet been updated in ZK.  As we 
discussed before, the cache must be invalidated when a server says the version 
is stale.
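To make the distinction concrete, here is a toy sketch of the invalidation policy being argued for: evict only on an explicit stale-version signal from a server, not on a failed request. All names are hypothetical; this is not CloudSolrClient's actual implementation:

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy collection-state cache keyed by collection name; each entry carries the
// ZK state version it was built from.
public class CollectionStateCache {
  static final class CachedState {
    final int zkVersion;
    final String leaderUrl;
    CachedState(int zkVersion, String leaderUrl) {
      this.zkVersion = zkVersion;
      this.leaderUrl = leaderUrl;
    }
  }

  private final Map<String, CachedState> cache = new ConcurrentHashMap<>();

  public CachedState get(String collection) {
    return cache.get(collection);
  }

  public void put(String collection, CachedState state) {
    cache.put(collection, state);
  }

  // Evict only when a server explicitly reports a newer state version. A mere
  // connection failure does not evict, because re-reading state.json before a
  // new leader is elected would return the same stale view anyway.
  public void onServerReportedVersion(String collection, int serverVersion) {
    CachedState cached = cache.get(collection);
    if (cached != null && serverVersion > cached.zkVersion) {
      cache.remove(collection);
    }
  }
}
{code}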

> CloudSolrClient's cluster state cache can break direct updates to leaders
> -
>
> Key: SOLR-9512
> URL: https://issues.apache.org/jira/browse/SOLR-9512
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
> Attachments: SOLR-9512.patch
>
>
> This is the root cause of SOLR-9305 and (at least some of) SOLR-9390.  The 
> process goes something like this:
> Documents are added to the cluster via a CloudSolrClient, with 
> directUpdatesToLeadersOnly set to true.  CSC caches its view of the 
> DocCollection.  The leader then goes down, and is reassigned.  Next time 
> documents are added, CSC checks its cache again, and gets the old view of the 
> DocCollection.  It then tries to send the update directly to the old, now 
> down, leader, and we get ConnectionRefused.






[jira] [Comment Edited] (SOLR-9536) OldBackupDirectory timestamp init bug causes NPEs from SnapShooter?

2016-09-19 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15505410#comment-15505410
 ] 

Hrishikesh Gadre edited comment on SOLR-9536 at 9/20/16 3:01 AM:
-

[~hossman] Yes this is correct. We initialize the timestamp field during the 
construction. I have prepared a patch and running the unit tests currently. 
Will submit the patch in next couple of hours.


was (Author: hgadre):
[~ hossman] Yes this is correct. We initialize the timestamp field during the 
construction. I have prepared a patch and running the unit tests currently. 
Will submit the patch in next couple of hours.

> OldBackupDirectory timestamp init bug causes NPEs from SnapShooter?
> ---
>
> Key: SOLR-9536
> URL: https://issues.apache.org/jira/browse/SOLR-9536
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Varun Thacker
>
> On IRC, a 6.2.0 user reported getting an NPE from 
> SnapShooter.deleteOldBackups L244, with the only other frame of the 
> stacktrace being {{lambda$createSnapAsync$1}} L196 (it was a screenshot, not 
> text easily cut/paste here)
> The problematic L244 is...
> {code}
>   if (obd.getTimestamp().isPresent()) {
> {code}
> ...and I believe the root of the issue is that while {{getTimestamp()}} is 
> declared to return an {{Optional<Date>}}, there is no guarantee that the 
> {{Optional}} instance is itself non-null...
> {code}
>    private Optional<Date> timestamp;
>   public OldBackupDirectory(URI basePath, String dirName) {
> this.dirName = Preconditions.checkNotNull(dirName);
> this.basePath = Preconditions.checkNotNull(basePath);
> Matcher m = dirNamePattern.matcher(dirName);
> if (m.find()) {
>   try {
> this.timestamp = Optional.of(new 
> SimpleDateFormat(SnapShooter.DATE_FMT, Locale.ROOT).parse(m.group(1)));
>   } catch (ParseException e) {
> this.timestamp = Optional.empty();
>   }
> }
>   }
> {code}
> Although I'm not 100% certain, I believe the way the user was triggering 
> this bug was by configuring classic replication with something 
> like {{commit}} -- so that usage may be 
> necessary to trigger the exception?
> Alternatively: perhaps this exception gets logged the *first* time anyone 
> tries to use any code that involves SnapShooter -- and after that a timestamp 
> file *is* created and the problem never manifests itself again?






[jira] [Comment Edited] (SOLR-9536) OldBackupDirectory timestamp init bug causes NPEs from SnapShooter?

2016-09-19 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15505410#comment-15505410
 ] 

Hrishikesh Gadre edited comment on SOLR-9536 at 9/20/16 3:01 AM:
-

[~hossman] Yes this is correct. We need to initialize the timestamp field 
during the construction. I have prepared a patch and running the unit tests 
currently. Will submit the patch in next couple of hours.


was (Author: hgadre):
[~hossman] Yes this is correct. We initialize the timestamp field during the 
construction. I have prepared a patch and running the unit tests currently. 
Will submit the patch in next couple of hours.

> OldBackupDirectory timestamp init bug causes NPEs from SnapShooter?
> ---
>
> Key: SOLR-9536
> URL: https://issues.apache.org/jira/browse/SOLR-9536
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Varun Thacker
>
> On IRC, a 6.2.0 user reported getting an NPE from 
> SnapShooter.deleteOldBackups L244, with the only other frame of the 
> stacktrace being {{lambda$createSnapAsync$1}} L196 (it was a screenshot, not 
> text easily cut/paste here)
> The problematic L244 is...
> {code}
>   if (obd.getTimestamp().isPresent()) {
> {code}
> ...and I believe the root of the issue is that while {{getTimestamp()}} is 
> declared to return an {{Optional<Date>}}, there is no guarantee that the 
> {{Optional}} instance is itself non-null...
> {code}
>    private Optional<Date> timestamp;
>   public OldBackupDirectory(URI basePath, String dirName) {
> this.dirName = Preconditions.checkNotNull(dirName);
> this.basePath = Preconditions.checkNotNull(basePath);
> Matcher m = dirNamePattern.matcher(dirName);
> if (m.find()) {
>   try {
> this.timestamp = Optional.of(new 
> SimpleDateFormat(SnapShooter.DATE_FMT, Locale.ROOT).parse(m.group(1)));
>   } catch (ParseException e) {
> this.timestamp = Optional.empty();
>   }
> }
>   }
> {code}
> Although I'm not 100% certain, I believe the way the user was triggering 
> this bug was by configuring classic replication with something 
> like {{commit}} -- so that usage may be 
> necessary to trigger the exception?
> Alternatively: perhaps this exception gets logged the *first* time anyone 
> tries to use any code that involves SnapShooter -- and after that a timestamp 
> file *is* created and the problem never manifests itself again?






[jira] [Comment Edited] (SOLR-9536) OldBackupDirectory timestamp init bug causes NPEs from SnapShooter?

2016-09-19 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15505410#comment-15505410
 ] 

Hrishikesh Gadre edited comment on SOLR-9536 at 9/20/16 3:01 AM:
-

[~ hossman] Yes this is correct. We initialize the timestamp field during the 
construction. I have prepared a patch and running the unit tests currently. 
Will submit the patch in next couple of hours.


was (Author: hgadre):
[~hoss...@fucit.org] Yes this is correct. We initialize the timestamp field 
during the construction. I have prepared a patch and running the unit tests 
currently. Will submit the patch in next couple of hours.

> OldBackupDirectory timestamp init bug causes NPEs from SnapShooter?
> ---
>
> Key: SOLR-9536
> URL: https://issues.apache.org/jira/browse/SOLR-9536
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Varun Thacker
>
> On IRC, a 6.2.0 user reported getting an NPE from 
> SnapShooter.deleteOldBackups L244, with the only other frame of the 
> stacktrace being {{lambda$createSnapAsync$1}} L196 (it was a screenshot, not 
> text easily cut/paste here)
> The problematic L244 is...
> {code}
>   if (obd.getTimestamp().isPresent()) {
> {code}
> ...and I believe the root of the issue is that while {{getTimestamp()}} is 
> declared to return an {{Optional<Date>}}, there is no guarantee that the 
> {{Optional}} instance is itself non-null...
> {code}
>    private Optional<Date> timestamp;
>   public OldBackupDirectory(URI basePath, String dirName) {
> this.dirName = Preconditions.checkNotNull(dirName);
> this.basePath = Preconditions.checkNotNull(basePath);
> Matcher m = dirNamePattern.matcher(dirName);
> if (m.find()) {
>   try {
> this.timestamp = Optional.of(new 
> SimpleDateFormat(SnapShooter.DATE_FMT, Locale.ROOT).parse(m.group(1)));
>   } catch (ParseException e) {
> this.timestamp = Optional.empty();
>   }
> }
>   }
> {code}
> Although I'm not 100% certain, I believe the way the user was triggering 
> this bug was by configuring classic replication with something 
> like {{commit}} -- so that usage may be 
> necessary to trigger the exception?
> Alternatively: perhaps this exception gets logged the *first* time anyone 
> tries to use any code that involves SnapShooter -- and after that a timestamp 
> file *is* created and the problem never manifests itself again?






[jira] [Commented] (SOLR-9536) OldBackupDirectory timestamp init bug causes NPEs from SnapShooter?

2016-09-19 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15505410#comment-15505410
 ] 

Hrishikesh Gadre commented on SOLR-9536:


[~hoss...@fucit.org] Yes this is correct. We initialize the timestamp field 
during the construction. I have prepared a patch and running the unit tests 
currently. Will submit the patch in next couple of hours.
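A minimal sketch of the kind of fix being described (default the field to {{Optional.empty()}} so {{getTimestamp()}} can never return null, even when the directory name does not match). The class below is an illustrative stand-in, not the actual OldBackupDirectory patch; the pattern string and date format are made up for the example:

{code}
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.Optional;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class BackupDirName {
  // Hypothetical equivalents of dirNamePattern and SnapShooter.DATE_FMT.
  private static final Pattern DIR_NAME_PATTERN = Pattern.compile("snapshot\\.(.*)");
  private static final String DATE_FMT = "yyyyMMddHHmmssSSS";

  // Initialized eagerly, so the Optional itself is never null.
  private Optional<Date> timestamp = Optional.empty();

  public BackupDirName(String dirName) {
    Matcher m = DIR_NAME_PATTERN.matcher(dirName);
    if (m.find()) {
      try {
        this.timestamp = Optional.of(new SimpleDateFormat(DATE_FMT, Locale.ROOT).parse(m.group(1)));
      } catch (ParseException e) {
        this.timestamp = Optional.empty();
      }
    }
  }

  public Optional<Date> getTimestamp() {
    return timestamp;  // safe to call isPresent() on, whether or not the name matched
  }
}
{code}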

> OldBackupDirectory timestamp init bug causes NPEs from SnapShooter?
> ---
>
> Key: SOLR-9536
> URL: https://issues.apache.org/jira/browse/SOLR-9536
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Varun Thacker
>
> On IRC, a 6.2.0 user reported getting an NPE from 
> SnapShooter.deleteOldBackups L244, with the only other frame of the 
> stacktrace being {{lambda$createSnapAsync$1}} L196 (it was a screenshot, not 
> text easily cut/paste here)
> The problematic L244 is...
> {code}
>   if (obd.getTimestamp().isPresent()) {
> {code}
> ...and I believe the root of the issue is that while {{getTimestamp()}} is 
> declared to return an {{Optional<Date>}}, there is no guarantee that the 
> {{Optional}} instance is itself non-null...
> {code}
>    private Optional<Date> timestamp;
>   public OldBackupDirectory(URI basePath, String dirName) {
> this.dirName = Preconditions.checkNotNull(dirName);
> this.basePath = Preconditions.checkNotNull(basePath);
> Matcher m = dirNamePattern.matcher(dirName);
> if (m.find()) {
>   try {
> this.timestamp = Optional.of(new 
> SimpleDateFormat(SnapShooter.DATE_FMT, Locale.ROOT).parse(m.group(1)));
>   } catch (ParseException e) {
> this.timestamp = Optional.empty();
>   }
> }
>   }
> {code}
> Although I'm not 100% certain, I believe the way the user was triggering 
> this bug was by configuring classic replication with something 
> like {{commit}} -- so that usage may be 
> necessary to trigger the exception?
> Alternatively: perhaps this exception gets logged the *first* time anyone 
> tries to use any code that involves SnapShooter -- and after that a timestamp 
> file *is* created and the problem never manifests itself again?






[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_102) - Build # 1756 - Unstable!

2016-09-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1756/
Java: 64bit/jdk1.8.0_102 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Doc with id=2 not found in 
http://127.0.0.1:37590/_vb/c8n_1x2_leader_session_loss due to: Path not found: 
/id; rsp={doc=null}

Stack Trace:
java.lang.AssertionError: Doc with id=2 not found in 
http://127.0.0.1:37590/_vb/c8n_1x2_leader_session_loss due to: Path not found: 
/id; rsp={doc=null}
at 
__randomizedtesting.SeedInfo.seed([4B64AE1EC6C8F995:C33091C46834946D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.HttpPartitionTest.assertDocExists(HttpPartitionTest.java:603)
at 
org.apache.solr.cloud.HttpPartitionTest.assertDocsExistInAllReplicas(HttpPartitionTest.java:558)
at 
org.apache.solr.cloud.HttpPartitionTest.testLeaderZkSessionLoss(HttpPartitionTest.java:506)
at 
org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:119)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[jira] [Created] (SOLR-9537) Scoring facets with scoreNodes expression

2016-09-19 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-9537:


 Summary: Scoring facets with scoreNodes expression
 Key: SOLR-9537
 URL: https://issues.apache.org/jira/browse/SOLR-9537
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Joel Bernstein


SOLR-9193 introduced the scoreNodes expression to find the most interesting 
relationships in a distributed graph.

With a small adjustment, scoreNodes can easily be made to wrap the facet() 
expression to find the most interesting facets.






[jira] [Commented] (SOLR-9528) Make _docid_ (lucene id) a pseudo field

2016-09-19 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15505254#comment-15505254
 ] 

Yonik Seeley commented on SOLR-9528:


docid() as a value source is a good idea... as you say, it makes it clearer 
that it's a computed value.
That works for many contexts, except perhaps Alex's original use case: fetch me 
the document that corresponds to a specific \_docid\_.
One could use something like frange(l=123456789,u=123456789)docid(), but that's 
pretty clunky.
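For concreteness, this is roughly what that clunky-but-workable option would look like from SolrJ. The {{!frange}} syntax is standard, but {{docid()}} here is the *proposed* value source, not one that exists today, and the core URL and docid value are made up:

{code}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class DocidLookup {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr/collection1").build()) {
      SolrQuery q = new SolrQuery("*:*");
      // Restrict to the single document whose (transient) Lucene docid is 38200;
      // docid() is hypothetical until something like this issue implements it.
      q.addFilterQuery("{!frange l=38200 u=38200}docid()");
      q.setRows(1);
      QueryResponse rsp = client.query(q);
      System.out.println(rsp.getResults());
    }
  }
}
{code}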

> Make _docid_ (lucene id) a pseudo field
> ---
>
> Key: SOLR-9528
> URL: https://issues.apache.org/jira/browse/SOLR-9528
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (7.0)
>Reporter: Alexandre Rafalovitch
>Priority: Minor
>
> Lucene document id is a transitory id that cannot be relied on as it can 
> change on document updates, etc.
> However, there are circumstances where it could be useful to use it in a 
> search. The primary use is debugging, where some error messages provide 
> only the Lucene document id as the reference. For example:
> {noformat}
> child query must only match non-parent docs, but parent docID=38200 matched 
> childScorer=class org.apache.lucene.search.DisjunctionSumScorer
> {noformat}
> We already expose the Lucene id with the \[docid] transformer and with 
> \_docid\_ sorting.
> On the email list, [~yo...@apache.org] proposed that _docid_ should be a 
> legitimate pseudo-field, which would make it returnable, usable in function 
> queries, etc.






[jira] [Updated] (SOLR-9258) Optimizing, storing and deploying AI models with Streaming Expressions

2016-09-19 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9258:
-
Attachment: SOLR-9258.patch

New patch which adds the ModelCache and ModelStream and breaks out the 
ClassifyStream into its own class. The test case has been adjusted slightly to 
accommodate the new classes.

The core algorithms, though, are the same as in the original patch.

I haven't actually run this code yet, so this is just for review.




> Optimizing, storing and deploying AI models with Streaming Expressions
> --
>
> Key: SOLR-9258
> URL: https://issues.apache.org/jira/browse/SOLR-9258
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.2
>
> Attachments: ModelCache.java, ModelCache.java, SOLR-9258.patch, 
> SOLR-9258.patch
>
>
> This ticket describes a framework for *optimizing*, *storing* and *deploying* 
> AI models within the Streaming Expression framework.
> *Optimizing*
> [~caomanhdat] has contributed SOLR-9252, which provides *Streaming 
> Expressions* for both feature selection and optimization of a logistic 
> regression text classifier. SOLR-9252 also provides a great working example 
> of *optimization* of a machine learning model using an in-place parallel 
> iterative algorithm.
> *Storing*
> Both features and optimized models can be stored in SolrCloud collections 
> using the update expression. Using [~caomanhdat]'s example in SOLR-9252, the 
> pseudo code for storing features would be:
> {code}
> update(featuresCollection, 
>featuresSelection(collection1, 
> id="myFeatures", 
> q="*:*",  
> field="tv_text", 
> outcome="out_i", 
> positiveLabel=1, 
> numTerms=100))
> {code}  
> The id field can be added to the featureSelection expression so that features 
> can be later retrieved from the collection it's stored in.
> *Deploying*
> With the introduction of the topic() expression, SolrCloud can be treated as 
> a distributed message queue. This messaging capability can  be used to deploy 
> models and process data through the models.
> To implement this approach a classify() function can be created that uses a 
> topic() function to return both the model and the data to be classified:
> The pseudo code looks like this:
> {code}
> classify(topic(models, q="modelID", fl="features, weights"),
>  topic(emails, q="*:*", fl="id, body", rows="500", version="3232323"))
> {code}
> In the example above the classify() function uses the topic() function to 
> retrieve the model. Each time there is an update to the model in the index, 
> the topic() expression will automatically read the new model.
> The topic() function is also used to pull in the data set that is being 
> classified. Notice the *version* parameter. This will be added to the topic 
> function to support pulling results from a specific version number (jira 
> ticket to follow).
> With this approach both the model and the data to process through the model 
> are treated as messages in a message queue.
> The daemon function can be used to send the classify function to Solr where 
> it will be run in the background. The pseudo code looks like this:
> {code}
> daemon(...,
>  update(classifiedEmails, 
>  classify(topic(models, q="modelID", fl="features, weights"),
>   topic(emails, q="*:*", fl="id, fl, body", 
> rows="500", version="3232323"
> {code}
> In this scenario the daemon will run the classify function repeatedly in the 
> background. With each run the topic() functions will re-pull the model if the 
> model has been updated. It will also pull a new set of emails to be 
> classified. The classified emails can be stored in another SolrCloud 
> collection using the update() function.
> Using this approach, emails can be classified in batches. The daemon can 
> continue to run even after all the emails have been classified. New 
> emails added to the emails collections will then be automatically classified 
> when they enter the index.
> Classification can be done in parallel once SOLR-9240 is completed. This will 
> allow topic() results to be partitioned across worker nodes so they can be 
> processed in parallel. The pseudo code for this is:
> {code}
> parallel(workerCollection, worker="20", ...,
>  daemon(...,
>update(classifiedEmails, 
>classify(topic(models, q="modelID", fl="features, 
> weights", partitionKeys="none"),
> 

[jira] [Commented] (SOLR-9528) Make _docid_ (lucene id) a pseudo field

2016-09-19 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15505214#comment-15505214
 ] 

Hoss Man commented on SOLR-9528:


bq. So I suppose this means a Won't-Fix for this issue, ...

Assuming my vague guess at what's being suggested here is accurate, then yeah 
-- that would be my vote.  But I'm still not certain I actually understand the 
objective.

If my guess _was_ correct, then we could also just change the title of this 
jira and use it to track creating a patch that adds {{docid()}} as a 
ValueSource, and only once it exists update the ref guide to suggest it in any 
place where {{\_docid\_}} is currently suggested.
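For illustration only, a rough sketch of what such a {{docid()}} ValueSource plugin might look like; none of this is an attached patch, the class names are hypothetical, and registration would be along the lines of a valueSourceParser entry in solrconfig.xml:

{code}
import java.io.IOException;
import java.util.Map;

import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.queries.function.FunctionValues;
import org.apache.lucene.queries.function.ValueSource;
import org.apache.lucene.queries.function.docvalues.IntDocValues;
import org.apache.solr.search.FunctionQParser;
import org.apache.solr.search.SyntaxError;
import org.apache.solr.search.ValueSourceParser;

// Hypothetical parser so that docid() can be used wherever functions are accepted.
public class DocidValueSourceParser extends ValueSourceParser {

  @Override
  public ValueSource parse(FunctionQParser fp) throws SyntaxError {
    return new DocidValueSource();
  }

  private static final class DocidValueSource extends ValueSource {
    @Override
    public FunctionValues getValues(Map context, LeafReaderContext readerContext) throws IOException {
      final int docBase = readerContext.docBase;
      return new IntDocValues(this) {
        @Override
        public int intVal(int doc) {
          return docBase + doc;  // segment-local doc plus base = global Lucene docid
        }
      };
    }

    @Override
    public boolean equals(Object o) {
      return o instanceof DocidValueSource;
    }

    @Override
    public int hashCode() {
      return DocidValueSource.class.hashCode();
    }

    @Override
    public String description() {
      return "docid()";
    }
  }
}
{code}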

> Make _docid_ (lucene id) a pseudo field
> ---
>
> Key: SOLR-9528
> URL: https://issues.apache.org/jira/browse/SOLR-9528
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (7.0)
>Reporter: Alexandre Rafalovitch
>Priority: Minor
>
> Lucene document id is a transitory id that cannot be relied on as it can 
> change on document updates, etc.
> However, there are circumstances where it could be useful to use it in a 
> search. The primary use is debugging, where some error messages provide 
> only the Lucene document id as the reference. For example:
> {noformat}
> child query must only match non-parent docs, but parent docID=38200 matched 
> childScorer=class org.apache.lucene.search.DisjunctionSumScorer
> {noformat}
> We already expose the Lucene id with the \[docid] transformer and with 
> \_docid\_ sorting.
> On the email list, [~yo...@apache.org] proposed that _docid_ should be a 
> legitimate pseudo-field, which would make it returnable, usable in function 
> queries, etc.






[jira] [Commented] (SOLR-9528) Make _docid_ (lucene id) a pseudo field

2016-09-19 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15505203#comment-15505203
 ] 

David Smiley commented on SOLR-9528:


bq. Special syntax like _docid_ in the sort param made sense in the early days 
of Solr, but feels hackish now that we have first order functions (which are 
clearly a "computed" value, with no ambiguity that it might be stored)

+1 to what you say Hoss. We've got ValueSourceParsers & DocumentTransformers 
now for this sorta thing.

So I suppose this means a Won't-Fix for this issue, possibly some other new 
issues, and possibly removal of "\_docid\_" from the Ref Guide (being 
deprecated, yet it still works in some situations; I know it doesn't always work).

> Make _docid_ (lucene id) a pseudo field
> ---
>
> Key: SOLR-9528
> URL: https://issues.apache.org/jira/browse/SOLR-9528
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (7.0)
>Reporter: Alexandre Rafalovitch
>Priority: Minor
>
> Lucene document id is a transitory id that cannot be relied on as it can 
> change on document updates, etc.
> However, there are circumstances where it could be useful to use it in a 
> search. The primary use is debugging, where some error messages provide 
> only the Lucene document id as a reference. For example:
> {noformat}
> child query must only match non-parent docs, but parent docID=38200 matched 
> childScorer=class org.apache.lucene.search.DisjunctionSumScorer
> {noformat}
> We already expose the Lucene id with the \[docid] transformer and with \_docid_ 
> sorting.
> On the email list, [~yo...@apache.org] proposed that _docid_ should be a 
> legitimate pseudo-field, which would make it returnable, usable in function 
> queries, etc.






[jira] [Commented] (SOLR-9528) Make _docid_ (lucene id) a pseudo field

2016-09-19 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15505121#comment-15505121
 ] 

Hoss Man commented on SOLR-9528:


I don't understand, practically/actionably, what this sentence means...

{noformat}
...proposed that _docid_ should be a legitimate pseudo-field, which would 
make it returnable, usable in function queries, etc.
{noformat}

* How is (anyone) defining "legitimate pseudo-field" in this context?
* There's not enough context to understand what is implied by the "etc." in 
this sentence -- what are some concrete examples of what users would be able to 
do in the future that they can't do now?
** alternatively: what are some examples of existing vs new _syntax_ that is 
being proposed (either in configs or requests) for functionality that is 
already supported?



If the crux of this idea here is simply that the string {{\_docid\_}} should be 
usable anywhere that a fieldname can be used even when it's not defined in the 
schema, then that seems like a particularly bad/inconsistent idea to me since 
all of the other magic {{\_underscore\_}} fields that exist in Solr *are* 
defined in the schema, and it's actually important how/if they are stored, 
docValues, etc...

I've never been a huge fan of *any* magic field names in Solr, and I 
*personally* would be confused as hell if we started encouraging users to 
use magic field names that look like real field names but don't actually exist 
-- especially because I would never be sure, when a user is asking a question, if 
they actually added {{\_docid\_}} to their schema -- a situation I have 
actually encountered in real life and was then *VERY* confused by the described 
behavior of {{sort=\_docid\_ asc}}.  

My straw man proposal would be to (informally/formally) deprecate using 
{{\_docid\_}} in the sort param, and instead offer a {{docid()}} (or 
{{docnum()}}, whatever folks prefer) ValueSourceParser out of the box, that 
people could pass to other functions (for the purpose of filtering, sorting, 
whatever...), or request in the response via {{fl}} etc...
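
To make that concrete, here's a rough, untested sketch (class name and wiring are 
hypothetical, not an actual patch) of what such an out-of-the-box parser could look 
like; it would still need to be hooked up as a {{valueSourceParser}}:

{code}
import java.io.IOException;
import java.util.Map;

import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.queries.function.FunctionValues;
import org.apache.lucene.queries.function.ValueSource;
import org.apache.lucene.queries.function.docvalues.IntDocValues;
import org.apache.solr.search.FunctionQParser;
import org.apache.solr.search.SyntaxError;
import org.apache.solr.search.ValueSourceParser;

// Hypothetical sketch only -- not an actual patch.  A docid() function that
// exposes the (transient!) global Lucene docid as a computed value.
public class DocIdValueSourceParser extends ValueSourceParser {
  @Override
  public ValueSource parse(FunctionQParser fp) throws SyntaxError {
    return new ValueSource() {
      @Override
      public FunctionValues getValues(Map context, LeafReaderContext readerContext) throws IOException {
        final int docBase = readerContext.docBase;    // offset of this segment
        return new IntDocValues(this) {
          @Override
          public int intVal(int doc) {
            return docBase + doc;                     // segment-local id -> global id
          }
        };
      }
      @Override public boolean equals(Object o) { return o == this; }
      @Override public int hashCode() { return System.identityHashCode(this); }
      @Override public String description() { return "docid()"; }
    };
  }
}
{code}

Something along those lines would let people use {{docid()}} in {{sort}}, in {{fl}}, 
or nested inside other functions, with no magic field name involved.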

Special syntax like {{\_docid\_}} in the sort param made sense in the early 
days of Solr, but feels hackish now that we have first order functions (which 
are clearly a "computed" value, with no ambiguity that it might be stored).

(for that matter, I would argue we should do the same thing with "{{score}}" => 
{{score()}}, and add a {{random(seed)}} to replace the way users currently have 
to configure solr.RandomField ... but I'll save those fights for different 
jiras)

> Make _docid_ (lucene id) a pseudo field
> ---
>
> Key: SOLR-9528
> URL: https://issues.apache.org/jira/browse/SOLR-9528
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (7.0)
>Reporter: Alexandre Rafalovitch
>Priority: Minor
>
> Lucene document id is a transitory id that cannot be relied on as it can 
> change on document updates, etc.
> However, there are circumstances where it could be useful to use it in a 
> search. The primary use is debugging, where some error messages provide 
> only the Lucene document id as a reference. For example:
> {noformat}
> child query must only match non-parent docs, but parent docID=38200 matched 
> childScorer=class org.apache.lucene.search.DisjunctionSumScorer
> {noformat}
> We already expose the Lucene id with the \[docid] transformer and with \_docid_ 
> sorting.
> On the email list, [~yo...@apache.org] proposed that _docid_ should be a 
> legitimate pseudo-field, which would make it returnable, usable in function 
> queries, etc.






[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 854 - Still unstable!

2016-09-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/854/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenCancelFail

Error Message:
expected:<200> but was:<404>

Stack Trace:
java.lang.AssertionError: expected:<200> but was:<404>
at 
__randomizedtesting.SeedInfo.seed([63B9C945E36535DD:B06FC6F33FF2731]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.cancelDelegationToken(TestSolrCloudWithDelegationTokens.java:137)
at 
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenCancelFail(TestSolrCloudWithDelegationTokens.java:282)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-9534) Support quiet/verbose bin/solr options for changing log level

2016-09-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15505053#comment-15505053
 ] 

Jan Høydahl commented on SOLR-9534:
---

The {{-q}} flag logs with WARN level. With this patch, it will produce this for 
{{bin/solr start -f -q}}

{noformat}
Starting Solr on port 8983 from /Users/janhoy/git/lucene-solr/solr/server

0INFO  (main) [   ] o.e.j.s.Server jetty-9.3.8.v20160314
198  WARN  (main) [   ] o.e.j.s.SecurityHandler 
ServletContext@o.e.j.w.WebAppContext@6536e911{/solr,file:///Users/janhoy/git/lucene-solr/solr/server/solr-webapp/webapp/,STARTING}{/Users/janhoy/git/lucene-solr/solr/server/solr-webapp/webapp}
 has uncovered http methods for path: /
205  INFO  (main) [   ] o.a.s.s.SolrDispatchFilter Log level override, property 
solr.log.level=WARN
298  WARN  (main) [   ] o.a.s.c.CoreContainer Couldn't add files from 
/Users/janhoy/git/lucene-solr/solr/server/solr/lib to classpath: 
/Users/janhoy/git/lucene-solr/solr/server/solr/lib
553  INFO  (main) [   ] o.e.j.s.h.ContextHandler Started 
o.e.j.w.WebAppContext@6536e911{/solr,file:///Users/janhoy/git/lucene-solr/solr/server/solr-webapp/webapp/,AVAILABLE}{/Users/janhoy/git/lucene-solr/solr/server/solr-webapp/webapp}
566  INFO  (main) [   ] o.e.j.s.ServerConnector Started 
ServerConnector@57250572{HTTP/1.1,[http/1.1]}{0.0.0.0:8983}
567  INFO  (main) [   ] o.e.j.s.Server Started @986ms
{noformat}

PS: The two WARN log lines are removed over in SOLR-8186, which also adds the 
date to the console log output.

A problem with just switching to WARN is that we pretty much mute everything 
Solr has to say :) The easiest solution is to explicitly set the log level to INFO 
for a few selected Solr classes which we still want to hear from. Or, long term, 
implement the ideas from SOLR-4132.

> Support quiet/verbose bin/solr options for changing log level
> -
>
> Key: SOLR-9534
> URL: https://issues.apache.org/jira/browse/SOLR-9534
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 6.2
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: logging
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9534.patch
>
>
> Spinoff from SOLR-6677
> Let's make it much easier to "turn on debug" by supporting a {{bin/solr start 
> -V}} verbose option, and likewise a {{bin/solr start -q}} for quiet operation.
> These would simply be convenience options for changing the RootLogger from 
> level INFO to DEBUG or WARN respectively. This can be done programmatically 
> in log4j at startup. 
> Could be we need to add some more package specific defaults in 
> log4j.properties to get the right mix of logs






[jira] [Updated] (SOLR-9534) Support quiet/verbose bin/solr options for changing log level

2016-09-19 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-9534:
--
Labels: logging  (was: )

> Support quiet/verbose bin/solr options for changing log level
> -
>
> Key: SOLR-9534
> URL: https://issues.apache.org/jira/browse/SOLR-9534
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 6.2
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: logging
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9534.patch
>
>
> Spinoff from SOLR-6677
> Let's make it much easier to "turn on debug" by supporting a {{bin/solr start 
> -V}} verbose option, and likewise a {{bin/solr start -q}} for quiet operation.
> These would simply be convenience options for changing the RootLogger from 
> level INFO to DEBUG or WARN respectively. This can be done programmatically 
> in log4j at startup. 
> Could be we need to add some more package specific defaults in 
> log4j.properties to get the right mix of logs






[jira] [Updated] (SOLR-9534) Support quiet/verbose bin/solr options for changing log level

2016-09-19 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-9534:
--
Attachment: SOLR-9534.patch

Attaching patch:

* We support the new environment variable {{SOLR_LOG_LEVEL}}, which may be set 
either in {{solr.in.cmd|sh}} or in the shell. If the start script finds a 
value for {{SOLR_LOG_LEVEL}} it will pass it on to Solr as option 
{{-Dsolr.log.level=$SOLR_LOG_LEVEL}}
* In {{SolrDispatchFilter.init()}} we check the option and change the 
rootLogger level programmatically (see the sketch below)
* Adds two new arguments to {{bin/solr\[.cmd\]}}: {{-v}} and {{-q}}, which set 
{{SOLR_LOG_LEVEL}} to {{DEBUG}} and {{WARN}} respectively. These will override 
whatever is set in the environment.
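
For illustration, the rootLogger override can be as small as something like this 
(a hand-written sketch against log4j 1.2, not the actual patch code; the class 
name is made up):

{code}
import org.apache.log4j.Level;
import org.apache.log4j.LogManager;

// Sketch only: apply -Dsolr.log.level to the log4j 1.2 root logger.
// An unrecognized value falls back to INFO, matching the behaviour noted below.
public class LogLevelOverride {
  public static void apply() {
    String level = System.getProperty("solr.log.level");
    if (level != null && !level.isEmpty()) {
      LogManager.getRootLogger().setLevel(Level.toLevel(level, Level.INFO));
    }
  }
}
{code}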

So now you don't need to understand Log4J or dig to find the correct log config 
to make Solr verbose or quiet.

Only the Linux script is tested. I would be grateful if someone would take {{solr.cmd}} 
for a ride. I do not validate the env. var, but a single-word invalid level will 
simply fall back to the default, which is INFO.


> Support quiet/verbose bin/solr options for changing log level
> -
>
> Key: SOLR-9534
> URL: https://issues.apache.org/jira/browse/SOLR-9534
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 6.2
>Reporter: Jan Høydahl
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9534.patch
>
>
> Spinoff from SOLR-6677
> Let's make it much easier to "turn on debug" by supporting a {{bin/solr start 
> -V}} verbose option, and likewise a {{bin/solr start -q}} for quiet operation.
> These would simply be convenience options for changing the RootLogger from 
> level INFO to DEBUG or WARN respectively. This can be done programmatically 
> in log4j at startup. 
> Could be we need to add some more package specific defaults in 
> log4j.properties to get the right mix of logs






[jira] [Updated] (SOLR-9534) Support quiet/verbose bin/solr options for changing log level

2016-09-19 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-9534:
--
Fix Version/s: master (7.0)
   6.3

> Support quiet/verbose bin/solr options for changing log level
> -
>
> Key: SOLR-9534
> URL: https://issues.apache.org/jira/browse/SOLR-9534
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 6.2
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9534.patch
>
>
> Spinoff from SOLR-6677
> Let's make it much easier to "turn on debug" by supporting a {{bin/solr start 
> -V}} verbose option, and likewise a {{bin/solr start -q}} for quiet operation.
> These would simply be convenience options for changing the RootLogger from 
> level INFO to DEBUG or WARN respectively. This can be done programmatically 
> in log4j at startup. 
> Could be we need to add some more package specific defaults in 
> log4j.properties to get the right mix of logs






[jira] [Assigned] (SOLR-9534) Support quiet/verbose bin/solr options for changing log level

2016-09-19 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl reassigned SOLR-9534:
-

Assignee: Jan Høydahl

> Support quiet/verbose bin/solr options for changing log level
> -
>
> Key: SOLR-9534
> URL: https://issues.apache.org/jira/browse/SOLR-9534
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 6.2
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9534.patch
>
>
> Spinoff from SOLR-6677
> Let's make it much easier to "turn on debug" by supporting a {{bin/solr start 
> -V}} verbose option, and likewise a {{bin/solr start -q}} for quiet operation.
> These would simply be convenience options for changing the RootLogger from 
> level INFO to DEBUG or WARN respectively. This can be done programmatically 
> in log4j at startup. 
> Could be we need to add some more package specific defaults in 
> log4j.properties to get the right mix of logs






[jira] [Commented] (SOLR-8186) Solr start scripts -- only log to console when running in foreground

2016-09-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-8186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504976#comment-15504976
 ] 

Jan Høydahl commented on SOLR-8186:
---

When in foreground mode, the CONSOLE logging goes to the console - 
{{logs/solr-8983-console.log}} is NOT written. Try it :)

> Solr start scripts -- only log to console when running in foreground
> 
>
> Key: SOLR-8186
> URL: https://issues.apache.org/jira/browse/SOLR-8186
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 5.3.1
>Reporter: Shawn Heisey
>Assignee: Jan Høydahl
>  Labels: logging
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-8186.patch
>
>
> Currently the log4j.properties file logs to the console, and the start 
> scripts capture console output to a logfile that never rotates.  This can 
> fill up the disk, and when the logfile is removed, the user might be alarmed 
> by the way their memory statistics behave -- the "cached" memory might have a 
> sudden and very large drop, making it appear to a novice that the huge 
> logfile was hogging their memory.
> The logfile created by log4j is rotated when it gets big enough, so that 
> logfile is unlikely to fill up the disk.
> I propose that we copy the current log4j.properties file to something like 
> log4j-foreground.properties, remove CONSOLE logging in the log4j.properties 
> file, and have the start script use the alternate config file when running in 
> the foreground.  This way users will see the logging output when running in 
> the foreground, but it will be absent when running normally.






[jira] [Comment Edited] (SOLR-8186) Solr start scripts -- only log to console when running in foreground

2016-09-19 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504965#comment-15504965
 ] 

Shawn Heisey edited comment on SOLR-8186 at 9/19/16 11:06 PM:
--

I like it!  The console logfile is a persistent thorn when disk space is 
limited.

I understand the desire to log to the actual console in foreground mode, but 
I'm not sure that we want to copy that output to a file in foreground mode, 
especially if we are still creating solr.log.  I think logging to solr.log even 
in foreground mode is a good idea.


was (Author: elyograg):
I like it!  The console logfile is a persistent thorn when disk space is 
limited.

I understand the desire to log to the actual console in foreground mode, but 
I'm not sure that we want to copy that output to a file, especially if we are 
still creating solr.log.  I think logging to solr.log even in foreground mode 
is a good idea.

> Solr start scripts -- only log to console when running in foreground
> 
>
> Key: SOLR-8186
> URL: https://issues.apache.org/jira/browse/SOLR-8186
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 5.3.1
>Reporter: Shawn Heisey
>Assignee: Jan Høydahl
>  Labels: logging
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-8186.patch
>
>
> Currently the log4j.properties file logs to the console, and the start 
> scripts capture console output to a logfile that never rotates.  This can 
> fill up the disk, and when the logfile is removed, the user might be alarmed 
> by the way their memory statistics behave -- the "cached" memory might have a 
> sudden and very large drop, making it appear to a novice that the huge 
> logfile was hogging their memory.
> The logfile created by log4j is rotated when it gets big enough, so that 
> logfile is unlikely to fill up the disk.
> I propose that we copy the current log4j.properties file to something like 
> log4j-foreground.properties, remove CONSOLE logging in the log4j.properties 
> file, and have the start script use the alternate config file when running in 
> the foreground.  This way users will see the logging output when running in 
> the foreground, but it will be absent when running normally.






[jira] [Commented] (SOLR-8186) Solr start scripts -- only log to console when running in foreground

2016-09-19 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504965#comment-15504965
 ] 

Shawn Heisey commented on SOLR-8186:


I like it!  The console logfile is a persistent thorn when disk space is 
limited.

I understand the desire to log to the actual console in foreground mode, but 
I'm not sure that we want to copy that output to a file, especially if we are 
still creating solr.log.  I think logging to solr.log even in foreground mode 
is a good idea.

> Solr start scripts -- only log to console when running in foreground
> 
>
> Key: SOLR-8186
> URL: https://issues.apache.org/jira/browse/SOLR-8186
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 5.3.1
>Reporter: Shawn Heisey
>Assignee: Jan Høydahl
>  Labels: logging
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-8186.patch
>
>
> Currently the log4j.properties file logs to the console, and the start 
> scripts capture console output to a logfile that never rotates.  This can 
> fill up the disk, and when the logfile is removed, the user might be alarmed 
> by the way their memory statistics behave -- the "cached" memory might have a 
> sudden and very large drop, making it appear to a novice that the huge 
> logfile was hogging their memory.
> The logfile created by log4j is rotated when it gets big enough, so that 
> logfile is unlikely to fill up the disk.
> I propose that we copy the current log4j.properties file to something like 
> log4j-foreground.properties, remove CONSOLE logging in the log4j.properties 
> file, and have the start script use the alternate config file when running in 
> the foreground.  This way users will see the logging output when running in 
> the foreground, but it will be absent when running normally.






[jira] [Commented] (SOLR-8186) Solr start scripts -- only log to console when running in foreground

2016-09-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-8186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504941#comment-15504941
 ] 

Jan Høydahl commented on SOLR-8186:
---

The only output in {{solr-8983-console.log}} when starting in background is now 
these two lines:
{noformat}
2016-09-19 22:40:45.836 INFO  (main) [   ] o.e.j.s.Server jetty-9.3.8.v20160314
2016-09-19 22:40:46.120 INFO  (main) [   ] o.a.s.s.SolrDispatchFilter Property 
solr.log.muteconsole=true. Muting log appender named "CONSOLE".
{noformat}

Please give it a spin and report your findings. I have not yet tested the 
solr.cmd changes, anyone on Windows who wants to test?

> Solr start scripts -- only log to console when running in foreground
> 
>
> Key: SOLR-8186
> URL: https://issues.apache.org/jira/browse/SOLR-8186
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 5.3.1
>Reporter: Shawn Heisey
>Assignee: Jan Høydahl
>  Labels: logging
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-8186.patch
>
>
> Currently the log4j.properties file logs to the console, and the start 
> scripts capture console output to a logfile that never rotates.  This can 
> fill up the disk, and when the logfile is removed, the user might be alarmed 
> by the way their memory statistics behave -- the "cached" memory might have a 
> sudden and very large drop, making it appear to a novice that the huge 
> logfile was hogging their memory.
> The logfile created by log4j is rotated when it gets big enough, so that 
> logfile is unlikely to fill up the disk.
> I propose that we copy the current log4j.properties file to something like 
> log4j-foreground.properties, remove CONSOLE logging in the log4j.properties 
> file, and have the start script use the alternate config file when running in 
> the foreground.  This way users will see the logging output when running in 
> the foreground, but it will be absent when running normally.






[jira] [Updated] (SOLR-8186) Solr start scripts -- only log to console when running in foreground

2016-09-19 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-8186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-8186:
--
Attachment: SOLR-8186.patch

Attached patch with the following:

* New option {{-Dsolr.log.muteconsole}}, which is passed when Solr is *not* 
started in foreground mode ({{-f}}), i.e. when running in the background. This 
will programmatically disable the {{CONSOLE}} appender (in SolrDispatchFilter.init), 
causing the {{solr-8983-console.log}} to only contain stdout/stderr logs (except 
for the first few lines before the logger is disabled).
* Removed some excess Jetty logging by setting default level for 
{{org.eclipse.jetty=WARN}} and {{org.eclipse.jetty.server=INFO}}
* Removed annoying log line {{o.e.j.s.SecurityHandler ... has uncovered http 
methods for path: /}} by extending web.xml
* Removed annoying log line {{o.a.s.c.CoreContainer Couldn't add files from 
/opt/solr/server/solr/lib to classpath:}} when libPath is the hardcoded {{lib}}
* Now printing full date also for CONSOLE log

I decided to do the dynamic disabling of the CONSOLE appender instead of having 
multiple {{log4j.properties}} files floating around, meaning that the muting 
will also work for custom logger configs, as long as the console appender is 
named {{CONSOLE}}. This is more flexible. A rough sketch of the idea is below.
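
For reference, the dynamic muting boils down to something like this (a rough 
log4j 1.2 sketch, not the exact patch; the class name is made up, and the 
property/appender names follow the description above):

{code}
import org.apache.log4j.Appender;
import org.apache.log4j.LogManager;
import org.apache.log4j.Logger;

// Sketch only: when -Dsolr.log.muteconsole is set, detach the appender named
// "CONSOLE" from the root logger so the redirected console log file stays small.
public class MuteConsoleAppender {
  public static void muteIfRequested() {
    if (System.getProperty("solr.log.muteconsole") != null) {
      Logger root = LogManager.getRootLogger();
      Appender console = root.getAppender("CONSOLE");
      if (console != null) {
        root.removeAppender(console);  // stdout/stderr still reach the console file
      }
    }
  }
}
{code}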

> Solr start scripts -- only log to console when running in foreground
> 
>
> Key: SOLR-8186
> URL: https://issues.apache.org/jira/browse/SOLR-8186
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 5.3.1
>Reporter: Shawn Heisey
>Assignee: Jan Høydahl
>  Labels: logging
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-8186.patch
>
>
> Currently the log4j.properties file logs to the console, and the start 
> scripts capture console output to a logfile that never rotates.  This can 
> fill up the disk, and when the logfile is removed, the user might be alarmed 
> by the way their memory statistics behave -- the "cached" memory might have a 
> sudden and very large drop, making it appear to a novice that the huge 
> logfile was hogging their memory.
> The logfile created by log4j is rotated when it gets big enough, so that 
> logfile is unlikely to fill up the disk.
> I propose that we copy the current log4j.properties file to something like 
> log4j-foreground.properties, remove CONSOLE logging in the log4j.properties 
> file, and have the start script use the alternate config file when running in 
> the foreground.  This way users will see the logging output when running in 
> the foreground, but it will be absent when running normally.






[jira] [Updated] (SOLR-8186) Solr start scripts -- only log to console when running in foreground

2016-09-19 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-8186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-8186:
--
Labels: logging  (was: )

> Solr start scripts -- only log to console when running in foreground
> 
>
> Key: SOLR-8186
> URL: https://issues.apache.org/jira/browse/SOLR-8186
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 5.3.1
>Reporter: Shawn Heisey
>Assignee: Jan Høydahl
>  Labels: logging
> Fix For: 6.3, master (7.0)
>
>
> Currently the log4j.properties file logs to the console, and the start 
> scripts capture console output to a logfile that never rotates.  This can 
> fill up the disk, and when the logfile is removed, the user might be alarmed 
> by the way their memory statistics behave -- the "cached" memory might have a 
> sudden and very large drop, making it appear to a novice that the huge 
> logfile was hogging their memory.
> The logfile created by log4j is rotated when it gets big enough, so that 
> logfile is unlikely to fill up the disk.
> I propose that we copy the current log4j.properties file to something like 
> log4j-foreground.properties, remove CONSOLE logging in the log4j.properties 
> file, and have the start script use the alternate config file when running in 
> the foreground.  This way users will see the logging output when running in 
> the foreground, but it will be absent when running normally.






[jira] [Assigned] (SOLR-8186) Solr start scripts -- only log to console when running in foreground

2016-09-19 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-8186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl reassigned SOLR-8186:
-

Assignee: Jan Høydahl

> Solr start scripts -- only log to console when running in foreground
> 
>
> Key: SOLR-8186
> URL: https://issues.apache.org/jira/browse/SOLR-8186
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 5.3.1
>Reporter: Shawn Heisey
>Assignee: Jan Høydahl
> Fix For: 6.3, master (7.0)
>
>
> Currently the log4j.properties file logs to the console, and the start 
> scripts capture console output to a logfile that never rotates.  This can 
> fill up the disk, and when the logfile is removed, the user might be alarmed 
> by the way their memory statistics behave -- the "cached" memory might have a 
> sudden and very large drop, making it appear to a novice that the huge 
> logfile was hogging their memory.
> The logfile created by log4j is rotated when it gets big enough, so that 
> logfile is unlikely to fill up the disk.
> I propose that we copy the current log4j.properties file to something like 
> log4j-foreground.properties, remove CONSOLE logging in the log4j.properties 
> file, and have the start script use the alternate config file when running in 
> the foreground.  This way users will see the logging output when running in 
> the foreground, but it will be absent when running normally.






[jira] [Updated] (SOLR-8186) Solr start scripts -- only log to console when running in foreground

2016-09-19 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-8186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-8186:
--
Fix Version/s: master (7.0)
   6.3

> Solr start scripts -- only log to console when running in foreground
> 
>
> Key: SOLR-8186
> URL: https://issues.apache.org/jira/browse/SOLR-8186
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 5.3.1
>Reporter: Shawn Heisey
>Assignee: Jan Høydahl
> Fix For: 6.3, master (7.0)
>
>
> Currently the log4j.properties file logs to the console, and the start 
> scripts capture console output to a logfile that never rotates.  This can 
> fill up the disk, and when the logfile is removed, the user might be alarmed 
> by the way their memory statistics behave -- the "cached" memory might have a 
> sudden and very large drop, making it appear to a novice that the huge 
> logfile was hogging their memory.
> The logfile created by log4j is rotated when it gets big enough, so that 
> logfile is unlikely to fill up the disk.
> I propose that we copy the current log4j.properties file to something like 
> log4j-foreground.properties, remove CONSOLE logging in the log4j.properties 
> file, and have the start script use the alternate config file when running in 
> the foreground.  This way users will see the logging output when running in 
> the foreground, but it will be absent when running normally.






RE: [JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+136) - Build # 17849 - Failure!

2016-09-19 Thread Uwe Schindler
Thanks Rory,

 

the build was already fixed a minute ago. FYI, the issue Mandy posted is not 
public! The main problem was that it was not clear from the release notes that 
you changed this, so I missed the change, especially as the JBS issue is not 
public.

 

Uwe

 

-

Uwe Schindler

H.-H.-Meier-Allee 63, D-28213 Bremen

  http://www.thetaphi.de

eMail: u...@thetaphi.de

 

From: Rory O'Donnell [mailto:rory.odonn...@oracle.com] 
Sent: Monday, September 19, 2016 7:15 PM
To: Uwe Schindler 
Cc: dev@lucene.apache.org; rory.odonn...@oracle.com
Subject: Re: [JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+136) - Build 
# 17849 - Failure!

 

Hi Uwe,

Mandy replied see below.

Rgds, Rory

 

The -release option was removed in jdk-9+135 [1]:
   https://bugs.openjdk.java.net/browse/JDK-8160851
 
Mandy
[1] http://hg.openjdk.java.net/jdk9/dev/langtools/rev/047d4d42b466
 

 

On 19/09/2016 15:15, Uwe Schindler wrote:

I received this:
 

Hi Uwe,
Most options of javac now use the GNU style,
so it's --release instead of -release.
 
cheers,
Rémi

 
It looks like options that were added recently to Java 9's command-line tools 
now use the more standard GNU-style options (double dash). I will update 
build.xml later (and reopen the issue about the "-release" switch) once I have 
tested everything. It looks like this only affects new options; I just hope that 
other command-line options like "-classpath" did not change in Java 9 (or that 
they have some backwards-compatibility layer implemented)!
 
Uwe
 
-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de  
 

-Original Message-
From: Uwe Schindler [mailto:u...@thetaphi.de]
Sent: Monday, September 19, 2016 3:17 PM
To: dev@lucene.apache.org  
Cc: rory.odonn...@oracle.com  
Subject: RE: [JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+136) -
Build # 17849 - Failure!
 
Hi,
 
I contacted the compiler group and Rory O'Donnell about this. Looks strange,
maybe option parsing broke in build 136.
 
Uwe
 
-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de  
 

-Original Message-
From: Policeman Jenkins Server [mailto:jenk...@thetaphi.de]
Sent: Monday, September 19, 2016 11:31 AM
To: dev@lucene.apache.org  
Subject: [JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+136) - Build # 17849 - Failure!
Importance: Low
 
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17849/
Java: 64bit/jdk-9-ea+136 -XX:-UseCompressedOops -XX:+UseG1GC
 
No tests ran.
 
Build Log:
[...truncated 81 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:763: The
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:707: The
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:59: The
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build.xml:50:
The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/common-
build.xml:501: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/common-
build.xml:1955: Compile failed; see the compiler error output for details.
 
Total time: 1 second
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files were found. Configuration error?
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
 

 
 
 
 





-- 
Rgds, Rory O'Donnell
Quality Engineering Manager
Oracle EMEA, Dublin, Ireland


[jira] [Updated] (LUCENE-7292) Change build system to use "--release 8" instead of "-source/-target" when invoking javac

2016-09-19 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-7292:
--
Summary: Change build system to use "--release 8" instead of 
"-source/-target" when invoking javac  (was: Change build system to use 
"-release 8" instead of "-source/-target" when invoking javac)

> Change build system to use "--release 8" instead of "-source/-target" when 
> invoking javac
> -
>
> Key: LUCENE-7292
> URL: https://issues.apache.org/jira/browse/LUCENE-7292
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: general/build
>Affects Versions: 6.0
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>  Labels: Java9
> Fix For: 6.1, 6.x, master (7.0), 6.3
>
> Attachments: LUCENE-7292.patch, LUCENE-7292.patch, LUCENE-7292.patch, 
> LUCENE-7292.patch
>
>
> Currently we pass {{-source 1.8 -target 1.8}} to javac and javadoc when 
> compiling our source code. We all know that this brings problems, because 
> cross-compiling does not really work. We create class files that are able to 
> run on Java 8, but when the code is compiled with Java 9, there is no guarantee 
> that it does not use Java 9 APIs that are not available in Java 8. Javac prints a 
> warning about this (it complains about the bootclasspath not pointing to JDK 
> 8 when used with source/target 1.8).
> Java 8 is the last version of Java that has this trap. From Java 9 on, 
> instead of passing source and target, the recommended way is to pass a single 
> {{-release 8}} parameter to javac (see http://openjdk.java.net/jeps/247). 
> This solves the bootclasspath problem, because it has all the previous Java 
> versions as "signatures" (like forbiddenapis), including deprecated APIs,... 
> everything included. You can find this in the {{$JAVA_HOME/lib/ct.sym}} file 
> (which is a ZIP file, so you can open it with a ZIP tool of your choice). In 
> Java 9+, this file also contains all old APIs from Java 6+.
> When invoking the compiler with {{-release 8}}, there is no risk of 
> accidentally using API from newer versions.
> The migration here is quite simple: As we require Java 8 already, there is 
> (theoretically) no need to pass source and target anymore. It is enough to 
> just pass {{-release 8}} if we detect Java 9 as compiling JVM. Nevertheless I 
> plan to do the following:
> - remove properties {{javac.source}} and {{javac.target}} from Ant build
> - add {{javac.release}} property and define it to be "8" (not "1.8", this is 
> new version styling that also works with Java 8+ already)
> - remove attributes in the {{}} calls
> - add a new Ant property {{javac.release.args}} that is dynamically evaluated 
> inside our compile macro: On Java 9 it evaluates to {{-release 
> $\{javac.release\}}}, for java 8 it uses {{-source $\{javac.release\} -target 
> $\{javac.release\}}} for backwards compatibility
> - pass this new arg to javac as {{}}
> By this we could theoretically remove the check from smoketester about the 
> compiling JDK (the MANIFEST check), because although compiled with Java 9, 
> the class files were actually compiled against the old Java API from ct.sym 
> file.
> I will also align the warnings to reenable {{-Xlint:options}}.






[jira] [Resolved] (LUCENE-7292) Change build system to use "-release 8" instead of "-source/-target" when invoking javac

2016-09-19 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler resolved LUCENE-7292.
---
   Resolution: Fixed
Fix Version/s: 6.3

> Change build system to use "-release 8" instead of "-source/-target" when 
> invoking javac
> 
>
> Key: LUCENE-7292
> URL: https://issues.apache.org/jira/browse/LUCENE-7292
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: general/build
>Affects Versions: 6.0
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>  Labels: Java9
> Fix For: 6.x, master (7.0), 6.3, 6.1
>
> Attachments: LUCENE-7292.patch, LUCENE-7292.patch, LUCENE-7292.patch, 
> LUCENE-7292.patch
>
>
> Currently we pass {{-source 1.8 -target 1.8}} to javac and javadoc when 
> compiling our source code. We all know that this brings problems, because 
> cross-compiling does not really work. We create class files that are able to 
> run on Java 8, but when the code is compiled with Java 9, there is no guarantee 
> that it does not use Java 9 APIs that are not available in Java 8. Javac prints a 
> warning about this (it complains about the bootclasspath not pointing to JDK 
> 8 when used with source/target 1.8).
> Java 8 is the last version of Java that has this trap. From Java 9 on, 
> instead of passing source and target, the recommended way is to pass a single 
> {{-release 8}} parameter to javac (see http://openjdk.java.net/jeps/247). 
> This solves the bootclasspath problem, because it has all the previous Java 
> versions as "signatures" (like forbiddenapis), including deprecated APIs,... 
> everything included. You can find this in the {{$JAVA_HOME/lib/ct.sym}} file 
> (which is a ZIP file, so you can open it with a ZIP tool of your choice). In 
> Java 9+, this file also contains all old APIs from Java 6+.
> When invoking the compiler with {{-release 8}}, there is no risk of 
> accidentally using API from newer versions.
> The migration here is quite simple: As we require Java 8 already, there is 
> (theoretically) no need to pass source and target anymore. It is enough to 
> just pass {{-release 8}} if we detect Java 9 as compiling JVM. Nevertheless I 
> plan to do the following:
> - remove properties {{javac.source}} and {{javac.target}} from Ant build
> - add {{javac.release}} property and define it to be "8" (not "1.8", this is 
> new version styling that also works with Java 8+ already)
> - remove attributes in the {{}} calls
> - add a new Ant property {{javac.release.args}} that is dynamically evaluated 
> inside our compile macro: On Java 9 it evaluates to {{-release 
> $\{javac.release\}}}, for java 8 it uses {{-source $\{javac.release\} -target 
> $\{javac.release\}}} for backwards compatibility
> - pass this new arg to javac as {{}}
> By this we could theoretically remove the check from smoketester about the 
> compiling JDK (the MANIFEST check), because although compiled with Java 9, 
> the class files were actually compiled against the old Java API from ct.sym 
> file.
> I will also align the warnings to reenable {{-Xlint:options}}.






[jira] [Assigned] (SOLR-9536) OldBackupDirectory timestamp init bug causes NPEs from SnapShooter?

2016-09-19 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man reassigned SOLR-9536:
--

Assignee: Varun Thacker

Varun: can you take a look at this and sanity check my analysis here?

> OldBackupDirectory timestamp init bug causes NPEs from SnapShooter?
> ---
>
> Key: SOLR-9536
> URL: https://issues.apache.org/jira/browse/SOLR-9536
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Varun Thacker
>
> On IRC, a 6.2.0 user reported getting an NPE from 
> SnapShooter.deleteOldBackups L244, with the only other frame of the 
> stacktrace being {{lambda$createSnapAsync$1}} L196 (it was a screenshot, not 
> text easily cut/paste here)
> The problematic L244 is...
> {code}
>   if (obd.getTimestamp().isPresent()) {
> {code}
> ...and I believe the root of the issue is that while {{getTimestamp()}} is 
> declared to return an {{Optional}}, there is no guarantee that the 
> {{Optional}} instance is itself non-null...
> {code}
>   private Optional<Date> timestamp;
>   public OldBackupDirectory(URI basePath, String dirName) {
> this.dirName = Preconditions.checkNotNull(dirName);
> this.basePath = Preconditions.checkNotNull(basePath);
> Matcher m = dirNamePattern.matcher(dirName);
> if (m.find()) {
>   try {
> this.timestamp = Optional.of(new 
> SimpleDateFormat(SnapShooter.DATE_FMT, Locale.ROOT).parse(m.group(1)));
>   } catch (ParseException e) {
> this.timestamp = Optional.empty();
>   }
> }
>   }
> {code}
> Although I'm not 100% certain, I believe the way the user was triggering 
> this bug was by configuring classic replication with something 
> like {{commit}} -- so that usage may be 
> necessary to trigger the exception?
> Alternatively: perhaps this exception gets logged the *first* time anyone 
> tries to use any code that involves SnapShooter -- and after that a timestamp 
> file *is* created and the problem never manifests itself again?






[jira] [Commented] (SOLR-9536) OldBackupDirectory timestamp init bug causes NPEs from SnapShooter?

2016-09-19 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504829#comment-15504829
 ] 

Hoss Man commented on SOLR-9536:


pretty sure the code in SOLR-7374 introduced this NPE

> OldBackupDirectory timestamp init bug causes NPEs from SnapShooter?
> ---
>
> Key: SOLR-9536
> URL: https://issues.apache.org/jira/browse/SOLR-9536
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>
> On IRC, a 6.2.0 user reported getting an NPE from 
> SnapShooter.deleteOldBackups L244, with the only other frame of the 
> stacktrace being {{lambda$createSnapAsync$1}} L196 (it was a screenshot, not 
> text easily cut/paste here)
> The problematic L244 is...
> {code}
>   if (obd.getTimestamp().isPresent()) {
> {code}
> ...and I believe the root of the issue is that while {{getTimestamp()}} is 
> declared to return an {{Optional}}, there is no guarantee that the 
> {{Optional}} instance is itself non-null...
> {code}
>   private Optional<Date> timestamp;
>   public OldBackupDirectory(URI basePath, String dirName) {
> this.dirName = Preconditions.checkNotNull(dirName);
> this.basePath = Preconditions.checkNotNull(basePath);
> Matcher m = dirNamePattern.matcher(dirName);
> if (m.find()) {
>   try {
> this.timestamp = Optional.of(new 
> SimpleDateFormat(SnapShooter.DATE_FMT, Locale.ROOT).parse(m.group(1)));
>   } catch (ParseException e) {
> this.timestamp = Optional.empty();
>   }
> }
>   }
> {code}
> Although I'm not 100% certain, I believe the way the user was triggering 
> this bug was by configuring classic replication with something 
> like {{commit}} -- so that usage may be 
> necessary to trigger the exception?
> Alternatively: perhaps this exception gets logged the *first* time anyone 
> tries to use any code that involves SnapShooter -- and after that a timestamp 
> file *is* created and the problem never manifests itself again?






[jira] [Commented] (LUCENE-7292) Change build system to use "-release 8" instead of "-source/-target" when invoking javac

2016-09-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504826#comment-15504826
 ] 

ASF subversion and git services commented on LUCENE-7292:
-

Commit b67a062f9db6372cf654a4366233e953c89f2722 in lucene-solr's branch 
refs/heads/branch_6x from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b67a062 ]

LUCENE-7292: Fix build to use "--release 8" instead of "-release 8" on Java 9 
(this changed with recent EA build b135)


> Change build system to use "-release 8" instead of "-source/-target" when 
> invoking javac
> 
>
> Key: LUCENE-7292
> URL: https://issues.apache.org/jira/browse/LUCENE-7292
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: general/build
>Affects Versions: 6.0
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>  Labels: Java9
> Fix For: 6.1, 6.x, master (7.0)
>
> Attachments: LUCENE-7292.patch, LUCENE-7292.patch, LUCENE-7292.patch, 
> LUCENE-7292.patch
>
>
> Currently we pass {{-source 1.8 -target 1.8}} to javac and javadoc when 
> compiling our source code. We all know that this brings problems, because 
> cross-compiling does not really work. We create class files that are able to 
> run on Java 8, but when the code is compiled with Java 9, there is no guarantee 
> that it does not use Java 9 APIs that are not available in Java 8. Javac prints a 
> warning about this (it complains about the bootclasspath not pointing to JDK 
> 8 when used with source/target 1.8).
> Java 8 is the last version of Java that has this trap. From Java 9 on, 
> instead of passing source and target, the recommended way is to pass a single 
> {{-release 8}} parameter to javac (see http://openjdk.java.net/jeps/247). 
> This solves the bootclasspath problem, because it has all the previous Java 
> versions as "signatures" (like forbiddenapis), including deprecated APIs,... 
> everything included. You can find this in the {{$JAVA_HOME/lib/ct.sym}} file 
> (which is a ZIP file, so you can open it with a ZIP tool of your choice). In 
> Java 9+, this file also contains all old APIs from Java 6+.
> When invoking the compiler with {{-release 8}}, there is no risk of 
> accidentally using API from newer versions.
> The migration here is quite simple: As we require Java 8 already, there is 
> (theoretically) no need to pass source and target anymore. It is enough to 
> just pass {{-release 8}} if we detect Java 9 as compiling JVM. Nevertheless I 
> plan to do the following:
> - remove properties {{javac.source}} and {{javac.target}} from Ant build
> - add {{javac.release}} property and define it to be "8" (not "1.8", this is 
> new version styling that also works with Java 8+ already)
> - remove attributes in the {{}} calls
> - add a new Ant property {{javac.release.args}} that is dynamically evaluated 
> inside our compile macro: On Java 9 it evaluates to {{-release 
> $\{javac.release\}}}, for java 8 it uses {{-source $\{javac.release\} -target 
> $\{javac.release\}}} for backwards compatibility
> - pass this new arg to javac as {{}}
> By this we could theoretically remove the check from smoketester about the 
> compiling JDK (the MANIFEST check), because although compiled with Java 9, 
> the class files were actually compiled against the old Java API from ct.sym 
> file.
> I will also align the warnings to reenable {{-Xlint:options}}.






[jira] [Created] (SOLR-9536) OldBackupDirectory timestamp init bug causes NPEs from SnapShooter?

2016-09-19 Thread Hoss Man (JIRA)
Hoss Man created SOLR-9536:
--

 Summary: OldBackupDirectory timestamp init bug causes NPEs from 
SnapShooter?
 Key: SOLR-9536
 URL: https://issues.apache.org/jira/browse/SOLR-9536
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Hoss Man


On IRC, a 6.2.0 user reported getting an NPE from SnapShooter.deleteOldBackups 
L244, with the only other frame of the stacktrace being 
{{lambda$createSnapAsync$1}} L196 (it was a screenshot, not text easily 
cut/paste here)

The problematic L244 is...
{code}
  if (obd.getTimestamp().isPresent()) {
{code}
...and I believe the root of the issue is that while {{getTimestamp()}} is 
declared to return an {{Optional}}, there is no guarantee that the 
{{Optional}} instance is itself non-null...

{code}
  private Optional<Date> timestamp;

  public OldBackupDirectory(URI basePath, String dirName) {
this.dirName = Preconditions.checkNotNull(dirName);
this.basePath = Preconditions.checkNotNull(basePath);
Matcher m = dirNamePattern.matcher(dirName);
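// note: if m.find() below returns false, this.timestamp is never assigned and stays null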
if (m.find()) {
  try {
this.timestamp = Optional.of(new SimpleDateFormat(SnapShooter.DATE_FMT, 
Locale.ROOT).parse(m.group(1)));
  } catch (ParseException e) {
this.timestamp = Optional.empty();
  }
}
  }
{code}

Although I'm not 100% certain, I believe the user was triggering this bug by 
configuring classic replication with something like {{commit}} -- so that usage 
may be necessary to trigger the exception?

Alternatively: perhaps this exception gets logged the *first* time anyone tries 
to use any code that involves SnapShooter -- and after that a timestamp file 
*is* created and the problem never manifests itself again?
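
A stand-alone sketch of the same trap (class name and directory-name pattern are made up; only the shape mirrors the constructor quoted above):
{code}
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.Optional;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical illustration of the null-Optional trap: initializing the field to
// Optional.empty() (instead of leaving it null) is the kind of guard that avoids
// the reported NPE when the directory name does not match the expected pattern.
class TimestampParseExample {
  private static final Pattern DIR_NAME_PATTERN = Pattern.compile("snapshot[.](.+)");

  private Optional<Date> timestamp = Optional.empty();

  TimestampParseExample(String dirName, String dateFormat) {
    Matcher m = DIR_NAME_PATTERN.matcher(dirName);
    if (m.find()) {
      try {
        timestamp = Optional.of(new SimpleDateFormat(dateFormat, Locale.ROOT).parse(m.group(1)));
      } catch (ParseException e) {
        timestamp = Optional.empty();
      }
    }
    // Without the field initializer above, a non-matching dirName would leave
    // timestamp null, and a later getTimestamp().isPresent() call would NPE.
  }

  Optional<Date> getTimestamp() {
    return timestamp;
  }
}
{code}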



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7292) Change build system to use "-release 8" instead of "-source/-target" when invoking javac

2016-09-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504824#comment-15504824
 ] 

ASF subversion and git services commented on LUCENE-7292:
-

Commit 3712bf58196cd0bd56fad213547dee12029e7cbf in lucene-solr's branch 
refs/heads/master from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3712bf5 ]

LUCENE-7292: Fix build to use "--release 8" instead of "-release 8" on Java 9 
(this changed with recent EA build b135)


> Change build system to use "-release 8" instead of "-source/-target" when 
> invoking javac
> 
>
> Key: LUCENE-7292
> URL: https://issues.apache.org/jira/browse/LUCENE-7292
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: general/build
>Affects Versions: 6.0
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>  Labels: Java9
> Fix For: 6.1, 6.x, master (7.0)
>
> Attachments: LUCENE-7292.patch, LUCENE-7292.patch, LUCENE-7292.patch, 
> LUCENE-7292.patch
>
>
> Currently we pass {{-source 1.8 -target 1.8}} to javac and javadoc when 
> compiling our source code. We all know that this brings problems, because 
> cross-compiling does not really work. We create class files that are able to 
> run on Java 8, but when the code is compiled with Java 9, there is no guarantee 
> that it avoids Java 9 APIs that are not available in Java 8. Javac prints a 
> warning about this (it complains about the bootclasspath not pointing to JDK 
> 8 when used with source/target 1.8).
> Java 8 is the last version of Java that has this trap. From Java 9 on, 
> instead of passing source and target, the recommended way is to pass a single 
> {{-release 8}} parameter to javac (see http://openjdk.java.net/jeps/247). 
> This solves the bootclasspath problem, because it ships the signatures of all 
> previous Java versions (much like forbiddenapis), including deprecated APIs -- 
> everything is included. You can find this in the {{$JAVA_HOME/lib/ct.sym}} file 
> (which is a ZIP file, so you can open it with a ZIP tool of your choice). In 
> Java 9+, this file also contains all old APIs from Java 6+.
> When invoking the compiler with {{-release 8}}, there is no risk of 
> accidentally using API from newer versions.
> The migration here is quite simple: As we require Java 8 already, there is 
> (theoretically) no need to pass source and target anymore. It is enough to 
> just pass {{-release 8}} if we detect Java 9 as compiling JVM. Nevertheless I 
> plan to do the following:
> - remove properties {{javac.source}} and {{javac.target}} from Ant build
> - add {{javac.release}} property and define it to be "8" (not "1.8", this is 
> new version styling that also works with Java 8+ already)
> - remove the corresponding source/target attributes from the {{javac}} task calls 
> - add a new Ant property {{javac.release.args}} that is dynamically evaluated 
> inside our compile macro: On Java 9 it evaluates to {{-release 
> $\{javac.release\}}}, for java 8 it uses {{-source $\{javac.release\} -target 
> $\{javac.release\}}} for backwards compatibility
> - pass this new arg to javac as a nested {{compilerarg}} element
> By this we could theoretically remove the check from smoketester about the 
> compiling JDK (the MANIFEST check), because although compiled with Java 9, 
> the class files were actually compiled against the old Java API from ct.sym 
> file.
> I will also align the warnings to reenable {{-Xlint:options}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (LUCENE-7292) Change build system to use "-release 8" instead of "-source/-target" when invoking javac

2016-09-19 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler reopened LUCENE-7292:
---

Build 135 of Java 9 changed to use "long GNU style options", so {{\-release}} 
changed to {{--release}}. Reopening to fix build.xml for Java 9. This also 
requires updating Jenkins to at least this build.

See 
http://mail.openjdk.java.net/pipermail/compiler-dev/2016-September/010358.html
And: 
http://mail.openjdk.java.net/pipermail/compiler-dev/2016-September/010357.html
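
For readers following along, a plain-Java sketch of the version-dependent argument selection (this stands in for the Ant logic in build.xml; it is not the actual build change):
{code}
import java.util.Arrays;
import java.util.List;

// Sketch only: choose GNU-style "--release" when the compiling JVM is Java 9
// (EA b135 or later), and fall back to -source/-target on Java 8.
class ReleaseArgsSketch {
  static List<String> javacReleaseArgs(int compilerJavaVersion, String release) {
    if (compilerJavaVersion >= 9) {
      return Arrays.asList("--release", release);
    }
    return Arrays.asList("-source", release, "-target", release);
  }

  public static void main(String[] args) {
    System.out.println(javacReleaseArgs(9, "8"));  // [--release, 8]
    System.out.println(javacReleaseArgs(8, "8"));  // [-source, 8, -target, 8]
  }
}
{code}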

> Change build system to use "-release 8" instead of "-source/-target" when 
> invoking javac
> 
>
> Key: LUCENE-7292
> URL: https://issues.apache.org/jira/browse/LUCENE-7292
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: general/build
>Affects Versions: 6.0
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>  Labels: Java9
> Fix For: 6.1, 6.x, master (7.0)
>
> Attachments: LUCENE-7292.patch, LUCENE-7292.patch, LUCENE-7292.patch, 
> LUCENE-7292.patch
>
>
> Currently we pass {{-source 1.8 -target 1.8}} to javac and javadoc when 
> compiling our source code. We all know that this brings problems, because 
> cross-compiling does not really work. We create class files that are able to 
> run on Java 8, but when the code is compiled with Java 9, there is no guarantee 
> that it avoids Java 9 APIs that are not available in Java 8. Javac prints a 
> warning about this (it complains about the bootclasspath not pointing to JDK 
> 8 when used with source/target 1.8).
> Java 8 is the last version of Java that has this trap. From Java 9 on, 
> instead of passing source and target, the recommended way is to pass a single 
> {{-release 8}} parameter to javac (see http://openjdk.java.net/jeps/247). 
> This solves the bootclasspath problem, because it ships the signatures of all 
> previous Java versions (much like forbiddenapis), including deprecated APIs -- 
> everything is included. You can find this in the {{$JAVA_HOME/lib/ct.sym}} file 
> (which is a ZIP file, so you can open it with a ZIP tool of your choice). In 
> Java 9+, this file also contains all old APIs from Java 6+.
> When invoking the compiler with {{-release 8}}, there is no risk of 
> accidentally using API from newer versions.
> The migration here is quite simple: As we require Java 8 already, there is 
> (theoretically) no need to pass source and target anymore. It is enough to 
> just pass {{-release 8}} if we detect Java 9 as compiling JVM. Nevertheless I 
> plan to do the following:
> - remove properties {{javac.source}} and {{javac.target}} from Ant build
> - add {{javac.release}} property and define it to be "8" (not "1.8", this is 
> new version styling that also works with Java 8+ already)
> - remove the corresponding source/target attributes from the {{javac}} task calls 
> - add a new Ant property {{javac.release.args}} that is dynamically evaluated 
> inside our compile macro: On Java 9 it evaluates to {{-release 
> $\{javac.release\}}}, for java 8 it uses {{-source $\{javac.release\} -target 
> $\{javac.release\}}} for backwards compatibility
> - pass this new arg to javac as a nested {{compilerarg}} element
> By this we could theoretically remove the check from smoketester about the 
> compiling JDK (the MANIFEST check), because although compiled with Java 9, 
> the class files were actually compiled against the old Java API from ct.sym 
> file.
> I will also align the warnings to reenable {{-Xlint:options}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-7398) Nested Span Queries are buggy

2016-09-19 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504686#comment-15504686
 ] 

Paul Elschot edited comment on LUCENE-7398 at 9/19/16 9:22 PM:
---

The idea is to allow full backward compatibility, as well as more matching 
methods:

UNORDERED_LAZY is the current unordered,
UNORDERED_STARTPOS is even simpler, it only uses span start positions, so it 
should be complete.
ORDERED_LAZY is the current ordered,
ORDERED_LOOKAHEAD is in the patch of 14 August 2016,
ORDERED_STARTPOS also only uses start positions, so it should be complete.

The complete ORDERED and UNORDERED cases that use start and end positions and 
need backtracking are left for later.

Comments?


was (Author: paul.elsc...@xs4all.nl):
The idea is to allow full backward compatibility, as well as more matching 
methods:

UNORDERED_LAZY is the current unordered,
UNORDERED_STARTPOS is even simpler, it only uses span start positions, so it 
should be complete.
ORDERED_LAZY is the current ordered,
ORDERED_LOOKAHEAD is in the patch of 14 August 2016,
ORDERED_STARTPOS is also only uses start positions, so it should be complete.

The complete ORDERED and UNORDERED cases that use start and end positions and 
need backtracking are left for later.

Comments?

> Nested Span Queries are buggy
> -
>
> Key: LUCENE-7398
> URL: https://issues.apache.org/jira/browse/LUCENE-7398
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 5.5, 6.x
>Reporter: Christoph Goller
>Assignee: Alan Woodward
>Priority: Critical
> Attachments: LUCENE-7398-20160814.patch, LUCENE-7398.patch, 
> LUCENE-7398.patch, TestSpanCollection.java
>
>
> Example for a nested SpanQuery that is not working:
> Document: Human Genome Organization , HUGO , is trying to coordinate gene 
> mapping research worldwide.
> Query: spanNear([body:coordinate, spanOr([spanNear([body:gene, body:mapping], 
> 0, true), body:gene]), body:research], 0, true)
> The query should match "coordinate gene mapping research" as well as 
> "coordinate gene research". It does not match  "coordinate gene mapping 
> research" with Lucene 5.5 or 6.1, it did however match with Lucene 4.10.4. It 
> probably stopped working with the changes on SpanQueries in 5.3. I will 
> attach a unit test that shows the problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7432) TestIndexWriterOnError.testCheckpoint fails on IBM J9

2016-09-19 Thread Kevin Langman (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504726#comment-15504726
 ] 

Kevin Langman commented on LUCENE-7432:
---

I just completed a test using "ant test -Dtests.nightly=true -Dtests.slow=true" 
and the IBM Java 8.0.3.12 release candidate. It seems to have passed all tests 
with the exception of two tests where I ran out of disk space. I tried those 
two tests with a different JVM and got the same failure.

My test ended with this message: "There were test failures: 433 suites (1 
ignored), 3563 tests, 2 errors, 44 ignored (38 assumptions) [seed: 
61D5ABA9404A037E]"

At this point I believe that IBM Java 8.0.3.12 shows no issues running 
Lucene/Solr.

IBM Java 8.0.3.12 is scheduled to be released at the end of this month or 
early October. Once it's released I would welcome any feedback you might have 
on using IBM's Java with Lucene/Solr. Thanks!

> TestIndexWriterOnError.testCheckpoint fails on IBM J9
> -
>
> Key: LUCENE-7432
> URL: https://issues.apache.org/jira/browse/LUCENE-7432
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>  Labels: IBM-J9
>
> Not sure if this is a J9 issue or a Lucene issue, but using this version of 
> J9:
> {noformat}
> 09:26 $ java -version
> java version "1.8.0"
> Java(TM) SE Runtime Environment (build pxa6480sr3fp10-20160720_02(SR3fp10))
> IBM J9 VM (build 2.8, JRE 1.8.0 Linux amd64-64 Compressed References 
> 20160719_312156 (JIT enabled, AOT enabled)
> J9VM - R28_Java8_SR3_20160719_1144_B312156
> JIT  - tr.r14.java_20160629_120284.01
> GC   - R28_Java8_SR3_20160719_1144_B312156_CMPRSS
> J9CL - 20160719_312156)
> JCL - 20160719_01 based on Oracle jdk8u101-b13
> {noformat}
> This test failure seems to reproduce:
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestIndexWriterOnVMError -Dtests.method=testCheckpoint 
> -Dtests.seed=FAB0DC147AFDBF4E -Dtests.nightly=true -Dtests.slow=true 
> -Dtests.locale=kn -Dtests.timezone=Australia/South -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] ERROR196s | TestIndexWriterOnVMError.testCheckpoint <<<
>[junit4]> Throwable #1: java.lang.RuntimeException: 
> MockDirectoryWrapper: cannot close: there are still 9 open files: 
> {_2_Asserting_0.pos=1, _2_Asserting_0.dvd=1, _2.fdt=1, _2_Asserting_0.doc=1, 
> _2_Asserting_0.tim=1, _2.nvd=1, _2.tvd=1, _3.cfs=1, _2.dim=1}
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([FAB0DC147AFDBF4E:FBA18A7C5B16548D]:0)
>[junit4]>  at 
> org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:841)
>[junit4]>  at 
> org.apache.lucene.index.TestIndexWriterOnVMError.doTest(TestIndexWriterOnVMError.java:89)
>[junit4]>  at 
> org.apache.lucene.index.TestIndexWriterOnVMError.testCheckpoint(TestIndexWriterOnVMError.java:280)
>[junit4]>  at java.lang.Thread.run(Thread.java:785)
>[junit4]> Caused by: java.lang.RuntimeException: unclosed IndexInput: 
> _2.dim
>[junit4]>  at 
> org.apache.lucene.store.MockDirectoryWrapper.addFileHandle(MockDirectoryWrapper.java:732)
>[junit4]>  at 
> org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:776)
>[junit4]>  at 
> org.apache.lucene.codecs.lucene60.Lucene60PointsReader.(Lucene60PointsReader.java:85)
>[junit4]>  at 
> org.apache.lucene.codecs.lucene60.Lucene60PointsFormat.fieldsReader(Lucene60PointsFormat.java:104)
>[junit4]>  at 
> org.apache.lucene.codecs.asserting.AssertingPointsFormat.fieldsReader(AssertingPointsFormat.java:66)
>[junit4]>  at 
> org.apache.lucene.index.SegmentCoreReaders.(SegmentCoreReaders.java:128)
>[junit4]>  at 
> org.apache.lucene.index.SegmentReader.(SegmentReader.java:74)
>[junit4]>  at 
> org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:145)
>[junit4]>  at 
> org.apache.lucene.index.ReadersAndUpdates.getReadOnlyClone(ReadersAndUpdates.java:197)
>[junit4]>  at 
> org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:103)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:460)
>[junit4]>  at 
> org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:103)
>[junit4]>  at 
> org.apache.lucene.index.TestIndexWriterOnVMError.doTest(TestIndexWriterOnVMError.java:175)
>[junit4]>  ... 37 more
>[junit4]   2> NOTE: leaving temporary files on disk at: 
> /l/trunk/lucene/build/core/test/J0/temp/lucene.index.TestIndexWriterOnVMError_FAB0DC147AFDBF4E-001
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene62), 
> sim=ClassicSimilarity, 

[jira] [Commented] (LUCENE-7398) Nested Span Queries are buggy

2016-09-19 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504686#comment-15504686
 ] 

Paul Elschot commented on LUCENE-7398:
--

The idea is to allow full backward compatibility, as well as more matching 
methods:

UNORDERED_LAZY is the current unordered,
UNORDERED_STARTPOS is even simpler, it only uses span start positions, so it 
should be complete.
ORDERED_LAZY is the current ordered,
ORDERED_LOOKAHEAD is in the patch of 14 August 2016,
ORDERED_STARTPOS also only uses start positions, so it should be complete.

The complete ORDERED and UNORDERED cases that use start and end positions and 
need backtracking are left for later.

Comments?

> Nested Span Queries are buggy
> -
>
> Key: LUCENE-7398
> URL: https://issues.apache.org/jira/browse/LUCENE-7398
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 5.5, 6.x
>Reporter: Christoph Goller
>Assignee: Alan Woodward
>Priority: Critical
> Attachments: LUCENE-7398-20160814.patch, LUCENE-7398.patch, 
> LUCENE-7398.patch, TestSpanCollection.java
>
>
> Example for a nested SpanQuery that is not working:
> Document: Human Genome Organization , HUGO , is trying to coordinate gene 
> mapping research worldwide.
> Query: spanNear([body:coordinate, spanOr([spanNear([body:gene, body:mapping], 
> 0, true), body:gene]), body:research], 0, true)
> The query should match "coordinate gene mapping research" as well as 
> "coordinate gene research". It does not match  "coordinate gene mapping 
> research" with Lucene 5.5 or 6.1, it did however match with Lucene 4.10.4. It 
> probably stopped working with the changes on SpanQueries in 5.3. I will 
> attach a unit test that shows the problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-7398) Nested Span Queries are buggy

2016-09-19 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504663#comment-15504663
 ] 

Paul Elschot edited comment on LUCENE-7398 at 9/19/16 8:59 PM:
---

I have started working on a SpanNearQuery that contains this:
{code}
  /** Specifies how clauses are to occur near each other in matching documents. 
*/
  public static enum MatchNear {

/** Use this method for clauses that match when they are not ordered,
 * and the slop should be determined between the end and start positions of 
all clauses.
 * When the subspans vary in length, some matches may not be found.
 */
UNORDERED_LAZY,

/** Use this method for clauses that match when they are not ordered,
 * and the slop should be determined between the start positions of the 
first and last matching clauses.
 */
UNORDERED_STARTPOS,

/** Use this method for clauses that can match when they are ordered and 
span collection is needed,
 * and the slop should be determined between the end and start positions of 
the clauses.
 * When the subspans vary in length, some matches may not be found.
 */
ORDERED_LAZY,

/** Use this method for clauses that can match when they are ordered and 
span collection is needed,
 * and the slop should be determined between the end and start positions of 
the clauses.
 * When the subspans vary in length, some matches may not be found,
 * however this method finds more matches than {@link #ORDERED_LAZY}.
 */
ORDERED_LOOKAHEAD,

/** Use this method for clauses that match when they are ordered,
 * and the slop should be determined between the start positions of the 
first and last matching clauses.
 */
ORDERED_STARTPOS
  }

{code}



was (Author: paul.elsc...@xs4all.nl):
I have started working on a SpanNearQuery that contains this:
{code}
  /** Specifies how clauses are to occur near each other in matching documents. 
*/
  public static enum MatchNear {

/** Use this method for clauses that match when they are not ordered,
 * and the slop should be determined between the end and start positions of 
all clauses.
 * When the subspans vary in length, some matches may not be found.
 */
UNORDERED,

/** Use this method for clauses that match when they are not ordered,
 * and the slop should be determined between the start positions of the 
first and last matching clauses.
 */
UNORDERED_STARTPOS,

/** Use this method for clauses that can match when they are ordered and 
span collection is needed,
 * and the slop should be determined between the end and start positions of 
the clauses.
 * When the subspans vary in length, some matches may not be found.
 */
ORDERED_LAZY,

/** Use this method for clauses that can match when they are ordered and 
span collection is needed,
 * and the slop should be determined between the end and start positions of 
the clauses.
 * When the subspans vary in length, some matches may not be found,
 * however this method finds more matches than {@link #ORDERED_LAZY}.
 */
ORDERED_LOOKAHEAD,

/** Use this method for clauses that match when they are ordered,
 * and the slop should be determined between the start positions of the 
first and last matching clauses.
 */
ORDERED_STARTPOS
  }

{code}


> Nested Span Queries are buggy
> -
>
> Key: LUCENE-7398
> URL: https://issues.apache.org/jira/browse/LUCENE-7398
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 5.5, 6.x
>Reporter: Christoph Goller
>Assignee: Alan Woodward
>Priority: Critical
> Attachments: LUCENE-7398-20160814.patch, LUCENE-7398.patch, 
> LUCENE-7398.patch, TestSpanCollection.java
>
>
> Example for a nested SpanQuery that is not working:
> Document: Human Genome Organization , HUGO , is trying to coordinate gene 
> mapping research worldwide.
> Query: spanNear([body:coordinate, spanOr([spanNear([body:gene, body:mapping], 
> 0, true), body:gene]), body:research], 0, true)
> The query should match "coordinate gene mapping research" as well as 
> "coordinate gene research". It does not match  "coordinate gene mapping 
> research" with Lucene 5.5 or 6.1, it did however match with Lucene 4.10.4. It 
> probably stopped working with the changes on SpanQueries in 5.3. I will 
> attach a unit test that shows the problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7398) Nested Span Queries are buggy

2016-09-19 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504663#comment-15504663
 ] 

Paul Elschot commented on LUCENE-7398:
--

I have started working on a SpanNearQuery that contains this:
{code}
  /** Specifies how clauses are to occur near each other in matching documents. 
*/
  public static enum MatchNear {

/** Use this method for clauses that match when they are not ordered,
 * and the slop should be determined between the end and start positions of 
all clauses.
 * When the subspans vary in length, some matches may not be found.
 */
UNORDERED,

/** Use this method for clauses that match when they are not ordered,
 * and the slop should be determined between the start positions of the 
first and last matching clauses.
 */
UNORDERED_STARTPOS,

/** Use this method for clauses that can match when they are ordered and 
span collection is needed,
 * and the slop should be determined between the end and start positions of 
the clauses.
 * When the subspans vary in length, some matches may not be found.
 */
ORDERED_LAZY,

/** Use this method for clauses that can match when they are ordered and 
span collection is needed,
 * and the slop should be determined between the end and start positions of 
the clauses.
 * When the subspans vary in length, some matches may not be found,
 * however this method finds more matches than {@link #ORDERED_LAZY}.
 */
ORDERED_LOOKAHEAD,

/** Use this method for clauses that match when they are ordered,
 * and the slop should be determined between the start positions of the 
first and last matching clauses.
 */
ORDERED_STARTPOS
  }

{code}


> Nested Span Queries are buggy
> -
>
> Key: LUCENE-7398
> URL: https://issues.apache.org/jira/browse/LUCENE-7398
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 5.5, 6.x
>Reporter: Christoph Goller
>Assignee: Alan Woodward
>Priority: Critical
> Attachments: LUCENE-7398-20160814.patch, LUCENE-7398.patch, 
> LUCENE-7398.patch, TestSpanCollection.java
>
>
> Example for a nested SpanQuery that is not working:
> Document: Human Genome Organization , HUGO , is trying to coordinate gene 
> mapping research worldwide.
> Query: spanNear([body:coordinate, spanOr([spanNear([body:gene, body:mapping], 
> 0, true), body:gene]), body:research], 0, true)
> The query should match "coordinate gene mapping research" as well as 
> "coordinate gene research". It does not match  "coordinate gene mapping 
> research" with Lucene 5.5 or 6.1, it did however match with Lucene 4.10.4. It 
> probably stopped working with the changes on SpanQueries in 5.3. I will 
> attach a unit test that shows the problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3551 - Unstable!

2016-09-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3551/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Expected 2 of 3 replicas to be active but only found 1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica1","base_url":"http://127.0.0.1:58890","node_name":"127.0.0.1:58890_","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf//clusterstate.json/35)={   
"replicationFactor":"3",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node1":{   "state":"down",   
"base_url":"http://127.0.0.1:58885;,   
"core":"c8n_1x3_lf_shard1_replica3",   "node_name":"127.0.0.1:58885_"}, 
"core_node2":{   "core":"c8n_1x3_lf_shard1_replica2",   
"base_url":"http://127.0.0.1:58874;,   "node_name":"127.0.0.1:58874_",  
 "state":"down"}, "core_node3":{   
"core":"c8n_1x3_lf_shard1_replica1",   
"base_url":"http://127.0.0.1:58890;,   "node_name":"127.0.0.1:58890_",  
 "state":"active",   "leader":"true",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected 2 of 3 replicas to be active but only found 
1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica1","base_url":"http://127.0.0.1:58890","node_name":"127.0.0.1:58890_","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf//clusterstate.json/35)={
  "replicationFactor":"3",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node1":{
  "state":"down",
  "base_url":"http://127.0.0.1:58885;,
  "core":"c8n_1x3_lf_shard1_replica3",
  "node_name":"127.0.0.1:58885_"},
"core_node2":{
  "core":"c8n_1x3_lf_shard1_replica2",
  "base_url":"http://127.0.0.1:58874;,
  "node_name":"127.0.0.1:58874_",
  "state":"down"},
"core_node3":{
  "core":"c8n_1x3_lf_shard1_replica1",
  "base_url":"http://127.0.0.1:58890;,
  "node_name":"127.0.0.1:58890_",
  "state":"active",
  "leader":"true",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
at 
__randomizedtesting.SeedInfo.seed([A04FC444BDE6C17F:281BFB9E131AAC87]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:170)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:57)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 

[JENKINS] Lucene-Solr-Tests-master - Build # 1396 - Still Unstable

2016-09-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1396/

2 tests failed.
FAILED:  org.apache.solr.cloud.TestSolrCloudWithKerberosAlt.testBasics

Error Message:
Address already in use

Stack Trace:
java.net.BindException: Address already in use
at 
__randomizedtesting.SeedInfo.seed([BAE64CA36CFAE2E:3676CAE60E21F05E]:0)
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at 
org.apache.mina.transport.socket.nio.NioSocketAcceptor.open(NioSocketAcceptor.java:252)
at 
org.apache.mina.transport.socket.nio.NioSocketAcceptor.open(NioSocketAcceptor.java:49)
at 
org.apache.mina.core.polling.AbstractPollingIoAcceptor.registerHandles(AbstractPollingIoAcceptor.java:525)
at 
org.apache.mina.core.polling.AbstractPollingIoAcceptor.access$200(AbstractPollingIoAcceptor.java:67)
at 
org.apache.mina.core.polling.AbstractPollingIoAcceptor$Acceptor.run(AbstractPollingIoAcceptor.java:409)
at 
org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:65)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)


FAILED:  
org.apache.solr.update.SoftAutoCommitTest.testSoftAndHardCommitMaxTimeMixedAdds

Error Message:
soft529 wasn't fast enough

Stack Trace:
java.lang.AssertionError: soft529 wasn't fast enough
at 
__randomizedtesting.SeedInfo.seed([BAE64CA36CFAE2E:5A7A9D4A87BC9E89]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.update.SoftAutoCommitTest.testSoftAndHardCommitMaxTimeMixedAdds(SoftAutoCommitTest.java:111)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 

[jira] [Commented] (SOLR-9512) CloudSolrClient's cluster state cache can break direct updates to leaders

2016-09-19 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504502#comment-15504502
 ] 

Alan Woodward commented on SOLR-9512:
-

bq. The patch contains way more changes than you mentioned.

I ... don't think it does?  It makes the changes I described above to 
CloudSolrClient, and adds a test case.  What else is there?

bq. The following is not the way we should invalidate the cache.

How so?

> CloudSolrClient's cluster state cache can break direct updates to leaders
> -
>
> Key: SOLR-9512
> URL: https://issues.apache.org/jira/browse/SOLR-9512
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
> Attachments: SOLR-9512.patch
>
>
> This is the root cause of SOLR-9305 and (at least some of) SOLR-9390.  The 
> process goes something like this:
> Documents are added to the cluster via a CloudSolrClient, with 
> directUpdatesToLeadersOnly set to true.  CSC caches its view of the 
> DocCollection.  The leader then goes down, and is reassigned.  Next time 
> documents are added, CSC checks its cache again, and gets the old view of the 
> DocCollection.  It then tries to send the update directly to the old, now 
> down, leader, and we get ConnectionRefused.
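
A schematic sketch of the stale-leader-cache problem and the invalidate-and-retry idea under discussion (all class, method and URL names here are hypothetical, not CloudSolrClient's real API):
{code}
import java.net.ConnectException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical client: it caches the collection -> leader URL mapping, and a
// refused connection is the signal that the cached view is stale and must be
// refreshed from the cluster state before retrying.
class LeaderCacheSketch {
  private final Map<String, String> cachedLeaderUrls = new ConcurrentHashMap<>();

  void sendUpdate(String collection, String payload) throws Exception {
    String leaderUrl = cachedLeaderUrls.computeIfAbsent(collection, this::fetchLeaderFromClusterState);
    try {
      post(leaderUrl, payload);
    } catch (ConnectException e) {
      // The leader moved while our cache still pointed at the old node:
      // drop the stale entry and retry once against a freshly looked-up leader.
      cachedLeaderUrls.remove(collection);
      post(fetchLeaderFromClusterState(collection), payload);
    }
  }

  private String fetchLeaderFromClusterState(String collection) {
    return "http://127.0.0.1:8983/solr/" + collection;  // placeholder lookup
  }

  private void post(String url, String payload) throws Exception {
    // placeholder for the actual HTTP update request
  }
}
{code}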



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9535) SolrClients should have protected constructors

2016-09-19 Thread Jason Gerlowski (JIRA)
Jason Gerlowski created SOLR-9535:
-

 Summary: SolrClients should have protected constructors
 Key: SOLR-9535
 URL: https://issues.apache.org/jira/browse/SOLR-9535
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrJ
Affects Versions: 6.x
Reporter: Jason Gerlowski
Priority: Minor
 Fix For: 6.x


Recent SolrJ changes (SOLR-8097) left the {{SolrClient}} implementations with 
ctors that are not accessible to subclasses.  This achieved the purpose at the 
time, and steered consumers towards using the *Builder types.  However the 
change was overly restrictive, as this visibility prevents consumers from 
extending {{SolrClient}} in any meaningful way.

This issue involves changing the visibility of the SolrClient "kitchen sink" 
ctors to better support extension.

(See the recent discussion on SOLR-8097 for more on this topic.)
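
A tiny illustration of why constructor visibility decides whether consumers can extend a client class (class names are invented, not the real SolrJ types):
{code}
// Hypothetical base class standing in for a SolrJ client type.
class BaseClient {
  private final String baseUrl;

  // If this constructor were private, a subclass declared outside BaseClient could
  // not call super(...), so extension would be impossible; protected restores that
  // ability while still steering normal construction through a Builder.
  protected BaseClient(String baseUrl) {
    this.baseUrl = baseUrl;
  }

  String getBaseUrl() {
    return baseUrl;
  }
}

// A consumer's extension, possible only because the constructor is protected.
class RetryingClient extends BaseClient {
  RetryingClient(String baseUrl) {
    super(baseUrl);
  }
}
{code}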



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Windows (32bit/jdk1.8.0_102) - Build # 462 - Unstable!

2016-09-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/462/
Java: 32bit/jdk1.8.0_102 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication

Error Message:
[C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandler_E5431165590B9002-001\solr-instance-013\.\collection1\data\,
 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandler_E5431165590B9002-001\solr-instance-013\.\collection1\data\index.20160919100346256,
 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandler_E5431165590B9002-001\solr-instance-013\.\collection1\data\snapshot_metadata,
 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandler_E5431165590B9002-001\solr-instance-013\.\collection1\data\index.20160919100346029]
 expected:<3> but was:<4>

Stack Trace:
java.lang.AssertionError: 
[C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandler_E5431165590B9002-001\solr-instance-013\.\collection1\data\,
 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandler_E5431165590B9002-001\solr-instance-013\.\collection1\data\index.20160919100346256,
 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandler_E5431165590B9002-001\solr-instance-013\.\collection1\data\snapshot_metadata,
 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandler_E5431165590B9002-001\solr-instance-013\.\collection1\data\index.20160919100346029]
 expected:<3> but was:<4>
at 
__randomizedtesting.SeedInfo.seed([E5431165590B9002:1230FF3D9FE33FE4]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:904)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication(TestReplicationHandler.java:1336)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 

[jira] [Commented] (SOLR-9512) CloudSolrClient's cluster state cache can break direct updates to leaders

2016-09-19 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504227#comment-15504227
 ] 

Noble Paul commented on SOLR-9512:
--

The following is not the way we should invalidate the cache. 
{code}
  if (response.getServer().equals(url) == false) {
    // we didn't hit our first-preference server, which means that our cached
    // collection state is no longer valid
    invalidateCollectionState(collection);
  }
{code}


> CloudSolrClient's cluster state cache can break direct updates to leaders
> -
>
> Key: SOLR-9512
> URL: https://issues.apache.org/jira/browse/SOLR-9512
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
> Attachments: SOLR-9512.patch
>
>
> This is the root cause of SOLR-9305 and (at least some of) SOLR-9390.  The 
> process goes something like this:
> Documents are added to the cluster via a CloudSolrClient, with 
> directUpdatesToLeadersOnly set to true.  CSC caches its view of the 
> DocCollection.  The leader then goes down, and is reassigned.  Next time 
> documents are added, CSC checks its cache again, and gets the old view of the 
> DocCollection.  It then tries to send the update directly to the old, now 
> down, leader, and we get ConnectionRefused.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9512) CloudSolrClient's cluster state cache can break direct updates to leaders

2016-09-19 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504167#comment-15504167
 ] 

Noble Paul edited comment on SOLR-9512 at 9/19/16 6:02 PM:
---

[~romseygeek] The patch contains way more changes than you mentioned. You 
committed it without any review from the people who are collaborating with you. 

If there are other people collaborating on a ticket, the general protocol is 
that you submit a patch with the changes explained and give some review time 
before committing stuff.

You submitted a patch and 3 hours later you committed it.


was (Author: noble.paul):
[~romseygeek] The patch contains way more changes than you mentioned. You 
committed it without any review from the people who are collaborating with you. 

If there are other people collaborating on a ticket, the general protocol is 
that you submit a patch with the changes explained and give some review time 
before committing stuff.

> CloudSolrClient's cluster state cache can break direct updates to leaders
> -
>
> Key: SOLR-9512
> URL: https://issues.apache.org/jira/browse/SOLR-9512
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
> Attachments: SOLR-9512.patch
>
>
> This is the root cause of SOLR-9305 and (at least some of) SOLR-9390.  The 
> process goes something like this:
> Documents are added to the cluster via a CloudSolrClient, with 
> directUpdatesToLeadersOnly set to true.  CSC caches its view of the 
> DocCollection.  The leader then goes down, and is reassigned.  Next time 
> documents are added, CSC checks its cache again, and gets the old view of the 
> DocCollection.  It then tries to send the update directly to the old, now 
> down, leader, and we get ConnectionRefused.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9512) CloudSolrClient's cluster state cache can break direct updates to leaders

2016-09-19 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504167#comment-15504167
 ] 

Noble Paul commented on SOLR-9512:
--

[~romseygeek] The patch contains way more changes than you mentioned. You 
committed it without any review from the people who are collaborating with you. 

If there are other people collaborating on a ticket, the general protocol is 
that you submit a patch with the changes explained and give some review time 
before committing stuff.

> CloudSolrClient's cluster state cache can break direct updates to leaders
> -
>
> Key: SOLR-9512
> URL: https://issues.apache.org/jira/browse/SOLR-9512
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
> Attachments: SOLR-9512.patch
>
>
> This is the root cause of SOLR-9305 and (at least some of) SOLR-9390.  The 
> process goes something like this:
> Documents are added to the cluster via a CloudSolrClient, with 
> directUpdatesToLeadersOnly set to true.  CSC caches its view of the 
> DocCollection.  The leader then goes down, and is reassigned.  Next time 
> documents are added, CSC checks its cache again, and gets the old view of the 
> DocCollection.  It then tries to send the update directly to the old, now 
> down, leader, and we get ConnectionRefused.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.2-Linux (64bit/jdk1.8.0_102) - Build # 41 - Failure!

2016-09-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.2-Linux/41/
Java: 64bit/jdk1.8.0_102 -XX:-UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 12643 lines...]
   [junit4] JVM J1: stdout was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-6.2-Linux/solr/build/solr-core/test/temp/junit4-J1-20160919_170201_724.sysout
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] java.lang.OutOfMemoryError: Java heap space
   [junit4] Dumping heap to 
/home/jenkins/workspace/Lucene-Solr-6.2-Linux/heapdumps/java_pid16463.hprof ...
   [junit4] Heap dump file created [412479214 bytes in 1.077 secs]
   [junit4] <<< JVM J1: EOF 

[...truncated 11041 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-6.2-Linux/build.xml:763: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-6.2-Linux/build.xml:715: Some of the tests 
produced a heap dump, but did not fail. Maybe a suppressed OutOfMemoryError? 
Dumps created:
* java_pid16463.hprof

Total time: 63 minutes 4 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-9526) data_driven configs defaults to "strings" for unmapped fields, makes most fields containing "textual content" unsearchable, breaks tutorial examples

2016-09-19 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504104#comment-15504104
 ] 

Steve Rowe commented on SOLR-9526:
--

+1 to hoss's suggested changes

> data_driven configs defaults to "strings" for unmapped fields, makes most 
> fields containing "textual content" unsearchable, breaks tutorial examples
> 
>
> Key: SOLR-9526
> URL: https://issues.apache.org/jira/browse/SOLR-9526
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>
> James Pritchett pointed out on the solr-user list that this sample query from 
> the quick start tutorial matched no docs (even though the tutorial text says 
> "The above request returns only one document")...
> http://localhost:8983/solr/gettingstarted/select?wt=json=true=name:foundation
> The root problem seems to be that the add-unknown-fields-to-the-schema chain 
> in data_driven_schema_configs is configured with...
> {code}
> strings
> {code}
> ...and the "strings" type uses StrField and is not tokenized.
> 
> Original thread: 
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201609.mbox/%3ccac-n2zrpsspfnk43agecspchc5b-0ff25xlfnzogyuvyg2d...@mail.gmail.com%3E
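
To make the "not tokenized" point concrete, a minimal stand-alone Lucene sketch (field names and the sample title are invented; APIs are the Lucene 6.x ones) showing that an untokenized string-style field is not hit by a single-word term query while a tokenized text field is:
{code}
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.store.RAMDirectory;

public class StringVsTextSearch {
  public static void main(String[] args) throws Exception {
    RAMDirectory dir = new RAMDirectory();
    try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
      Document doc = new Document();
      // "strings"-like field: the whole value is indexed as one untokenized term.
      doc.add(new StringField("name_str", "A Gentle Introduction to the Foundation", Field.Store.YES));
      // "text_general"-like field: StandardAnalyzer tokenizes and lowercases the value.
      doc.add(new TextField("name_txt", "A Gentle Introduction to the Foundation", Field.Store.YES));
      writer.addDocument(doc);
    }
    try (DirectoryReader reader = DirectoryReader.open(dir)) {
      IndexSearcher searcher = new IndexSearcher(reader);
      System.out.println(searcher.count(new TermQuery(new Term("name_str", "foundation"))));  // 0
      System.out.println(searcher.count(new TermQuery(new Term("name_txt", "foundation"))));  // 1
    }
  }
}
{code}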



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6748) Additional resources to the site to help new Solr users ramp up quicker

2016-09-19 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved SOLR-6748.
--
Resolution: Won't Fix

> Additional resources to the site to help new Solr users ramp up quicker
> ---
>
> Key: SOLR-6748
> URL: https://issues.apache.org/jira/browse/SOLR-6748
> Project: Solr
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Xavier Morera
>
> I would like to request the addition of an online training I created for 
> Pluralsight called *Getting Started with Enterprise Search using Apache Solr* 
> to the following page: http://lucene.apache.org/solr/resources.html
> It is not exactly a video; it is an online training, so I am not sure whether 
> it should be added beneath the videos or listed separately.
> It aims to take a developer with absolutely no knowledge of Solr or even 
> search engines and enable them to create a basic POC-style application with 
> Solr in the backend. A few thousand people have watched it and I have 
> received very positive feedback on how it has helped people get started very 
> quickly and reduced the entry barrier.  
> Is this possible? The url of the training is:
> http://www.pluralsight.com/courses/table-of-contents/enterprise-search-using-apache-solr
> I believe it will help a lot of people get started quicker.
> Here is the full story of how this training came to be:
> A while back I was a total Solr rookie, but I knew I needed it for one of my 
> projects. I had a bit of a hard time getting started, but I managed after a 
> lot of hard work and after working with other very capable Solr developers.
> I then built a system which is doing pretty well now. But I decided that I 
> wanted to create a resource that would help people with absolutely no 
> knowledge of Solr or search engines get started as quickly as possible. And 
> given that I am already a trainer/author at Pluralsight, focused mainly on 
> Agile development, I thought this was the right place to start helping others.
> And so I did. I have received positive feedback, and given my background as a 
> trainer I have also delivered it as "Solr for the Uninitiated", again for 
> people with no previous knowledge of Solr. 
> It has been received well enough that I have been hired to turn it into a 
> book, which I am writing at the moment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+136) - Build # 17849 - Failure!

2016-09-19 Thread Rory O'Donnell

Hi Uwe,

Mandy replied see below.

Rgds, Rory


The -release option was removed in jdk-9+135 [1]: 
https://bugs.openjdk.java.net/browse/JDK-8160851 Mandy [1] 
http://hg.openjdk.java.net/jdk9/dev/langtools/rev/047d4d42b466




On 19/09/2016 15:15, Uwe Schindler wrote:

I received this:


Hi Uwe,
Most options of javac now use the GNU style,
so it's --release instead of -release.

cheers,
Rémi

It looks like options that were added recently to Java 9's command line tools now use the more 
standard GNU style options (double dash). I will update build.xml later (reopen the issue about 
the "-release" switch), once I have tested everything. It looks like this only affects new 
options; I just hope that other command line options like "-classpath" did not change in 
Java 9 (or that they have some backwards-compatibility layer implemented)!

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


-Original Message-
From: Uwe Schindler [mailto:u...@thetaphi.de]
Sent: Monday, September 19, 2016 3:17 PM
To: dev@lucene.apache.org
Cc: rory.odonn...@oracle.com
Subject: RE: [JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+136) -
Build # 17849 - Failure!

Hi,

I contacted the compiler group and Rory O'Donnell about this. Looks strange,
maybe option parsing broke in build 136.

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


-Original Message-
From: Policeman Jenkins Server [mailto:jenk...@thetaphi.de]
Sent: Monday, September 19, 2016 11:31 AM
To: dev@lucene.apache.org
Subject: [JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+136) - Build

#

17849 - Failure!
Importance: Low

Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17849/
Java: 64bit/jdk-9-ea+136 -XX:-UseCompressedOops -XX:+UseG1GC

No tests ran.

Build Log:
[...truncated 81 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:763: The
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:707: The
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:59: The
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build.xml:50:
The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/common-
build.xml:501: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/common-
build.xml:1955: Compile failed; see the compiler error output for details.

Total time: 1 second
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files were
found. Configuration error?
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



--
Rgds,
Rory O'Donnell
Quality Engineering Manager
Oracle EMEA, Dublin, Ireland



[jira] [Commented] (SOLR-9526) data_driven configs defaults to "strings" for unmapped fields, makes most fields containing "textual content" unsearchable, breaks tutorial examples

2016-09-19 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503983#comment-15503983
 ] 

Hoss Man commented on SOLR-9526:


bq. Possibly to make facets work out of the box? Just guessing.

I'm probably the biggest proponent of "featuring" & promoting faceting in Solr, 
and even I think it's absurd for our recommended configs to promote faceting at 
the expense of basic (tokenized) field search.

Here's my off-the-cuff, untested, straw man suggestion, which seems like it 
would be 100x better than what we have now...

* change {{defaultFieldType}} back to {{text_general}}
* add this to the processor chain, *after* 
AddSchemaFieldsUpdateProcessorFactory...{code}

 
  solr.TextField
  
   
  
 
 
  ^(.*)$
  $1_str
 

{code}
* Add {{}} to the managed-schema
* ?? Add {{stored="true"}} to {{text_general}} ?? 
** All the existing fields/dynamicFields using this type set it explicitly to 
either true/false, but I think if we want to use it as the {{defaultFieldType}} 
we're going to want to set it to {{true}} on the fieldType itself, so any fields 
added by AddSchemaFieldsUpdateProcessorFactory have their values stored (so end 
users can see them in search results).

This should fix the most egregious problems, like what we see with the broken 
tutorial (folks add a simple "text" field containing a "name" or a "title" and 
can't search on "words" in that text field), while still supporting 
sorting/faceting on short "string" fields by using the {{_str}} variant.

I'm assuming this wouldn't break whatever "auto pick facet" stuff is in 
velocity, since I'm pretty sure it works by looking for all the 
{{solr.StrField}} fields, but if it does then that should be fixed as a 
distinct issue -- we shouldn't be breaking something as basic as "I want to 
search for a word in a field" just because it makes the velocity UI harder to 
use.


> data_driven configs defaults to "strings" for unmapped fields, makes most 
> fields containing "textual content" unsearchable, breaks tutorial examples
> 
>
> Key: SOLR-9526
> URL: https://issues.apache.org/jira/browse/SOLR-9526
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>
> James Pritchett pointed out on the solr-user list that this sample query from 
> the quick start tutorial matched no docs (even though the tutorial text says 
> "The above request returns only one document")...
> http://localhost:8983/solr/gettingstarted/select?wt=json&indent=true&q=name:foundation
> The root problem seems to be that the add-unknown-fields-to-the-schema chain 
> in data_driven_schema_configs is configured with...
> {code}
> <str name="defaultFieldType">strings</str>
> {code}
> ...and the "strings" type uses StrField and is not tokenized.
> 
> Original thread: 
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201609.mbox/%3ccac-n2zrpsspfnk43agecspchc5b-0ff25xlfnzogyuvyg2d...@mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9526) data_driven configs defaults to "strings" for unmapped fields, makes most fields containing "textual content" unsearchable, breaks tutorial examples

2016-09-19 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503942#comment-15503942
 ] 

Steve Rowe commented on SOLR-9526:
--

I'm going to work on updating the quick start tutorial - it should be kept 
up-to-date independently of any changes we may decide on for the data driven 
configset.

> data_driven configs defaults to "strings" for unmapped fields, makes most 
> fields containing "textual content" unsearchable, breaks tutorial examples
> 
>
> Key: SOLR-9526
> URL: https://issues.apache.org/jira/browse/SOLR-9526
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>
> James Pritchett pointed out on the solr-user list that this sample query from 
> the quick start tutorial matched no docs (even though the tutorial text says 
> "The above request returns only one document")...
> http://localhost:8983/solr/gettingstarted/select?wt=json&indent=true&q=name:foundation
> The root problem seems to be that the add-unknown-fields-to-the-schema chain 
> in data_driven_schema_configs is configured with...
> {code}
> <str name="defaultFieldType">strings</str>
> {code}
> ...and the "strings" type uses StrField and is not tokenized.
> 
> Original thread: 
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201609.mbox/%3ccac-n2zrpsspfnk43agecspchc5b-0ff25xlfnzogyuvyg2d...@mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.2-Windows (64bit/jdk1.8.0_102) - Build # 11 - Still unstable!

2016-09-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.2-Windows/11/
Java: 64bit/jdk1.8.0_102 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.schema.TestManagedSchemaAPI

Error Message:
ObjectTracker found 10 object(s) that were not released!!! [TransactionLog, 
MockDirectoryWrapper, MDCAwareThreadPoolExecutor, MockDirectoryWrapper, 
MockDirectoryWrapper, MDCAwareThreadPoolExecutor, MockDirectoryWrapper, 
MockDirectoryWrapper, MockDirectoryWrapper, TransactionLog]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 10 object(s) that were not 
released!!! [TransactionLog, MockDirectoryWrapper, MDCAwareThreadPoolExecutor, 
MockDirectoryWrapper, MockDirectoryWrapper, MDCAwareThreadPoolExecutor, 
MockDirectoryWrapper, MockDirectoryWrapper, MockDirectoryWrapper, 
TransactionLog]
at __randomizedtesting.SeedInfo.seed([48109BEB7F21738A]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:258)
at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11488 lines...]
   [junit4] Suite: org.apache.solr.schema.TestManagedSchemaAPI
   [junit4]   2> Creating dataDir: 
C:\Users\jenkins\workspace\Lucene-Solr-6.2-Windows\solr\build\solr-core\test\J1\temp\solr.schema.TestManagedSchemaAPI_48109BEB7F21738A-001\init-core-data-001
   [junit4]   2> 1466466 INFO  
(SUITE-TestManagedSchemaAPI-seed#[48109BEB7F21738A]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: 
@org.apache.solr.util.RandomizeSSL(reason=, value=NaN, ssl=NaN, clientAuth=NaN)
   [junit4]   2> 1466468 INFO  
(SUITE-TestManagedSchemaAPI-seed#[48109BEB7F21738A]-worker) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 1466468 INFO  (Thread-2779) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 1466468 INFO  (Thread-2779) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 1466568 INFO  
(SUITE-TestManagedSchemaAPI-seed#[48109BEB7F21738A]-worker) [] 
o.a.s.c.ZkTestServer start zk server on port:63403
   [junit4]   2> 1466568 INFO  
(SUITE-TestManagedSchemaAPI-seed#[48109BEB7F21738A]-worker) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 1466569 INFO  
(SUITE-TestManagedSchemaAPI-seed#[48109BEB7F21738A]-worker) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 1466576 INFO  (zkCallback-1783-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@6de7d17a 

[jira] [Commented] (SOLR-9532) BoolField always False when using shards

2016-09-19 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503737#comment-15503737
 ] 

Erick Erickson commented on SOLR-9532:
--

Duplicate of SOLR-9490?

> BoolField always False when using shards
> 
>
> Key: SOLR-9532
> URL: https://issues.apache.org/jira/browse/SOLR-9532
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.2
> Environment: Ubuntu
>Reporter: Gidon Junge
>Priority: Blocker
>
> After upgrading from Solr 5.5 to 6.2 I've encountered the following issue:
> If my documents contain BoolField values, they will be false no matter the 
> actual value when I use sharding.
> Solr 5.5:
> http://solr5:8983/solr/bug/select?q=*%3a*
> EQUALS the response from
> http://solr5:8983/solr/bug/select?shards=solr5%3a8983%2fsolr%2fbug&q=*%3a*
> Yet in Solr 6.2:
> http://solr6:8983/solr/bug/select?q=*%3a*
> Does NOT EQUAL the response from:
> http://solr6:8983/solr/bug/select?shards=solr6%3a8983%2fsolr%2fbug&q=*%3a*
> Schema used in both cases:
> 
> 
> 
>  sortMissingLast="true" omitNorms="true"/>
> 
> 
> omitNorms="true" positionIncrementGap="0"/>
>  omitNorms="true" positionIncrementGap="0"/>
>  omitNorms="true" positionIncrementGap="0"/>
>  precisionStep="0" omitNorms="true" positionIncrementGap="0"/>
>  omitNorms="true"/>
> 
> 
> 
>  stored="true" required="true" />
>  stored="false"/>
>   omitNorms="true" default="false"/>
>  omitNorms="true" default="false"/>
> omitNorms="true" default="true"/>
>  omitNorms="true" default="true"/>
> 
> id
> id
> 
>
> 
> TEST DOCUMENTS:
> [
> {
> "id": "demo1-empty"
> },
> {
> "id": "demo2-alltrue",
> "boolean1": "true",
> "boolean2": "true",
> "boolean3": "true",
> "boolean4": "true"
> },
> {
> "id": "demo3-alldefault",
> "boolean1": "false",
> "boolean2": "false",
> "boolean3": "true",
> "boolean4": "true"
> },
> {
> "id": "demo4-allopposite",
> "boolean1": "true",
> "boolean2": "true",
> "boolean3": "false",
> "boolean4": "false"
> }
> ]
> Correct Response received in Solr 5.5 and without shards in 6.2:
>  
> 
> demo1-empty
> 1545889354915774464
> false
> true
> false
> true
> 
> 
> demo2-alltrue
> true
> true
> true
> true
> 1545889381586305024
> 
> 
> demo3-alldefault
> false
> false
> true
> true
> 1545889419038294016
> 
> 
> demo4-allopposite
> true
> true
> false
> false
> 1545889444919246848
> 
> 
> Incorrect response received with shards parameter in Solr 6.2
> 
> 
> demo1-empty
> 1545889354915774464
> false
> false
> false
> false
> 
> 
> demo2-alltrue
> false
> false
> false
> false
> 1545889381586305024
> 
> 
> demo3-alldefault
> false
> false
> false
> false
> 1545889419038294016
> 
> 
> demo4-allopposite
> false
> false
> false
> false
> 1545889444919246848
> 
> 
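
A minimal SolrJ sketch of the comparison described in the report above; the host, 
core, and field names are taken from the report, everything else is a placeholder:

{code}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;

public class ShardedBoolCheck {

    // 'client' would point at http://solr6:8983/solr/bug from the report.
    static void compare(SolrClient client) throws Exception {
        SolrQuery q = new SolrQuery("*:*");
        QueryResponse direct = client.query(q);    // no shards parameter

        q.set("shards", "solr6:8983/solr/bug");
        QueryResponse sharded = client.query(q);   // distributed request

        // Per the report, the boolean fields come back as false in the sharded
        // response even where the direct response returns true.
        for (int i = 0; i < direct.getResults().size(); i++) {
            SolrDocument a = direct.getResults().get(i);
            SolrDocument b = sharded.getResults().get(i);
            System.out.println(a.getFieldValue("id") + ": direct="
                + a.getFieldValue("boolean1") + " sharded=" + b.getFieldValue("boolean1"));
        }
    }
}
{code}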



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9534) Support quiet/verbose bin/solr options for changing log level

2016-09-19 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503729#comment-15503729
 ] 

Erick Erickson commented on SOLR-9534:
--

+1

> Support quiet/verbose bin/solr options for changing log level
> -
>
> Key: SOLR-9534
> URL: https://issues.apache.org/jira/browse/SOLR-9534
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 6.2
>Reporter: Jan Høydahl
>
> Spinoff from SOLR-6677
> Let's make it much easier to "turn on debug" by supporting a {{bin/solr start 
> -V}} verbose option, and likewise a {{bin/solr start -q}} for quiet operation.
> These would simply be convenience options for changing the RootLogger from 
> level INFO to DEBUG or WARN respectively. This can be done programmatically 
> in log4j at startup. 
> We may need to add some more package-specific defaults in 
> log4j.properties to get the right mix of logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8186) Solr start scripts -- only log to console when running in foreground

2016-09-19 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503720#comment-15503720
 ] 

Erick Erickson commented on SOLR-8186:
--

Agreed, having the timestamp as ticks in the console rather than a 
human-readable format is disconcerting; +1 to change it to the same format as 
the rest of the log files.

bq. Do we even need to write to logs/solr.log?

IMO we absolutely do. There is so much output that having the solr.log file is 
important for answering "what did I see go by?" when chasing down problems.

> Solr start scripts -- only log to console when running in foreground
> 
>
> Key: SOLR-8186
> URL: https://issues.apache.org/jira/browse/SOLR-8186
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 5.3.1
>Reporter: Shawn Heisey
>
> Currently the log4j.properties file logs to the console, and the start 
> scripts capture console output to a logfile that never rotates.  This can 
> fill up the disk, and when the logfile is removed, the user might be alarmed 
> by the way their memory statistics behave -- the "cached" memory might have a 
> sudden and very large drop, making it appear to a novice that the huge 
> logfile was hogging their memory.
> The logfile created by log4j is rotated when it gets big enough, so that 
> logfile is unlikely to fill up the disk.
> I propose that we copy the current log4j.properties file to something like 
> log4j-foreground.properties, remove CONSOLE logging in the log4j.properties 
> file, and have the start script use the alternate config file when running in 
> the foreground.  This way users will see the logging output when running in 
> the foreground, but it will be absent when running normally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+134) - Build # 1752 - Unstable!

2016-09-19 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1752/
Java: 64bit/jdk-9-ea+134 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([8B1BD0BB1A197787:7C683EE3DCF1D861]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication(TestReplicationHandler.java:1331)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:535)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 

[jira] [Commented] (SOLR-9512) CloudSolrClient's cluster state cache can break direct updates to leaders

2016-09-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503651#comment-15503651
 ] 

ASF subversion and git services commented on SOLR-9512:
---

Commit f96017d9e10c665e7ab6b9161f2af08efc491946 in lucene-solr's branch 
refs/heads/branch_6x from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f96017d ]

SOLR-9512: CloudSolrClient tries other replicas if a cached leader is down


> CloudSolrClient's cluster state cache can break direct updates to leaders
> -
>
> Key: SOLR-9512
> URL: https://issues.apache.org/jira/browse/SOLR-9512
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
> Attachments: SOLR-9512.patch
>
>
> This is the root cause of SOLR-9305 and (at least some of) SOLR-9390.  The 
> process goes something like this:
> Documents are added to the cluster via a CloudSolrClient, with 
> directUpdatesToLeadersOnly set to true.  CSC caches its view of the 
> DocCollection.  The leader then goes down, and is reassigned.  Next time 
> documents are added, CSC checks its cache again, and gets the old view of the 
> DocCollection.  It then tries to send the update directly to the old, now 
> down, leader, and we get ConnectionRefused.
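
For reference, a minimal SolrJ sketch of the setup described above, assuming the 6.x 
{{CloudSolrClient.Builder}} with its {{withZkHost}} and 
{{sendDirectUpdatesToShardLeadersOnly}} methods; the ZK address and collection name 
are placeholders:

{code}
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class DirectUpdateExample {
    public static void main(String[] args) throws Exception {
        try (CloudSolrClient client = new CloudSolrClient.Builder()
                .withZkHost("localhost:9983")
                .sendDirectUpdatesToShardLeadersOnly()  // the directUpdatesToLeadersOnly mode
                .build()) {
            client.setDefaultCollection("collection1");
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "1");
            // With a stale cached leader, this is where ConnectionRefused surfaces.
            client.add(doc);
            client.commit();
        }
    }
}
{code}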



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9512) CloudSolrClient's cluster state cache can break direct updates to leaders

2016-09-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503653#comment-15503653
 ] 

ASF subversion and git services commented on SOLR-9512:
---

Commit 3d130097b7768a8d753476ffe26b83db070c8e20 in lucene-solr's branch 
refs/heads/master from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3d13009 ]

SOLR-9512: CloudSolrClient tries other replicas if a cached leader is down


> CloudSolrClient's cluster state cache can break direct updates to leaders
> -
>
> Key: SOLR-9512
> URL: https://issues.apache.org/jira/browse/SOLR-9512
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
> Attachments: SOLR-9512.patch
>
>
> This is the root cause of SOLR-9305 and (at least some of) SOLR-9390.  The 
> process goes something like this:
> Documents are added to the cluster via a CloudSolrClient, with 
> directUpdatesToLeadersOnly set to true.  CSC caches its view of the 
> DocCollection.  The leader then goes down, and is reassigned.  Next time 
> documents are added, CSC checks its cache again, and gets the old view of the 
> DocCollection.  It then tries to send the update directly to the old, now 
> down, leader, and we get ConnectionRefused.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release Lucene/Solr 6.2.1 RC1

2016-09-19 Thread Shalin Shekhar Mangar
This vote has passed. I'll start the publishing process.

On Thu, Sep 15, 2016 at 7:37 PM, Shalin Shekhar Mangar 
wrote:

> Please vote for the first release candidate for Lucene/Solr 6.2.1
>
> The artifacts can be downloaded from:
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.2.1-RC1-
> rev43ab70147eb494324a1410f7a9f16a896a59bc6f/
>
> You can run the smoke tester directly with this command:
> python3 -u dev-tools/scripts/smokeTestRelease.py https://dist.apache.
> org/repos/dist/dev/lucene/lucene-solr-6.2.1-RC1-
> rev43ab70147eb494324a1410f7a9f16a896a59bc6f/
>
> Smoke tester passed for me:
> SUCCESS! [0:29:53.545665]
>
> Here's my +1 to release.
>
> --
> Regards,
> Shalin Shekhar Mangar.
>



-- 
Regards,
Shalin Shekhar Mangar.


RE: [JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+136) - Build # 17849 - Failure!

2016-09-19 Thread Uwe Schindler
I received this:

> Hi Uwe,
> Most options of javac now use the GNU style,
> so it's --release instead of -release.
> 
> cheers,
> Rémi

It looks like options that were added recently to Java 9's command line tools 
now use the more standard GNU-style options (double dash). I will update 
build.xml later (and reopen the issue about the "-release" switch), once I have 
tested everything. It looks like this only affects new options; I just hope that other 
command line options like "-classpath" did not change in Java 9 (or that they have 
some backwards-compatibility layer implemented)!

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de

> -Original Message-
> From: Uwe Schindler [mailto:u...@thetaphi.de]
> Sent: Monday, September 19, 2016 3:17 PM
> To: dev@lucene.apache.org
> Cc: rory.odonn...@oracle.com
> Subject: RE: [JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+136) -
> Build # 17849 - Failure!
> 
> Hi,
> 
> I contacted the compiler group and Rory O'Donnell about this. Looks strange,
> maybe option parsing broke in build 136.
> 
> Uwe
> 
> -
> Uwe Schindler
> H.-H.-Meier-Allee 63, D-28213 Bremen
> http://www.thetaphi.de
> eMail: u...@thetaphi.de
> 
> > -Original Message-
> > From: Policeman Jenkins Server [mailto:jenk...@thetaphi.de]
> > Sent: Monday, September 19, 2016 11:31 AM
> > To: dev@lucene.apache.org
> > Subject: [JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+136) - Build
> #
> > 17849 - Failure!
> > Importance: Low
> >
> > Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17849/
> > Java: 64bit/jdk-9-ea+136 -XX:-UseCompressedOops -XX:+UseG1GC
> >
> > No tests ran.
> >
> > Build Log:
> > [...truncated 81 lines...]
> > BUILD FAILED
> > /home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:763: The
> > following error occurred while executing this line:
> > /home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:707: The
> > following error occurred while executing this line:
> > /home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:59: The
> > following error occurred while executing this line:
> > /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build.xml:50:
> > The following error occurred while executing this line:
> > /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/common-
> > build.xml:501: The following error occurred while executing this line:
> > /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/common-
> > build.xml:1955: Compile failed; see the compiler error output for details.
> >
> > Total time: 1 second
> > Build step 'Invoke Ant' marked build as failure
> > Archiving artifacts
> > [WARNINGS] Skipping publisher since build result is FAILURE
> > Recording test results
> > ERROR: Step ‘Publish JUnit test result report’ failed: No test report files
> were
> > found. Configuration error?
> > Email was triggered for: Failure - Any
> > Sending email for trigger: Failure - Any
> >
> 
> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6677) reduce logging during Solr startup

2016-09-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-6677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503604#comment-15503604
 ] 

Jan Høydahl commented on SOLR-6677:
---

I spun off the -V and -q ideas into SOLR-9534

> reduce logging during Solr startup
> --
>
> Key: SOLR-6677
> URL: https://issues.apache.org/jira/browse/SOLR-6677
> Project: Solr
>  Issue Type: Bug
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> most of what is printed is neither helpful nor useful. It's just noise



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9534) Support quiet/verbose bin/solr options for changing log level

2016-09-19 Thread JIRA
Jan Høydahl created SOLR-9534:
-

 Summary: Support quiet/verbose bin/solr options for changing log 
level
 Key: SOLR-9534
 URL: https://issues.apache.org/jira/browse/SOLR-9534
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: scripts and tools
Affects Versions: 6.2
Reporter: Jan Høydahl


Spinoff from SOLR-6677

Let's make it much easier to "turn on debug" by supporting a {{bin/solr start 
-V}} verbose option, and likewise a {{bin/solr start -q}} for quiet operation.

These would simply be convenience options for changing the RootLogger from 
level INFO to DEBUG or WARN respectively. This can be done programmatically in 
log4j at startup. 

We may need to add some more package-specific defaults in log4j.properties 
to get the right mix of logs.
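
A minimal sketch of the programmatic change described above, assuming plain log4j 1.2 
APIs; how bin/solr would actually wire the -q/-V flags through to startup is left open:

{code}
import org.apache.log4j.Level;
import org.apache.log4j.LogManager;

public class StartupLogLevel {

    // Convenience mapping of the proposed flags onto the root logger level.
    static void applyVerbosity(boolean quiet, boolean verbose) {
        if (quiet) {
            LogManager.getRootLogger().setLevel(Level.WARN);
        } else if (verbose) {
            LogManager.getRootLogger().setLevel(Level.DEBUG);
        }
        // Package-specific overrides from log4j.properties still apply on top of this.
    }
}
{code}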



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-4305) XSS vulnerability in Solr /admin/analysis.jsp

2016-09-19 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-4305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl closed SOLR-4305.
-
Resolution: Won't Fix

Closing as won't fix as we do not have JSPs anymore :)

> XSS vulnerability in Solr /admin/analysis.jsp
> -
>
> Key: SOLR-4305
> URL: https://issues.apache.org/jira/browse/SOLR-4305
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 3.6
> Environment: Solaris
>Reporter: Rob Brooks
>  Labels: security
>
> This issue was found when running solr 3.6 in solaris, in a multicore setup. 
> Each core had a cross site scripting vulnerability found at 
> /admin/analysis.jsp while testing using IBM Rational AppScan 8.5.0.1.
> Here are the details of the scan result as given by IBM Rational AppScan:
> [1 of 1] Cross-Site Scripting
> Severity: High
> Test Type: Application
> Vulnerable URL: https:///solr//admin/analysis.jsp (Parameter: 
> name)
> CVE ID(s): N/A
> CWE ID(s): 79 (parent of 83)
> Remediation Tasks: Review possible solutions for hazardous character injection
> Variant 1 of 6 [ID=19389]
> The following changes were applied to the original request:
> • Set parameter 'name's value to '" onMouseOver=alert(39846)//'
> Request/Response:
> 12/10/2012 3:33:04 PM 16/187
> POST /solr//admin/analysis.jsp HTTP/1.1
> Cookie: JSESSIONID=0D77846A894B8BB086394C396F19D0E9
> Content-Length: 96
> Accept: */*
> Accept-Language: en-us
> User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Win64; x64;
> Trident/4.0; .NET CLR 2.0.50727; SLCC2; .NET CLR 3.5.30729; .NET CLR 
> 3.0.30729;
> Media Center PC 6.0; Tablet PC 2.0)
> Host: :8443
> Content-Type: application/x-www-form-urlencoded
> Referer: https:///solr//admin/analysis.jsp?highlight=on
> nt=type=" onMouseOver=alert
> (39846)//=on=on=1234=on=1234
> HTTP/1.1 200 OK
> Content-Length: 1852
> Server: Apache-Coyote/1.1
> Content-Type: text/html;charset=utf-8
> Date: Mon, 10 Dec 2012 15:54:38 GMT
> 
> 
> 
> var host_name=""
> 
> 
> 
> 
> 
> Solr admin page
> 
> 
>  src="solr_small.png" alt="Solr">
> Solr Admin (Cares)
> 
> 
> cwd=/export/home/kh SolrHome=/solr//
> 
> 12/10/2012 3:33:04 PM 17/187
> HTTP caching is ON
> 
> Field Analysis
> 
> 
> 
> 
> Field
> 
> name
> type
> 
> 
> 
>  onMouseOver=alert(39846)//">
> 
> 
> 
> 
> Field value (Index)
> 
> verbose output
>  checked="true" >
> 
> highlight matches
>  checked="true" >
> 
> 
> 1234
> 
> 
> 
> 
> Field value (Query)
> 
> verbose output
>  checked="true" >
> 
> 
> 1234
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Unknown Field Type: " onMouseOver=alert(39846)//
> 
> 
> 12/10/2012 3:33:04 PM 18/187
> Validation In Response:
> • option>
> type
> 
> 
> 
>  (39846)//">
> 
> 
> 
> 
> Field value (Index)
> 
> verbose output
>  Reasoning:
> The test successfully embedded a script in the response, which will be 
> executed once the user
> activates the OnMouseOver function (i.e., hovers with the mouse cursor over 
> the vulnerable
> control). This means that the application is vulnerable to Cross-Site 
> Scripting attacks.
> CWE ID:
> 83 (child of 79)
> Vulnerable URL: https:///solr//admin/threaddump.jsp
> Total of 1 security issues in this URL



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4787) Join Contrib

2016-09-19 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503587#comment-15503587
 ] 

Mikhail Khludnev commented on SOLR-4787:


Sure. Now we have a wunderwaffe: *filter()*,
i.e.:
{code}
...&fq=-{!join from=id fromIndex=hdq to=hdquotes 
v=$hqquery}&hqquery=qt_release:1 filter(qt_cnid:4 AND qt_disabled:0)...
{code}

Beware that a space in the subordinate clause right after \{!...} sometimes might not be 
recognized, hence the {{v=$hqquery}} trick.
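
For completeness, a small SolrJ sketch of passing the subordinate clause by reference, 
assuming the standard {{SolrQuery}} API; the field, core, and parameter names follow 
the example above:

{code}
import org.apache.solr.client.solrj.SolrQuery;

public class JoinFilterExample {
    static SolrQuery build() {
        SolrQuery q = new SolrQuery("*:*");
        // Negated cross-core join; the subordinate clause is passed by reference
        // via v=$hqquery to avoid the whitespace-parsing pitfall mentioned above.
        q.addFilterQuery("-{!join from=id fromIndex=hdq to=hdquotes v=$hqquery}");
        q.set("hqquery", "qt_release:1 filter(qt_cnid:4 AND qt_disabled:0)");
        return q;
    }
}
{code}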

> Join Contrib
> 
>
> Key: SOLR-4787
> URL: https://issues.apache.org/jira/browse/SOLR-4787
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Affects Versions: 4.2.1
>Reporter: Joel Bernstein
>Priority: Minor
> Fix For: 6.0
>
> Attachments: SOLR-4787-deadlock-fix.patch, 
> SOLR-4787-pjoin-long-keys.patch, SOLR-4787-with-testcase-fix.patch, 
> SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, 
> SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, 
> SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, 
> SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, 
> SOLR-4797-hjoin-multivaluekeys-nestedJoins.patch, 
> SOLR-4797-hjoin-multivaluekeys-trunk.patch
>
>
> This contrib provides a place where different join implementations can be 
> contributed to Solr. This contrib currently includes 3 join implementations. 
> The initial patch was generated from the Solr 4.3 tag. Because of changes in 
> the FieldCache API this patch will only build with Solr 4.2 or above.
> *HashSetJoinQParserPlugin aka hjoin*
> The hjoin provides a join implementation that filters results in one core 
> based on the results of a search in another core. This is similar in 
> functionality to the JoinQParserPlugin but the implementation differs in a 
> couple of important ways.
> The first way is that the hjoin is designed to work with int and long join 
> keys only. So, in order to use hjoin, int or long join keys must be included 
> in both the to and from core.
> The second difference is that the hjoin builds memory structures that are 
> used to quickly connect the join keys. So, the hjoin will need more memory 
> than the JoinQParserPlugin to perform the join.
> The main advantage of the hjoin is that it can scale to join millions of keys 
> between cores and provide sub-second response time. The hjoin should work 
> well with up to two million results from the fromIndex and tens of millions 
> of results from the main query.
> The hjoin supports the following features:
> 1) Both lucene query and PostFilter implementations. A *"cost"* > 99 will 
> turn on the PostFilter. The PostFilter will typically outperform the Lucene 
> query when the main query results have been narrowed down.
> 2) With the lucene query implementation there is an option to build the 
> filter with threads. This can greatly improve the performance of the query if 
> the main query index is very large. The "threads" parameter turns on 
> threading. For example *threads=6* will use 6 threads to build the filter. 
> This will setup a fixed threadpool with six threads to handle all hjoin 
> requests. Once the threadpool is created the hjoin will always use it to 
> build the filter. Threading does not come into play with the PostFilter.
> 3) The *size* local parameter can be used to set the initial size of the 
> hashset used to perform the join. If this is set above the number of results 
> from the fromIndex then you can avoid hashset resizing, which improves 
> performance.
> 4) Nested filter queries. The local parameter "fq" can be used to nest a 
> filter query within the join. The nested fq will filter the results of the 
> join query. This can point to another join to support nested joins.
> 5) Full caching support for the lucene query implementation. The filterCache 
> and queryResultCache should work properly even with deep nesting of joins. 
> Only the queryResultCache comes into play with the PostFilter implementation 
> because PostFilters are not cacheable in the filterCache.
> The syntax of the hjoin is similar to the JoinQParserPlugin except that the 
> plugin is referenced by the string "hjoin" rather than "join".
> fq=\{!hjoin fromIndex=collection2 from=id_i to=id_i threads=6 
> fq=$qq\}user:customer1&qq=group:5
> The example filter query above will search the fromIndex (collection2) for 
> "user:customer1" applying the local fq parameter to filter the results. The 
> lucene filter query will be built using 6 threads. This query will generate a 
> list of values from the "from" field that will be used to filter the main 
> query. Only records from the main query, where the "to" field is present in 
> the "from" list will be included in the results.
> The solrconfig.xml in the main query core must contain the reference to the 
> hjoin.
>  

[jira] [Commented] (SOLR-7644) Add Common Daemons to bin/run for -d

2016-09-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503582#comment-15503582
 ] 

Jan Høydahl commented on SOLR-7644:
---

[~billnbell] have you done more work on this? This would perhaps be a first step 
towards a real Windows installer script? {{install_solr_service.ps1}} anyone? 
:-) It would be cool if we could mimic most of the same parameters as the 
Linux service installer, and also choose some defaults for where to install on 
Windows.

> Add Common Daemons to bin/run for -d
> 
>
> Key: SOLR-7644
> URL: https://issues.apache.org/jira/browse/SOLR-7644
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 5.2
>Reporter: Bill Bell
>
> Why don't we change the bin/run -d to have Common Daemons? This would be a 
> great enhancement to SOLR 5.x.
> Common Daemons.
> http://commons.apache.org/proper/commons-daemon/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4787) Join Contrib

2016-09-19 Thread Vadim Ivanov (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503559#comment-15503559
 ] 

Vadim Ivanov commented on SOLR-4787:


Thank you, Mikhail.
But I still have some doubts:
1. Will Solr use the filter cache when the "join" is inside filter()? It seems to me 
that a regular join is not cached. 
2. bjoin has an fq clause, so, for example, a clause like this could be written:

...&fq=-{!bjoin from=id fromIndex=hdq to=hdquotes fq=$qq}qt_release:1
&qq=(qt_cnid:4 AND qt_disabled:0)...

The regular Solr join has no fq, so could you drop a hint on how to rewrite this 
clause?



> Join Contrib
> 
>
> Key: SOLR-4787
> URL: https://issues.apache.org/jira/browse/SOLR-4787
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Affects Versions: 4.2.1
>Reporter: Joel Bernstein
>Priority: Minor
> Fix For: 6.0
>
> Attachments: SOLR-4787-deadlock-fix.patch, 
> SOLR-4787-pjoin-long-keys.patch, SOLR-4787-with-testcase-fix.patch, 
> SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, 
> SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, 
> SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, 
> SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, 
> SOLR-4797-hjoin-multivaluekeys-nestedJoins.patch, 
> SOLR-4797-hjoin-multivaluekeys-trunk.patch
>
>
> This contrib provides a place where different join implementations can be 
> contributed to Solr. This contrib currently includes 3 join implementations. 
> The initial patch was generated from the Solr 4.3 tag. Because of changes in 
> the FieldCache API this patch will only build with Solr 4.2 or above.
> *HashSetJoinQParserPlugin aka hjoin*
> The hjoin provides a join implementation that filters results in one core 
> based on the results of a search in another core. This is similar in 
> functionality to the JoinQParserPlugin but the implementation differs in a 
> couple of important ways.
> The first way is that the hjoin is designed to work with int and long join 
> keys only. So, in order to use hjoin, int or long join keys must be included 
> in both the to and from core.
> The second difference is that the hjoin builds memory structures that are 
> used to quickly connect the join keys. So, the hjoin will need more memory 
> than the JoinQParserPlugin to perform the join.
> The main advantage of the hjoin is that it can scale to join millions of keys 
> between cores and provide sub-second response time. The hjoin should work 
> well with up to two million results from the fromIndex and tens of millions 
> of results from the main query.
> The hjoin supports the following features:
> 1) Both lucene query and PostFilter implementations. A *"cost"* > 99 will 
> turn on the PostFilter. The PostFilter will typically outperform the Lucene 
> query when the main query results have been narrowed down.
> 2) With the lucene query implementation there is an option to build the 
> filter with threads. This can greatly improve the performance of the query if 
> the main query index is very large. The "threads" parameter turns on 
> threading. For example *threads=6* will use 6 threads to build the filter. 
> This will setup a fixed threadpool with six threads to handle all hjoin 
> requests. Once the threadpool is created the hjoin will always use it to 
> build the filter. Threading does not come into play with the PostFilter.
> 3) The *size* local parameter can be used to set the initial size of the 
> hashset used to perform the join. If this is set above the number of results 
> from the fromIndex then you can avoid hashset resizing, which improves 
> performance.
> 4) Nested filter queries. The local parameter "fq" can be used to nest a 
> filter query within the join. The nested fq will filter the results of the 
> join query. This can point to another join to support nested joins.
> 5) Full caching support for the lucene query implementation. The filterCache 
> and queryResultCache should work properly even with deep nesting of joins. 
> Only the queryResultCache comes into play with the PostFilter implementation 
> because PostFilters are not cacheable in the filterCache.
> The syntax of the hjoin is similar to the JoinQParserPlugin except that the 
> plugin is referenced by the string "hjoin" rather than "join".
> fq=\{!hjoin fromIndex=collection2 from=id_i to=id_i threads=6 
> fq=$qq\}user:customer1&qq=group:5
> The example filter query above will search the fromIndex (collection2) for 
> "user:customer1" applying the local fq parameter to filter the results. The 
> lucene filter query will be built using 6 threads. This query will generate a 
> list of values from the "from" field that will be used to filter the main 
> query. Only records from the main query, where the "to" field is present in 
> the "from" list will be 

[jira] [Commented] (SOLR-5176) Chocolatey package for Windows

2016-09-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-5176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503572#comment-15503572
 ] 

Jan Høydahl commented on SOLR-5176:
---

Hi, you got no initial reply on this issue. Normally we leave it to downstream 
providers to create platform-specific packages such as this. Do you have the 
skills needed to create one? I would love to have one available, ideally one that 
installs a service as well (such as SOLR-7644). Ideas?

> Chocolatey package for Windows
> --
>
> Key: SOLR-5176
> URL: https://issues.apache.org/jira/browse/SOLR-5176
> Project: Solr
>  Issue Type: Improvement
>  Components: Build
> Environment: Chocolatey (http://chocolatey.org/)
> Windows XP+
>Reporter: Andrew Pennebaker
>Priority: Minor
>
> Could we simplify the installation process for Windows users by providing a 
> Chocolatey package?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3157) custom logging

2016-09-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-3157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503543#comment-15503543
 ] 

Jan Høydahl commented on SOLR-3157:
---

This seems to have been committed already; can we close this old issue?

> custom logging
> --
>
> Key: SOLR-3157
> URL: https://issues.apache.org/jira/browse/SOLR-3157
> Project: Solr
>  Issue Type: Test
>Reporter: Yonik Seeley
> Attachments: SOLR-3157.patch, jetty_threadgroup.patch
>
>
> We need custom logging to decipher tests with multiple core containers, 
> cores, in a single JVM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8186) Solr start scripts -- only log to console when running in foreground

2016-09-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-8186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503539#comment-15503539
 ] 

Jan Høydahl commented on SOLR-8186:
---

Is this still the case?

Do we even need to write to {{logs/solr.log}} when running in foreground mode? 
If not, the {{log4j-foreground.properties}} could do CONSOLE only.

Also, why does the log format need to be different between console and file? I 
know some Windows users start Solr with NSSM in foreground mode and rely on 
NSSM to capture console logging and take care of persisting and rolling the 
logs. You would expect to find a timestamp in those logs!

{noformat}
solr.log:
2016-09-19 13:42:46.607 INFO  (main) [   ] o.e.j.u.log Logging initialized 
@361ms
2016-09-19 13:42:46.772 INFO  (main) [   ] o.e.j.s.Server jetty-9.3.8.v20160314

solr-8983-console.log:
0INFO  (main) [   ] o.e.j.u.log Logging initialized @361ms
165  INFO  (main) [   ] o.e.j.s.Server jetty-9.3.8.v20160314
{noformat}



> Solr start scripts -- only log to console when running in foreground
> 
>
> Key: SOLR-8186
> URL: https://issues.apache.org/jira/browse/SOLR-8186
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 5.3.1
>Reporter: Shawn Heisey
>
> Currently the log4j.properties file logs to the console, and the start 
> scripts capture console output to a logfile that never rotates.  This can 
> fill up the disk, and when the logfile is removed, the user might be alarmed 
> by the way their memory statistics behave -- the "cached" memory might have a 
> sudden and very large drop, making it appear to a novice that the huge 
> logfile was hogging their memory.
> The logfile created by log4j is rotated when it gets big enough, so that 
> logfile is unlikely to fill up the disk.
> I propose that we copy the current log4j.properties file to something like 
> log4j-foreground.properties, remove CONSOLE logging in the log4j.properties 
> file, and have the start script use the alternate config file when running in 
> the foreground.  This way users will see the logging output when running in 
> the foreground, but it will be absent when running normally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9533) Reload core config when a core is reloaded

2016-09-19 Thread Gethin James (JIRA)
Gethin James created SOLR-9533:
--

 Summary: Reload core config when a core is reloaded
 Key: SOLR-9533
 URL: https://issues.apache.org/jira/browse/SOLR-9533
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 6.2
Reporter: Gethin James


I am reloading a core using {{coreContainer.reload(coreName)}}. However, it 
doesn't seem to reload the configuration. I have changed solrcore.properties 
on the file system, but the change doesn't get picked up.

The coreContainer.reload method seems to call:
{code}
CoreDescriptor cd = core.getCoreDescriptor();
{code}

I can't see a way to reload CoreDescriptor, so it isn't picking up my changes.  
It simply reuses the existing CoreDescriptor.
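
A minimal sketch of the pattern being described, assuming access to the running 
{{CoreContainer}} (for example from a plugin or a test); the core name is a placeholder:

{code}
import org.apache.solr.core.CoreContainer;
import org.apache.solr.core.CoreDescriptor;
import org.apache.solr.core.SolrCore;

public class ReloadCheck {

    static void reloadAndInspect(CoreContainer coreContainer, String coreName) {
        coreContainer.reload(coreName);                   // reloads the core itself
        SolrCore core = coreContainer.getCore(coreName);  // increments the core's reference count
        try {
            CoreDescriptor cd = core.getCoreDescriptor();
            // As reported above, this is the pre-existing descriptor; edits made to
            // solrcore.properties on disk after startup are not reflected here.
            System.out.println(cd.getName());
        } finally {
            core.close();                                 // release the reference
        }
    }
}
{code}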



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9509) Fix problems in shell scripts reported by "shellcheck"

2016-09-19 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503502#comment-15503502
 ] 

Kevin Risden commented on SOLR-9509:


Another way to grab executable files:

{code}
git ls-files -s . | grep 100755 | grep -v \.java | awk '{print $4}' | sort -u > 
~/Downloads/git_executable.txt
{code}

> Fix problems in shell scripts reported by "shellcheck"
> --
>
> Key: SOLR-9509
> URL: https://issues.apache.org/jira/browse/SOLR-9509
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Reporter: Jan Høydahl
> Attachments: bin_solr_shellcheck.txt, shellcheck_solr_20160915.txt, 
> shellcheck_solr_bin_bash_20160915.txt, shellcheck_solr_bin_sh_20160915.txt, 
> shellcheck_solr_usr_bin_env_bash_20160915.txt
>
>
> Running {{shellcheck}} on our shell scripts reveal various improvements we 
> should consider.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7887) Upgrade Solr to use log4j2 -- log4j 1 now officially end of life

2016-09-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503479#comment-15503479
 ] 

Jan Høydahl commented on SOLR-7887:
---

What happened here? Did anyone file a bug with Hadoop? Will the upgrade to 
Hadoop3 in SOLR-9515 help?

> Upgrade Solr to use log4j2 -- log4j 1 now officially end of life
> 
>
> Key: SOLR-7887
> URL: https://issues.apache.org/jira/browse/SOLR-7887
> Project: Solr
>  Issue Type: Task
>Affects Versions: 5.2.1
>Reporter: Shawn Heisey
> Attachments: SOLR-7887-WIP.patch
>
>
> The logging services project has officially announced the EOL of log4j 1:
> https://blogs.apache.org/foundation/entry/apache_logging_services_project_announces
> In the official binary jetty deployment, we use use log4j 1.2 as our final 
> logging destination, so the admin UI has a log watcher that actually uses 
> log4j and java.util.logging classes.  That will need to be extended to add 
> log4j2.  I think that might be the largest pain point to this upgrade.
> There is some crossover between log4j2 and slf4j.  Figuring out exactly which 
> jars need to be in the lib/ext directory will take some research.
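
As one data point for the log-watcher work, a minimal sketch of changing levels 
programmatically with the log4j2 API (assuming log4j-core 2.x on the classpath; the 
logger name and levels are placeholders):

{code}
import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.core.config.Configurator;

public class Log4j2LevelChange {
    public static void main(String[] args) {
        // The log4j2 counterpart of the log4j 1.x level changes the current
        // admin UI log watcher performs.
        Configurator.setRootLevel(Level.WARN);
        Configurator.setLevel("org.apache.solr", Level.DEBUG);
    }
}
{code}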



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: [JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+136) - Build # 17849 - Failure!

2016-09-19 Thread Uwe Schindler
Hi,

I contacted the compiler group and Rory O'Donnell about this. Looks strange, 
maybe option parsing broke in build 136.

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de

> -Original Message-
> From: Policeman Jenkins Server [mailto:jenk...@thetaphi.de]
> Sent: Monday, September 19, 2016 11:31 AM
> To: dev@lucene.apache.org
> Subject: [JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+136) - Build #
> 17849 - Failure!
> Importance: Low
> 
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17849/
> Java: 64bit/jdk-9-ea+136 -XX:-UseCompressedOops -XX:+UseG1GC
> 
> No tests ran.
> 
> Build Log:
> [...truncated 81 lines...]
> BUILD FAILED
> /home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:763: The
> following error occurred while executing this line:
> /home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:707: The
> following error occurred while executing this line:
> /home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:59: The
> following error occurred while executing this line:
> /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build.xml:50:
> The following error occurred while executing this line:
> /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/common-
> build.xml:501: The following error occurred while executing this line:
> /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/common-
> build.xml:1955: Compile failed; see the compiler error output for details.
> 
> Total time: 1 second
> Build step 'Invoke Ant' marked build as failure
> Archiving artifacts
> [WARNINGS] Skipping publisher since build result is FAILURE
> Recording test results
> ERROR: Step ‘Publish JUnit test result report’ failed: No test report files 
> were
> found. Configuration error?
> Email was triggered for: Failure - Any
> Sending email for trigger: Failure - Any
> 



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6082) Umbrella JIRA for Admin UI and SolrCloud.

2016-09-19 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503407#comment-15503407
 ] 

Upayavira commented on SOLR-6082:
-

[~mkhludnev] please create a separate ticket. The two UIs should be using the 
same back-end so should exhibit the same behaviour.

> Umbrella JIRA for Admin UI and SolrCloud.
> -
>
> Key: SOLR-6082
> URL: https://issues.apache.org/jira/browse/SOLR-6082
> Project: Solr
>  Issue Type: Improvement
>  Components: web gui
>Affects Versions: 4.9, 6.0
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>
> It would be very helpful if the admin UI were more "cloud friendly". This is 
> an umbrella JIRA so we can collect sub-tasks as necessary. I think there 
> might be scattered JIRAs about this, let's link them in as we find them.
> [~steffkes] - I've taken the liberty of assigning it to you since you 
> expressed some interest. Feel free to assign it back if you want...
> Let's imagine that a user has a cluster with _no_ collections assigned and 
> start from there.
> Here's a simple way to set this up. Basically you follow the reference guide 
> tutorial but _don't_ define a collection.
> 1> completely delete the "collection1" directory from example
> 2> cp -r example example2
> 3> in example, execute "java -DzkRun -jar start.jar"
> 4> in example2, execute "java -Djetty.port=7574 -DzkHost=localhost:9983 -jar 
> start.jar"
> Now the "cloud link" appears. If you expand the tree view, you see the two 
> live nodes. But, there's nothing in the graph view, no cores are selectable, 
> etc.
> First problem (need to solve before any sub-jiras, so including it here): You 
> have to push a configuration directory to ZK.
> [~thetapi] The _last_ time Stefan and I started allowing files to be written 
> to Solr from the UI it was...unfortunate. I'm assuming that there's something 
> similar here. That is, we shouldn't allow pushing the Solr config _to_ 
> ZooKeeper through the Admin UI, where they'd be distributed to all the solr 
> nodes. Is that true? If this is a security issue, we can keep pushing the 
> config dirs to ZK a manual step for now...
> Once we determine how to get configurations up, we can work on the various 
> sub-jiras.
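Pushing a configuration directory to ZK as a manual step can be done with the 
cloud-scripts zkcli tool; a sketch, assuming the embedded ZooKeeper started with 
-DzkRun on port 9983 (the confdir path, config name, and collection name are 
placeholders):

{code}
# upload a config directory to ZooKeeper under the name "myconf"
example/scripts/cloud-scripts/zkcli.sh -zkhost localhost:9983 \
  -cmd upconfig -confdir /path/to/myconf/conf -confname myconf

# then create a collection that references the uploaded config
curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=mycoll&numShards=2&collection.configName=myconf"
{code}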



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6082) Umbrella JIRA for Admin UI and SolrCloud.

2016-09-19 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503324#comment-15503324
 ] 

Mikhail Khludnev commented on SOLR-6082:


Just a note: if ZooKeeper is unavailable, the new UI freezes for a long time and 
then responds with \{\{placeholders}}. The old UI is much more responsive in such a 
disaster. 

> Umbrella JIRA for Admin UI and SolrCloud.
> -
>
> Key: SOLR-6082
> URL: https://issues.apache.org/jira/browse/SOLR-6082
> Project: Solr
>  Issue Type: Improvement
>  Components: web gui
>Affects Versions: 4.9, 6.0
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>
> It would be very helpful if the admin UI were more "cloud friendly". This is 
> an umbrella JIRA so we can collect sub-tasks as necessary. I think there 
> might be scattered JIRAs about this, let's link them in as we find them.
> [~steffkes] - I've taken the liberty of assigning it to you since you 
> expressed some interest. Feel free to assign it back if you want...
> Let's imagine that a user has a cluster with _no_ collections assigned and 
> start from there.
> Here's a simple way to set this up. Basically you follow the reference guide 
> tutorial but _don't_ define a collection.
> 1> completely delete the "collection1" directory from example
> 2> cp -r example example2
> 3> in example, execute "java -DzkRun -jar start.jar"
> 4> in example2, execute "java -Djetty.port=7574 -DzkHost=localhost:9983 -jar 
> start.jar"
> Now the "cloud link" appears. If you expand the tree view, you see the two 
> live nodes. But, there's nothing in the graph view, no cores are selectable, 
> etc.
> First problem (need to solve before any sub-jiras, so including it here): You 
> have to push a configuration directory to ZK.
> [~thetapi] The _last_ time Stefan and I started allowing files to be written 
> to Solr from the UI it was...unfortunate. I'm assuming that there's something 
> similar here. That is, we shouldn't allow pushing the Solr config _to_ 
> ZooKeeper through the Admin UI, where they'd be distributed to all the solr 
> nodes. Is that true? If this is a security issue, we can keep pushing the 
> config dirs to ZK a manual step for now...
> Once we determine how to get configurations up, we can work on the various 
> sub-jiras.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6677) reduce logging during Solr startup

2016-09-19 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503284#comment-15503284
 ] 

Noble Paul commented on SOLR-6677:
--

bq. If someone needs to debug they can turn on debugging. And we should 
probably make it easier to enable debugging as well, i.e. through a bin/solr 
start -v argument which somehow changes the loglevel to DEBUG

+1
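Until such a flag exists, the log level can be raised on a running node through the 
logging admin endpoint; a sketch (this is the endpoint behind the admin UI's Logging 
screen, while the bin/solr flag shown is the hypothetical addition discussed above):

{code}
# raise the root logger to DEBUG on a running node
curl "http://localhost:8983/solr/admin/info/logging?set=root:DEBUG&wt=json"

# hypothetical startup flag proposed above (not implemented at the time of this thread)
bin/solr start -v
{code}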

> reduce logging during Solr startup
> --
>
> Key: SOLR-6677
> URL: https://issues.apache.org/jira/browse/SOLR-6677
> Project: Solr
>  Issue Type: Bug
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> most of what is printed is neither helpful nor useful. It's just noise



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: request SOLR - spatial field with Intersect and Contains functions

2016-09-19 Thread David Smiley
Hi; please ask on the solr-user list.  This is the internal dev list.
Thanks.
~ David

On Mon, Sep 19, 2016 at 8:34 AM Leo BRUVRY-LAGADEC <
leo.bruvry.laga...@partenaire-exterieur.ifremer.fr> wrote:

> Hi,
>
> I am trying spatial search in SOLR 5.0 and I don't know how to implement
> a solution for the problem I will try to explain.
>
> On a SOLR server I have indexed a collection of objects that contain a
> spatial field:
>
>  multiValued="true" />
>  class="solr.SpatialRecursivePrefixTreeFieldType"
> geo="true"
> distErrPct="0.025"
> maxDistErr="0.09"
> distanceUnits="degrees" />
>
> The spatial data indexed in the field named "geo" can be ENVELOPE or
> LINESTRING :
>
> LINESTRING(-4.6837 48.5792, -4.6835 48.5788, -4.684
> 48.5788, -4.6832 48.579, -4.6837 48.5792, -4.6188 48.6265, -4.6122
> 48.63, -4.615 48.6258, -4.6125 48.6215, -4.6112 48.6218)
>
> or
>
> ENVELOPE(-5.0, -4.0, 49.0, 48.0)
>
> Currently in my application, when I do a SOLR request to get objects that
> are in a spatial area, I do something like this:
>
> q=:=(geo:"Intersects(ENVELOPE(-116.894531, 107.402344, 57.433227,
> -42.146973))")
>
> But I want to change how it works. Now, when the geo field contains an
> ENVELOPE I want to do a CONTAINS request, and when it contains a
> LINESTRING I want to do an INTERSECTS request.
>
> example :
>
> If geo = ENVELOPE then q=*:*=(geo:"Contains(ENVELOPE(-116.894531,
> 107.402344, 57.433227, -42.146973))")
>
> If geo = LINESTRING then q=*:*=(geo:"Intersects(ENVELOPE(-116.894531,
> 107.402344, 57.433227, -42.146973))")
>
> How can my application know if the field contains ENVELOPE or LINESTRING?
>
> Any idea how this can be done?
>
> Best regards,
> Leo.
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
> --
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com
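One commonly suggested approach to the question quoted above is to index the shape 
type alongside the geometry and branch on it in the filter query. A sketch, assuming a 
hypothetical geo_type string field populated at index time with ENVELOPE or LINESTRING:

{code}
q=*:*&fq=(geo_type:ENVELOPE AND _query_:"{!field f=geo}Contains(ENVELOPE(-116.894531, 107.402344, 57.433227, -42.146973))")
  OR (geo_type:LINESTRING AND _query_:"{!field f=geo}Intersects(ENVELOPE(-116.894531, 107.402344, 57.433227, -42.146973))")
{code}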


request SOLR - spatial field with Intersect and Contains functions

2016-09-19 Thread Leo BRUVRY-LAGADEC

Hi,

I am trying spatial search in SOLR 5.0 and I don't know how to implement 
a solution for the problem I will try to explain.


On a SOLR server I have indexed a collection of objects that contain a 
spatial field:


multiValued="true" />
class="solr.SpatialRecursivePrefixTreeFieldType"

   geo="true"
   distErrPct="0.025"
   maxDistErr="0.09"
   distanceUnits="degrees" />

The spatial data indexed in the field named "geo" can be ENVELOPE or 
LINESTRING :


LINESTRING(-4.6837 48.5792, -4.6835 48.5788, -4.684 
48.5788, -4.6832 48.579, -4.6837 48.5792, -4.6188 48.6265, -4.6122 
48.63, -4.615 48.6258, -4.6125 48.6215, -4.6112 48.6218)


or

ENVELOPE(-5.0, -4.0, 49.0, 48.0)

Currently in my application, when I do a SOLR request to get objects that 
are in a spatial area, I do something like this:


q=:=(geo:"Intersects(ENVELOPE(-116.894531, 107.402344, 57.433227, 
-42.146973))")


But I want to change how it works. Now, when the geo field contains an 
ENVELOPE I want to do a CONTAINS request, and when it contains a 
LINESTRING I want to do an INTERSECTS request.


example :

If geo = ENVELOPE then q=*:*=(geo:"Contains(ENVELOPE(-116.894531, 
107.402344, 57.433227, -42.146973))")


If geo = LINESTRING then q=*:*=(geo:"Intersects(ENVELOPE(-116.894531, 
107.402344, 57.433227, -42.146973))")


How can my application know if the field contains ENVELOPE or LINESTRING?

Any idea how this can be done?

Best regards,
Leo.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4968) Several ToParentBlockJoinQuery/Collector issues

2016-09-19 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503273#comment-15503273
 ] 

Michael McCandless commented on LUCENE-4968:


bq.  What do you think about replacing it with meaningful advice to check parent 
and child filter for empty intersection?

Or maybe do both?  Patch?

> Several ToParentBlockJoinQuery/Collector issues
> ---
>
> Key: LUCENE-4968
> URL: https://issues.apache.org/jira/browse/LUCENE-4968
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/join
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 4.3.1, 6.0
>
> Attachments: LUCENE-4968.patch
>
>
> I hit several issues with ToParentBlockJoinQuery/Collector:
>   * If a given Query sometimes has no child matches then we could hit
> AIOOBE, but should just get 0 children for that parent
>   * TPBJC.getTopGroups incorrectly throws IllegalArgumentException
> when the child query happens to have no matches
>   * We have checks that user didn't accidentally pass a child query
> that matches parent docs ... they are only assertions today but I
> think they should be real checks since it's easy to screw up
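On the third point, a minimal sketch of what turning the assertion into a real check 
could look like (names are approximate, not the actual Lucene code):

{code}
// inside the block-join child scorer: reject child documents that are actually
// parents instead of only assert-ing, since this is an easy user error to make
if (parentBits.get(childDoc)) {
  throw new IllegalStateException(
      "child query must only match non-parent docs, but parent docID=" + childDoc + " matched");
}
{code}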



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9530) Add an Atomic Update Processor

2016-09-19 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503269#comment-15503269
 ] 

Noble Paul commented on SOLR-9530:
--

Adding to [~arafalov]'s point, I would say all URPs should be able to optionally 
accept request params, and they must be available all the time. This will free 
users from unnecessarily mucking about with solrconfig.xml.

example:
{code}
 /update?preprocessor=atomic=my_new_field 
{code}
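Spelled out as a request, that would look roughly like the following; the processor 
parameter follows the proposal in the issue description, and the atomic.* parameter 
name is hypothetical since nothing is implemented yet:

{code}
curl 'http://localhost:8983/solr/gettingstarted/update/json?processor=atomic&atomic.my_new_field=add' \
  -H 'Content-Type: application/json' --data-binary @second-dump.json
{code}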

> Add an Atomic Update Processor 
> ---
>
> Key: SOLR-9530
> URL: https://issues.apache.org/jira/browse/SOLR-9530
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>
> I'd like to explore the idea of adding a new update processor to help ingest 
> partial updates.
> Example use-case - There are two datasets with a common id field. How can I 
> merge both of them at index time?
> Proposed Solution: 
> {code}
> 
>   
> add
>   
>   
>   
> 
> {code}
> So the first JSON dump could be ingested against 
> {{http://localhost:8983/solr/gettingstarted/update/json}}
> And then the second JSON could be ingested against
> {{http://localhost:8983/solr/gettingstarted/update/json?processor=atomic}}
> The Atomic Update Processor could support all the atomic update operations 
> currently supported.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-9502) All writers should automatically write MapSerializable as Map

2016-09-19 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-9502.
--
   Resolution: Fixed
Fix Version/s: 6.3

> All writers should automatically write MapSerializable as Map
> -
>
> Key: SOLR-9502
> URL: https://issues.apache.org/jira/browse/SOLR-9502
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 6.3
>
> Attachments: SOLR-9502.patch
>
>
> Move the MapSerializable class to {{o.a.s.common}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-9502) All writers should automatically write MapSerializable as Map

2016-09-19 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul reassigned SOLR-9502:


Assignee: Noble Paul

> All writers should automatically write MapSerializable as Map
> -
>
> Key: SOLR-9502
> URL: https://issues.apache.org/jira/browse/SOLR-9502
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 6.3
>
> Attachments: SOLR-9502.patch
>
>
> Move the MapSerializable class to {{o.a.s.common}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9502) All writers should automatically write MapSerializable as Map

2016-09-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503240#comment-15503240
 ] 

ASF subversion and git services commented on SOLR-9502:
---

Commit 1a3bacfc0f55fba0a00fbc03eb49cd19f68167f2 in lucene-solr's branch 
refs/heads/branch_6x from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1a3bacf ]

SOLR-9502: ResponseWriters should natively support MapSerializable


> All writers should automatically write MapSerializable as Map
> -
>
> Key: SOLR-9502
> URL: https://issues.apache.org/jira/browse/SOLR-9502
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
> Attachments: SOLR-9502.patch
>
>
> Move the MapSerializable class to {{o.a.s.common}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9502) All writers should automatically write MapSerializable as Map

2016-09-19 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503234#comment-15503234
 ] 

ASF subversion and git services commented on SOLR-9502:
---

Commit 1e18c12c19ea89469f5a27ffb0683d681b8c9d72 in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1e18c12 ]

SOLR-9502: ResponseWriters should natively support MapSerializable


> All writers should automatically write MapSerializable as Map
> -
>
> Key: SOLR-9502
> URL: https://issues.apache.org/jira/browse/SOLR-9502
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
> Attachments: SOLR-9502.patch
>
>
> Move the MapSerializable class to {{o.a.s.common}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9528) Make _docid_ (lucene id) a pseudo field

2016-09-19 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503230#comment-15503230
 ] 

Yonik Seeley commented on SOLR-9528:


-1 to changing the name.

\_docid\_ has been around forever (at least since 2010), and there's a high bar 
for breaking back compat: breaking it is a major source of frustration for users.  
Additionally, I've never actually seen anyone who has run into \_docid\_ 
confuse it with anything else.  If people didn't read the docs carefully, they 
would be just as likely to fall into the trap of considering "docnum" to be 
persistent (why wouldn't it be? it's the document number).

new meme: "hypothetical confusion considered harmful" ;-)

> Make _docid_ (lucene id) a pseudo field
> ---
>
> Key: SOLR-9528
> URL: https://issues.apache.org/jira/browse/SOLR-9528
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (7.0)
>Reporter: Alexandre Rafalovitch
>Priority: Minor
>
> Lucene document id is a transitory id that cannot be relied on as it can 
> change on document updates, etc.
> However, there are circumstances where it could be useful to use it in a 
> search. The primarily use is a debugging where some error messages provide 
> only lucene document id as the reference. For example:
> {noformat}
> child query must only match non-parent docs, but parent docID=38200 matched 
> childScorer=class org.apache.lucene.search.DisjunctionSumScorer
> {noformat}
> We already expose the lucene id with \[docid] transformer with \_docid_ 
> sorting.
> On the email list, [~yo...@apache.org] proposed that _docid_ should be a 
> legitimate pseudo-field, which would make it returnable, usable in function 
> queries, etc.
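For reference, what already works today can be written as a query sketch (the 
collection name is a placeholder); the proposal would additionally make \_docid\_ 
usable directly in fl and function queries:

{code}
# return the transient lucene docid for each document, ordered by index order
http://localhost:8983/solr/mycollection/select?q=*:*&fl=id,[docid]&sort=_docid_ asc
{code}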



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9531) QueryElevation component parametric field as doc IdField

2016-09-19 Thread Alessandro Benedetti (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro Benedetti updated SOLR-9531:
---
Description: 
Currently the queryElevation component will elevate specific documents matching 
the Id provided in input.
This is generally correct as we need to be sure the ID we boost is unique.

This can be potentially problematic when used with the field collapsing.

Specifically after we collapsed on fieldA, the collapsed results will have a 
unique value on fieldA.

This issue is to allow the flexibility, when necessary to elevate documents 
based on a different unique field instead of the primary key.

e.g.
In the index we store products by different suppliers.
Each document has:
 the unique Id :  
 the Id of the product : 

After collapsing on productId, productId will become unique and a good 
candidate for the queryElevation component.


- This issue will implement an additional request parameter for the 
queryElevation component : idField
The code will then be changed to be parametric.
I will take a look at the code; not sure if it is possible.

It will be the user's responsibility to provide an idField which makes sense.

  was:
Currently the queryElevation component will elevate specific documents matching 
the Id provided in input.
This is generally correct as we need to be sure the ID we boost is unique.

This can be potentially problematic when used with the field collapsing.

Specifically after we collapsed on fieldA, the collapsed results will have a 
unique value on fieldA.

This issue is to allow the flexibility, when necessary to elevate documents 
based on a different unique field instead of the primary key.

e.g.
In the index we store products by different suppliers.
Each document has:
 the unique Id :  
 the Id of the product : 

After collapsing on productId, productId will become unique and a good 
candidate for the queryElevation component.


- This issue will implement an additional request parameter for the 
queryElevation component : idField
The code will then be changed to be parametric ( quite a simple change) .

User responsibility will be to provide idField which make sense.


> QueryElevation component parametric field as doc IdField
> 
>
> Key: SOLR-9531
> URL: https://issues.apache.org/jira/browse/SOLR-9531
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.2
>Reporter: Alessandro Benedetti
>  Labels: component, elevation, query
>
> Currently the queryElevation component will elevate specific documents 
> matching the Id provided in input.
> This is generally correct as we need to be sure the ID we boost is unique.
> This can be potentially problematic when used with the field collapsing.
> Specifically after we collapsed on fieldA, the collapsed results will have a 
> unique value on fieldA.
> This issue is to allow the flexibility, when necessary to elevate documents 
> based on a different unique field instead of the primary key.
> e.g.
> In the index we store products by different suppliers.
> Each document has:
>  the unique Id :  
>  the Id of the product : 
> After collapsing on productId, productId will become unique and a good 
> candidate for the queryElevation component.
> - This issue will implement an additional request parameter for the 
> queryElevation component : idField
> The code will then be changed to be parametric.
> I will take a look at the code; not sure if it is possible.
> It will be the user's responsibility to provide an idField which makes sense.
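As an illustration, a request using the proposed parameter could look like the sketch 
below; enableElevation and elevateIds are existing QueryElevationComponent parameters, 
idField is the hypothetical addition, and the collection, query, and ids are placeholders:

{code}
http://localhost:8983/solr/products/select?q=ssd&fq={!collapse field=productId}&enableElevation=true&elevateIds=PROD-42&idField=productId
{code}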



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4787) Join Contrib

2016-09-19 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503138#comment-15503138
 ] 

Mikhail Khludnev commented on SOLR-4787:


Note: adding {{score=none}} as a local parameter can speed up joins on extremely 
large indexes.
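With the standard cross-core join query parser that would look roughly like the 
sketch below (field and core names mirror the hjoin example in the description):

{code}
fq={!join fromIndex=collection2 from=id_i to=id_i score=none}user:customer1
{code}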

> Join Contrib
> 
>
> Key: SOLR-4787
> URL: https://issues.apache.org/jira/browse/SOLR-4787
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Affects Versions: 4.2.1
>Reporter: Joel Bernstein
>Priority: Minor
> Fix For: 6.0
>
> Attachments: SOLR-4787-deadlock-fix.patch, 
> SOLR-4787-pjoin-long-keys.patch, SOLR-4787-with-testcase-fix.patch, 
> SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, 
> SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, 
> SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, 
> SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, 
> SOLR-4797-hjoin-multivaluekeys-nestedJoins.patch, 
> SOLR-4797-hjoin-multivaluekeys-trunk.patch
>
>
> This contrib provides a place where different join implementations can be 
> contributed to Solr. This contrib currently includes 3 join implementations. 
> The initial patch was generated from the Solr 4.3 tag. Because of changes in 
> the FieldCache API this patch will only build with Solr 4.2 or above.
> *HashSetJoinQParserPlugin aka hjoin*
> The hjoin provides a join implementation that filters results in one core 
> based on the results of a search in another core. This is similar in 
> functionality to the JoinQParserPlugin but the implementation differs in a 
> couple of important ways.
> The first way is that the hjoin is designed to work with int and long join 
> keys only. So, in order to use hjoin, int or long join keys must be included 
> in both the to and from core.
> The second difference is that the hjoin builds memory structures that are 
> used to quickly connect the join keys. So, the hjoin will need more memory 
> then the JoinQParserPlugin to perform the join.
> The main advantage of the hjoin is that it can scale to join millions of keys 
> between cores and provide sub-second response time. The hjoin should work 
> well with up to two million results from the fromIndex and tens of millions 
> of results from the main query.
> The hjoin supports the following features:
> 1) Both lucene query and PostFilter implementations. A *"cost"* > 99 will 
> turn on the PostFilter. The PostFilter will typically outperform the Lucene 
> query when the main query results have been narrowed down.
> 2) With the lucene query implementation there is an option to build the 
> filter with threads. This can greatly improve the performance of the query if 
> the main query index is very large. The "threads" parameter turns on 
> threading. For example *threads=6* will use 6 threads to build the filter. 
> This will setup a fixed threadpool with six threads to handle all hjoin 
> requests. Once the threadpool is created the hjoin will always use it to 
> build the filter. Threading does not come into play with the PostFilter.
> 3) The *size* local parameter can be used to set the initial size of the 
> hashset used to perform the join. If this is set above the number of results 
> from the fromIndex then you can avoid hashset resizing, which improves 
> performance.
> 4) Nested filter queries. The local parameter "fq" can be used to nest a 
> filter query within the join. The nested fq will filter the results of the 
> join query. This can point to another join to support nested joins.
> 5) Full caching support for the lucene query implementation. The filterCache 
> and queryResultCache should work properly even with deep nesting of joins. 
> Only the queryResultCache comes into play with the PostFilter implementation 
> because PostFilters are not cacheable in the filterCache.
> The syntax of the hjoin is similar to the JoinQParserPlugin except that the 
> plugin is referenced by the string "hjoin" rather than "join".
> fq=\{!hjoin fromIndex=collection2 from=id_i to=id_i threads=6 
> fq=$qq\}user:customer1=group:5
> The example filter query above will search the fromIndex (collection2) for 
> "user:customer1" applying the local fq parameter to filter the results. The 
> lucene filter query will be built using 6 threads. This query will generate a 
> list of values from the "from" field that will be used to filter the main 
> query. Only records from the main query, where the "to" field is present in 
> the "from" list will be included in the results.
> The solrconfig.xml in the main query core must contain the reference to the 
> hjoin.
>  class="org.apache.solr.joins.HashSetJoinQParserPlugin"/>
> And the join contrib lib jars must be registered in the solrconfig.xml.
>  
> After issuing the "ant dist" command from inside the solr directory 

[jira] [Commented] (SOLR-9532) BoolField always False when using shards

2016-09-19 Thread Gidon Junge (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503120#comment-15503120
 ] 

Gidon Junge commented on SOLR-9532:
---

It does seem to be connected.

> BoolField always False when using shards
> 
>
> Key: SOLR-9532
> URL: https://issues.apache.org/jira/browse/SOLR-9532
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.2
> Environment: Ubuntu
>Reporter: Gidon Junge
>Priority: Blocker
>
> After upgrading from Solr 5.5 to 6.2 I've encountered the following issue:
> If my documents contain BoolField values, they will be False no matter the value 
> when I use sharding.
> Solr 5.5:
> http://solr5:8983/solr/bug/select?q=*%3a*
> EQUALS the response from
> http://solr5:8983/solr/bug/select?shards=solr5%3a8983%2fsolr%2fbug=*%3a*
> Yet in Solr 6.2:
> http://solr6:8983/solr/bug/select?q=*%3a*
> Does NOT EQUAL the response from:
> http://solr6:8983/solr/bug/select?shards=solr6%3a8983%2fsolr%2fbug=*%3a*
> Schema used in both cases:
> 
> 
> 
>  sortMissingLast="true" omitNorms="true"/>
> 
> 
> omitNorms="true" positionIncrementGap="0"/>
>  omitNorms="true" positionIncrementGap="0"/>
>  omitNorms="true" positionIncrementGap="0"/>
>  precisionStep="0" omitNorms="true" positionIncrementGap="0"/>
>  omitNorms="true"/>
> 
> 
> 
>  stored="true" required="true" />
>  stored="false"/>
>   omitNorms="true" default="false"/>
>  omitNorms="true" default="false"/>
> omitNorms="true" default="true"/>
>  omitNorms="true" default="true"/>
> 
> id
> id
> 
>
> 
> TEST DOCUMENTS:
> [
> {
> "id": "demo1-empty"
> },
> {
> "id": "demo2-alltrue",
> "boolean1": "true",
> "boolean2": "true",
> "boolean3": "true",
> "boolean4": "true"
> },
> {
> "id": "demo3-alldefault",
> "boolean1": "false",
> "boolean2": "false",
> "boolean3": "true",
> "boolean4": "true"
> },
> {
> "id": "demo4-allopposite",
> "boolean1": "true",
> "boolean2": "true",
> "boolean3": "false",
> "boolean4": "false"
> }
> ]
> Correct Response received in Solr 5.5 and without shards in 6.2:
>  
> 
> demo1-empty
> 1545889354915774464
> false
> true
> false
> true
> 
> 
> demo2-alltrue
> true
> true
> true
> true
> 1545889381586305024
> 
> 
> demo3-alldefault
> false
> false
> true
> true
> 1545889419038294016
> 
> 
> demo4-allopposite
> true
> true
> false
> false
> 1545889444919246848
> 
> 
> Incorrect response received with shards parameter in Solr 6.2
> 
> 
> demo1-empty
> 1545889354915774464
> false
> false
> false
> false
> 
> 
> demo2-alltrue
> false
> false
> false
> false
> 1545889381586305024
> 
> 
> demo3-alldefault
> false
> false
> false
> false
> 1545889419038294016
> 
> 
> demo4-allopposite
> false
> false
> false
> false
> 1545889444919246848
> 
> 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9532) BoolField always False when using shards

2016-09-19 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503106#comment-15503106
 ] 

Alexandre Rafalovitch commented on SOLR-9532:
-

Could this be connected to SOLR-9490?

> BoolField always False when using shards
> 
>
> Key: SOLR-9532
> URL: https://issues.apache.org/jira/browse/SOLR-9532
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.2
> Environment: Ubuntu
>Reporter: Gidon Junge
>Priority: Blocker
>
> After upgrading from Solr 5.5 to 6.2 I've encountered the following issue:
> If my documents contain BoolField values, they will be False no matter the value 
> when I use sharding.
> Solr 5.5:
> http://solr5:8983/solr/bug/select?q=*%3a*
> EQUALS the response from
> http://solr5:8983/solr/bug/select?shards=solr5%3a8983%2fsolr%2fbug=*%3a*
> Yet in Solr 6.2:
> http://solr6:8983/solr/bug/select?q=*%3a*
> Does NOT EQUAL the response from:
> http://solr6:8983/solr/bug/select?shards=solr6%3a8983%2fsolr%2fbug=*%3a*
> Schema used in both cases:
> 
> 
> 
>  sortMissingLast="true" omitNorms="true"/>
> 
> 
> omitNorms="true" positionIncrementGap="0"/>
>  omitNorms="true" positionIncrementGap="0"/>
>  omitNorms="true" positionIncrementGap="0"/>
>  precisionStep="0" omitNorms="true" positionIncrementGap="0"/>
>  omitNorms="true"/>
> 
> 
> 
>  stored="true" required="true" />
>  stored="false"/>
>   omitNorms="true" default="false"/>
>  omitNorms="true" default="false"/>
> omitNorms="true" default="true"/>
>  omitNorms="true" default="true"/>
> 
> id
> id
> 
>
> 
> TEST DOCUMENTS:
> [
> {
> "id": "demo1-empty"
> },
> {
> "id": "demo2-alltrue",
> "boolean1": "true",
> "boolean2": "true",
> "boolean3": "true",
> "boolean4": "true"
> },
> {
> "id": "demo3-alldefault",
> "boolean1": "false",
> "boolean2": "false",
> "boolean3": "true",
> "boolean4": "true"
> },
> {
> "id": "demo4-allopposite",
> "boolean1": "true",
> "boolean2": "true",
> "boolean3": "false",
> "boolean4": "false"
> }
> ]
> Correct Response received in Solr 5.5 and without shards in 6.2:
>  
> 
> demo1-empty
> 1545889354915774464
> false
> true
> false
> true
> 
> 
> demo2-alltrue
> true
> true
> true
> true
> 1545889381586305024
> 
> 
> demo3-alldefault
> false
> false
> true
> true
> 1545889419038294016
> 
> 
> demo4-allopposite
> true
> true
> false
> false
> 1545889444919246848
> 
> 
> Incorrect response received with shards parameter in Solr 6.2
> 
> 
> demo1-empty
> 1545889354915774464
> false
> false
> false
> false
> 
> 
> demo2-alltrue
> false
> false
> false
> false
> 1545889381586305024
> 
> 
> demo3-alldefault
> false
> false
> false
> false
> 1545889419038294016
> 
> 
> demo4-allopposite
> false
> false
> false
> false
> 1545889444919246848
> 
> 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


