[jira] [Updated] (LUCENE-8150) Remove references to segments.gen.

2018-01-31 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-8150:
-
Attachment: LUCENE-8150.patch

> Remove references to segments.gen.
> --
>
> Key: LUCENE-8150
> URL: https://issues.apache.org/jira/browse/LUCENE-8150
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: master (8.0)
>
> Attachments: LUCENE-8150.patch
>
>
> This was the way we wrote pending segment files before we switched to 
> {{pending_segments_N}} in LUCENE-5925.






[jira] [Created] (LUCENE-8150) Remove references to segments.gen.

2018-01-31 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-8150:


 Summary: Remove references to segments.gen.
 Key: LUCENE-8150
 URL: https://issues.apache.org/jira/browse/LUCENE-8150
 Project: Lucene - Core
  Issue Type: Task
Reporter: Adrien Grand
 Fix For: master (8.0)


This was the way we wrote pending segment files before we switched to 
{{pending_segments_N}} in LUCENE-5925.






[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk-9.0.1) - Build # 432 - Still Unstable!

2018-01-31 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/432/
Java: 64bit/jdk-9.0.1 -XX:+UseCompressedOops -XX:+UseParallelGC

10 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.index.TestBackwardsCompatibility

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_480789F1C3CCFB7B-001\4.2.0-nocfs-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_480789F1C3CCFB7B-001\4.2.0-nocfs-001

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_480789F1C3CCFB7B-001\7.0.1-cfs-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_480789F1C3CCFB7B-001\7.0.1-cfs-001

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_480789F1C3CCFB7B-001\6.1.0-nocfs-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_480789F1C3CCFB7B-001\6.1.0-nocfs-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_480789F1C3CCFB7B-001\4.2.0-nocfs-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_480789F1C3CCFB7B-001\4.2.0-nocfs-001
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_480789F1C3CCFB7B-001\7.0.1-cfs-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_480789F1C3CCFB7B-001\7.0.1-cfs-001
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_480789F1C3CCFB7B-001\6.1.0-nocfs-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_480789F1C3CCFB7B-001\6.1.0-nocfs-001

at __randomizedtesting.SeedInfo.seed([480789F1C3CCFB7B]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
org.apache.lucene.search.suggest.analyzing.TestFreeTextSuggester.testIllegalByteDuringBuild

Error Message:
Unexpected exception type, expected IllegalArgumentException but got 
java.io.IOException: Could not remove the following files (in the order of 
attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\suggest\test\J0\.\temp\FreeTextSuggester.index.13979442008226260495:
 java.nio.file.DirectoryNotEmptyException: 
.\temp\FreeTextSuggester.index.13979442008226260495 

Stack Trace:
junit.framework.AssertionFailedError: Unexpected exception type, expected 
IllegalArgumentException but got java.io.IOException: Could not remove the 
following files (in the order of attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\suggest\test\J0\.\temp\FreeTextSuggester.index.13979442008226260495:
 java.nio.file.DirectoryNotEmptyException: 
.\temp\FreeTextSuggester.index.13979442008226260495

at 

[jira] [Commented] (LUCENE-8149) Document why NRTCachingDirectory needs to preemptively delete segment files.

2018-01-31 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348144#comment-16348144
 ] 

Adrien Grand commented on LUCENE-8149:
--

I suspect NRTCachingDirectory is a bit outdated. Those deletes should not be 
necessary since we never overwrite files? Extensions of {{FSDirectory}} like 
{{MMapDirectory}} or {{NIOFSDirectory}} would actually fail in such a case due 
to the {{StandardOpenOption.CREATE_NEW}} flag.
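
A standalone sketch of the {{StandardOpenOption.CREATE_NEW}} behavior mentioned 
above, using plain {{java.nio.file}} rather than Lucene's directory classes: the 
second attempt to create an already-existing file fails with 
{{FileAlreadyExistsException}} instead of overwriting it, which is why an 
FSDirectory-style output would surface an error rather than silently replace 
the file.

{code}
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class CreateNewDemo {
  public static void main(String[] args) throws IOException {
    Path file = Files.createTempDirectory("createnew-demo").resolve("_0.si");

    // First creation succeeds because the file does not exist yet.
    try (FileChannel ch = FileChannel.open(file,
        StandardOpenOption.WRITE, StandardOpenOption.CREATE_NEW)) {
      System.out.println("created " + file);
    }

    // A second CREATE_NEW on the same path fails instead of overwriting,
    // mirroring the behavior described for FSDirectory-based outputs.
    try (FileChannel ch = FileChannel.open(file,
        StandardOpenOption.WRITE, StandardOpenOption.CREATE_NEW)) {
      System.out.println("unexpectedly overwrote " + file);
    } catch (FileAlreadyExistsException e) {
      System.out.println("second create rejected: " + e);
    }
  }
}
{code}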


> Document why NRTCachingDirectory needs to preemptively delete segment files.
> 
>
> Key: LUCENE-8149
> URL: https://issues.apache.org/jira/browse/LUCENE-8149
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
>
> Moving over here from SOLR-11892. After getting through my confusion I've 
> found the following: NRTCachingDirectory.createOutput tries to delete segment 
> files before creating them. We should at least add a bit of commentary as to 
> what the pre-emptive delete is there for, since on the surface it's not 
> obvious.
> {code}
> try {
>   in.deleteFile(name);
> } catch (IOException ioe) {
>   // This is fine: file may not exist
> }
> {code}
> If I change to using MMapDirectory or NIOFSDirectory, these exceptions are not 
> thrown. What's special about NRTCachingDirectory that it needs this when two 
> of the possible underlying FS implementations apparently do not? Or is this 
> necessary for, say, Windows or file systems other than the two I tried? Or is 
> it some interaction between the RAM-based segments and segments on disk?






[jira] [Commented] (SOLR-11934) Visit Solr logging, it's too noisy.

2018-01-31 Thread Gus Heck (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348111#comment-16348111
 ] 

Gus Heck commented on SOLR-11934:
-

First off, let me say I LIKE the heavy logging. When the $#!7 hits the fan, if 
someone tells me when it went wrong, with a default Solr install I have 
something to dig into to find out what went wrong. Often the answer is 
something like: "because you had two mutually exclusive filters selected" or 
"the record was added but not committed yet" or "that's because the system 
paused for a stop-the-world GC for 40 minutes" or "that looks like a pagination 
bug in the UI, they added a filter but didn't reset the pagination" or "the 
term you're searching for is in a field that's not indexed". Many of these 
things rely on knowing *exactly* what the query or order of events was from 
*Solr's* perspective (particularly vs. what the customer +_thought_+ they sent 
for a query).

The case of a "very large system" where logging is burdensome is IMHO an edge 
case. Perhaps some sort of ready made profiles for this edge case should exist 
but there are almost always a whole lot more little fish than big fish in the 
sea.

That said, yes, the level of logging should be appropriate.  Since you asked 
for opinions, here's my opinion on levels:
 * FATAL - worthy of immediate attention (beeper, SMS, whatever): ZooKeeper 
unreachable, system fails to come up, etc.
 * ERROR - Something serious, should be looked at in the morning or at least 
some time soon, may be prelude to FATAL showing up, or high likelihood of 
customer bug report, very rarely something that can be ignored.
 * WARN - Something questionable, possible backlog ticket, possibly ignore/turn 
off in some cases.
 * INFO - that which is likely to be useful to figure out what led up to the ERROR 
that woke you at 3am... Also that which will be helpful in troubleshooting 
customer bug reports (yesterday at we got this empty result when there were 
definitely documents that should have matched... turns out to be a pagination 
bug). This definitely includes queries, updates, admin commands, gc logs, etc.
 * DEBUG - That which might help in troubleshooting or finding a bug, or 
troubleshooting an odd behavior. Rarely used by admins, usually by devs.
 * TRACE - Stuff only ever turned on by devs missing tufts of hair trying to 
figure out things like "what order did those 2 threads run X and Y in"

Possibly a separate ticket but related: breaking things out into separate files 
with potentially different rotation frequencies, verbosities, etc. seems like a 
decent idea. I only rarely want to see admin/updates/queries together, and none 
of those alongside system logging... GC is already separated, which is 
excellent. IMHO, turning off logging of queries/updates/whatever up front is 
premature optimization and presents a usability issue. Splitting things into 
more focused files probably helps most folks (except those who have been 
enjoying the firehose with log analysis tools, I suppose), but that's just my 
opinion. If things are broken up, it should be possible to restore the firehose 
for back compatibility.

The lack of query/update logs out of the box is one of the things that 
irritates me about Elastic FWIW.

Obviously it would be a different ticket, but maybe some additional admin UI 
controls  to augment the huge tree of loggers would be good? I'm thinking broad 
presets for default/quiet/verbose for each type of log. 

 

> Visit Solr logging, it's too noisy.
> ---
>
> Key: SOLR-11934
> URL: https://issues.apache.org/jira/browse/SOLR-11934
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> I think we have way too much INFO level logging. Or, perhaps more correctly, 
> Solr logging needs to be examined and messages logged at an appropriate level.
> We log every update at an INFO level for instance. But I think we log LIR at 
> INFO as well. As a sysadmin I don't care to have my logs polluted with a 
> message for every update, but if I'm trying to keep my system healthy I want 
> to see LIR messages and try to understand why.
> Plus, in large installations logging at INFO level is creating a _LOT_ of 
> files.
> What I want to discuss on this JIRA is:
> 1> What kinds of messages do we want to log at WARN, INFO, DEBUG, and TRACE 
> levels?
> 2> Who's the audience at each level? For a running system that's functioning, 
> sysops folks would really like WARN messages that mean something needs 
> attention, for instance. If I'm troubleshooting, should I turn on INFO? DEBUG? 
> TRACE?
> So let's say we get some kind of agreement as to the above. Then I propose 
> three things
> 1> Someone (and 

[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.1) - Build # 1272 - Still Unstable!

2018-01-31 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1272/
Java: 64bit/jdk-9.0.1 -XX:+UseCompressedOops -XX:+UseG1GC

4 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.analytics.legacy.facet.LegacyRangeFacetCloudTest

Error Message:
2 threads leaked from SUITE scope at 
org.apache.solr.analytics.legacy.facet.LegacyRangeFacetCloudTest: 1) 
Thread[id=41, name=qtp1909419050-41, state=TIMED_WAITING, 
group=TGRP-LegacyRangeFacetCloudTest] at 
java.base@9.0.1/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9.0.1/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@9.0.1/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2192)
 at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
 at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
 at java.base@9.0.1/java.lang.Thread.run(Thread.java:844)2) 
Thread[id=147, name=qtp1909419050-147, state=TIMED_WAITING, 
group=TGRP-LegacyRangeFacetCloudTest] at 
java.base@9.0.1/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9.0.1/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@9.0.1/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2192)
 at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
 at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
 at java.base@9.0.1/java.lang.Thread.run(Thread.java:844)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE 
scope at org.apache.solr.analytics.legacy.facet.LegacyRangeFacetCloudTest: 
   1) Thread[id=41, name=qtp1909419050-41, state=TIMED_WAITING, 
group=TGRP-LegacyRangeFacetCloudTest]
at java.base@9.0.1/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@9.0.1/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
at 
java.base@9.0.1/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2192)
at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
at java.base@9.0.1/java.lang.Thread.run(Thread.java:844)
   2) Thread[id=147, name=qtp1909419050-147, state=TIMED_WAITING, 
group=TGRP-LegacyRangeFacetCloudTest]
at java.base@9.0.1/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@9.0.1/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
at 
java.base@9.0.1/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2192)
at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
at java.base@9.0.1/java.lang.Thread.run(Thread.java:844)
at __randomizedtesting.SeedInfo.seed([55A736C440B6F3BE]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.analytics.legacy.facet.LegacyRangeFacetCloudTest

Error Message:
There are still zombie threads that couldn't be terminated:1) Thread[id=41, 
name=qtp1909419050-41, state=TIMED_WAITING, 
group=TGRP-LegacyRangeFacetCloudTest] at 
java.base@9.0.1/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9.0.1/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@9.0.1/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2192)
 at 

[jira] [Commented] (LUCENE-8148) Get precommit Lint warnings out of test code

2018-01-31 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348066#comment-16348066
 ] 

Robert Muir commented on LUCENE-8148:
-

Wouldn't it be more prudent to first fix the non-test code?

We still don't even fail on compiler warnings, which is the most basic, 
step-zero form of static analysis (I feel inclined to bring this up on every one 
of these "let's do fancy XYZ static analysis" issues, because it's so sad).

> Get precommit Lint warnings out of test code
> 
>
> Key: LUCENE-8148
> URL: https://issues.apache.org/jira/browse/LUCENE-8148
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> Mostly putting this up for discussion. I'm starting to work on Solr test lint 
> warnings, and it seems right to break the Lucene changes and Solr changes into 
> separate JIRAs.
> First of all, do people have objections to me mucking around in the Lucene 
> test code to do this? The eventual goal here is to get to the point where we 
> can turn on precommit failures on lint warnings. Deprecations maybe as well, 
> but that's a separate issue, as is non-test code.
> I expect to see a lot of pretty safe issues, then a series I'm not sure of; 
> I'll ask about those when I find them, if I wind up carrying this forward.






[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-9.0.1) - Build # 7148 - Still Unstable!

2018-01-31 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7148/
Java: 64bit/jdk-9.0.1 -XX:+UseCompressedOops -XX:+UseParallelGC

10 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.lucene.index.TestLongPostings

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestLongPostings_98258A9110904AF7-001\longpostings.1603048714465031682-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestLongPostings_98258A9110904AF7-001\longpostings.1603048714465031682-001

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestLongPostings_98258A9110904AF7-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestLongPostings_98258A9110904AF7-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestLongPostings_98258A9110904AF7-001\longpostings.1603048714465031682-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestLongPostings_98258A9110904AF7-001\longpostings.1603048714465031682-001
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestLongPostings_98258A9110904AF7-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestLongPostings_98258A9110904AF7-001

at __randomizedtesting.SeedInfo.seed([98258A9110904AF7]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  junit.framework.TestSuite.org.apache.lucene.store.TestMmapDirectory

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestMmapDirectory_98258A9110904AF7-001\tempDir-003:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestMmapDirectory_98258A9110904AF7-001\tempDir-003

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestMmapDirectory_98258A9110904AF7-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestMmapDirectory_98258A9110904AF7-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestMmapDirectory_98258A9110904AF7-001\tempDir-003:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestMmapDirectory_98258A9110904AF7-001\tempDir-003
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestMmapDirectory_98258A9110904AF7-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestMmapDirectory_98258A9110904AF7-001

at __randomizedtesting.SeedInfo.seed([98258A9110904AF7]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 

[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1465 - Still Failing

2018-01-31 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1465/

23 tests failed.
FAILED:  
org.apache.lucene.classification.CachingNaiveBayesClassifierTest.testPerformance

Error Message:
evaluation took more than 1m: 61s

Stack Trace:
java.lang.AssertionError: evaluation took more than 1m: 61s
at 
__randomizedtesting.SeedInfo.seed([EEA69EE06AE0A753:29476CC201549FFC]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.lucene.classification.CachingNaiveBayesClassifierTest.testPerformance(CachingNaiveBayesClassifierTest.java:111)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.AliasIntegrationTest.testModifyMetadataV1

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([961BD137E449B14B:477B6FFEC4E5A212]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.cloud.AliasIntegrationTest.checkFooAndBarMeta(AliasIntegrationTest.java:283)
at 
org.apache.solr.cloud.AliasIntegrationTest.testModifyMetadataV1(AliasIntegrationTest.java:251)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 

[jira] [Resolved] (SOLR-11892) Avoid unnecessary exceptions in FSDirectory and RAMDirectory

2018-01-31 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-11892.
---
Resolution: Invalid

Moved over to Lucene where it properly belongs.

> Avoid unnecessary exceptions in FSDirectory and RAMDirectory
> 
>
> Key: SOLR-11892
> URL: https://issues.apache.org/jira/browse/SOLR-11892
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: Screen Shot 2018-01-24 at 9.09.55 PM.png, Screen Shot 
> 2018-01-24 at 9.10.47 PM.png
>
>
> In privateDeleteFile, just use deleteIfExists.
> In RAMDirectory, we can declare a static exception and create it once.
>  
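
A minimal sketch of the two ideas above, with hypothetical names and simplified 
signatures rather than the actual FSDirectory/RAMDirectory code: 
{{Files.deleteIfExists}} avoids using an exception for the missing-file case, 
and a single pre-built exception avoids filling in a fresh stack trace on every 
miss.

{code}
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;

public class AvoidUnnecessaryExceptionsSketch {

  // Idea 2: build one exception up front instead of constructing (and filling
  // in a stack trace for) a new one on every miss. The trade-off is that the
  // shared instance carries no useful stack trace.
  private static final FileNotFoundException STATIC_NOT_FOUND =
      new FileNotFoundException("file does not exist");

  // Idea 1: FSDirectory-style delete that simply ignores a missing file.
  static void privateDeleteFile(Path dir, String name) throws IOException {
    Files.deleteIfExists(dir.resolve(name));
  }

  // RAMDirectory-style delete that still reports a missing file, but reuses
  // the pre-built exception rather than allocating a new one each time.
  static void deleteInMemoryFile(Map<String, byte[]> files, String name)
      throws IOException {
    if (files.remove(name) == null) {
      throw STATIC_NOT_FOUND;
    }
  }
}
{code}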






[jira] [Created] (LUCENE-8149) Document why NRTCachingDirectory needs to preemptively delete segment files.

2018-01-31 Thread Erick Erickson (JIRA)
Erick Erickson created LUCENE-8149:
--

 Summary: Document why NRTCachingDirectory needs to preemptively 
delete segment files.
 Key: LUCENE-8149
 URL: https://issues.apache.org/jira/browse/LUCENE-8149
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Erick Erickson
Assignee: Erick Erickson


Moving over here from SOLR-11892. After getting through my confusion I've found 
the following: NRTCachingDirectory.createOutput tries to delete segment files 
before creating them. We should at least add a bit of commentary as to what the 
pre-emptive delete is there for, since on the surface it's not obvious.

{code}
try {
  in.deleteFile(name);
} catch (IOException ioe) {
  // This is fine: file may not exist
}
{code}

If I change to using MMapDirectory or NIOFSDirectory, these exceptions are not 
thrown. What's special about NRTCachingDirectory that it needs this when two of 
the possible underlying FS implementations apparently do not? Or is this 
necessary for, say, Windows or file systems other than the two I tried? Or is it 
some interaction between the RAM-based segments and segments on disk?







[jira] [Created] (LUCENE-8148) Get precommit Lint warnings out of test code

2018-01-31 Thread Erick Erickson (JIRA)
Erick Erickson created LUCENE-8148:
--

 Summary: Get precommit Lint warnings out of test code
 Key: LUCENE-8148
 URL: https://issues.apache.org/jira/browse/LUCENE-8148
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Erick Erickson
Assignee: Erick Erickson


Mostly putting this up for discussion. I'm starting to work on Solr test lint 
warnings, and it seems right to break the Lucene changes and Solr changes into 
separate JIRAs.

First of all, do people have objections to me mucking around in the Lucene test 
code to do this? The eventual goal here is to get to the point where we can 
turn on precommit failures on lint warnings. Deprecations maybe as well, but 
that's a separate issue, as is non-test code.

I expect to see a lot of pretty safe issues, then a series I'm not sure of; 
I'll ask about those when I find them, if I wind up carrying this forward.






[jira] [Updated] (SOLR-10809) Get precommit lint warnings out of Solr test code

2018-01-31 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-10809:
--
Summary: Get precommit lint warnings out of Solr test code  (was: Get 
precommit lint warnings out of test code)

> Get precommit lint warnings out of Solr test code
> -
>
> Key: SOLR-10809
> URL: https://issues.apache.org/jira/browse/SOLR-10809
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> There are over 300 precommit WARNING messages in test files. Getting them out 
> will largely be rote maintenance. See SOLR-10778.
> If anyone is going to help here, please create a sub-task first identifying 
> what you're going to work on. Suggestion: Take a particular test _directory_, 
> e.g.
> lucene/analysis/common/src/test/org/apache/lucene/analysis/ and create a 
> sub-jira linked back to this one.






[jira] [Commented] (SOLR-11775) json.facet can use inconsistent Long/Integer for "count" depending on shard count

2018-01-31 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348012#comment-16348012
 ] 

Yonik Seeley commented on SOLR-11775:
-

facet.field faceting does return an integer in distrib mode as well, but client 
code should not rely on getting an integer vs a long given that distributed 
search & faceting supports greater than 2B docs.
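
A minimal client-side sketch of that advice, using a plain map as a stand-in 
for a facet bucket in a parsed response: widen the count through {{Number}} 
instead of casting to {{Integer}} or {{Long}}, so the same code handles 
single-shard and distributed responses.

{code}
import java.util.Map;

public class FacetCountSketch {
  /**
   * Read a facet bucket "count" without assuming Integer vs Long.
   * Single-shard responses may carry Integer, distributed ones Long,
   * so widen through Number rather than casting to a concrete type.
   */
  static long countOf(Map<String, Object> bucket) {
    Number count = (Number) bucket.get("count");
    return count == null ? 0L : count.longValue();
  }

  public static void main(String[] args) {
    System.out.println(countOf(Map.of("val", "2018-01", "count", 42)));  // Integer
    System.out.println(countOf(Map.of("val", "2018-02", "count", 42L))); // Long
  }
}
{code}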

> json.facet can use inconsistent Long/Integer for "count" depending on shard 
> count
> -
>
> Key: SOLR-11775
> URL: https://issues.apache.org/jira/browse/SOLR-11775
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Hoss Man
>Priority: Major
>
> (NOTE: I noticed this while working on a test for {{type: range}} but it's 
> possible other facet types may be affected as well)
> When dealing with a single core request -- either standalone or a collection 
> with only one shard -- json.facet seems to use "Integer" objects to return 
> the "count" of facet buckets; however, if the shard count is increased then 
> the end client gets a "Long" object for the "count".
> (This isn't noticable when using {{wt=json}} but can be very problematic when 
> trying to write client code using {{wt=xml}} or SolrJ






[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-10-ea+41) - Build # 21373 - Still Unstable!

2018-01-31 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21373/
Java: 64bit/jdk-10-ea+41 -XX:+UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testMetricTrigger

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([2B791F8AD696B655:91752805897E601A]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.junit.Assert.assertNull(Assert.java:562)
at 
org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testMetricTrigger(TriggerIntegrationTest.java:1575)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
org.apache.solr.handler.admin.SegmentsInfoRequestHandlerTest.testSegmentInfosVersion

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 

[jira] [Commented] (SOLR-11739) Solr can accept duplicated async IDs

2018-01-31 Thread Gus Heck (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348004#comment-16348004
 ] 

Gus Heck commented on SOLR-11739:
-

[~hossman], To me it seems that with your 3-command example things are a little 
worse than you say... There would be a window where id 222 could be seen as the 
success of CREATESOMETHING, and someone checking on DOWHATEVER might think 
DOWHATEVER had been done successfully (yay, go home, throw a party... ) but 
then DOWHATEVER fails (Monday's gonna be less fun...), and then some automated 
process checks on 222 to verify that it did actually CREATESOMETHING, but sees 
a failure... (drat, do it again... and again and again.. continually failing 
because SOMETHING now exists). 

Sure, it's their fault for not coordinating their IDs, but... why help them 
make that mistake?

I think any ID that is not unique is more or less useless. I haven't used async 
requests and haven't previously paid much attention to it, don't know the 
history, and I might be missing something, but I find it shocking that Solr is 
not generating the ID and ensuring its uniqueness. How about this: when the 
Overseer is elected, it establishes a source of entropy (a Random initialized 
from time) and uses that to issue UUIDs. There's only one overseer at a time, 
and the cases where 2 or more overseers are started at exactly the same time 
and coexist are a bug, right? If there's no overseer, commands can't be run 
anyway...
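
For illustration only, a minimal sketch of the kind of coordinator-side ID 
generation being suggested. The class and method names are hypothetical, it 
uses {{java.util.UUID}} rather than a time-seeded Random, and it is not how 
the Overseer currently handles async IDs.

{code}
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public class AsyncIdIssuer {
  // IDs handed out by this (single, elected) coordinator instance.
  private final Set<String> issued = ConcurrentHashMap.newKeySet();

  /** Issue a fresh, effectively collision-free async request ID. */
  public String nextAsyncId() {
    String id = UUID.randomUUID().toString();
    issued.add(id);
    return id;
  }

  /** True if this coordinator issued the given ID. */
  public boolean knows(String id) {
    return issued.contains(id);
  }
}
{code}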

> Solr can accept duplicated async IDs
> 
>
> Key: SOLR-11739
> URL: https://issues.apache.org/jira/browse/SOLR-11739
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-11739.patch, SOLR-11739.patch
>
>
> Solr is supposed to reject duplicated async IDs; however, if the repeated IDs 
> are sent fast enough, a race condition in Solr will let the repeated IDs 
> through. The duplicated task is run and then silently fails to report as 
> completed because the same async ID is already in the completed map.






[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 420 - Still Unstable!

2018-01-31 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/420/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

4 tests failed.
FAILED:  org.apache.solr.cloud.AliasIntegrationTest.testModifyMetadataV2

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([FB172D5D63C9D5B9:76BA3FEF7D7AFD22]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.cloud.AliasIntegrationTest.checkFooAndBarMeta(AliasIntegrationTest.java:283)
at 
org.apache.solr.cloud.AliasIntegrationTest.testModifyMetadataV2(AliasIntegrationTest.java:237)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation

Error Message:
2 threads leaked from SUITE scope at 

[jira] [Commented] (SOLR-10809) Get precommit lint warnings out of test code

2018-01-31 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347999#comment-16347999
 ] 

Erick Erickson commented on SOLR-10809:
---

[~ysee...@gmail.com] Any objection to removing Closeable from DocSetBase? I'm 
looking at precommit warnings in test code and this is one. This is the code, 
and if the off-heap idea isn't going anywhere soon, it seems like it could be 
removed:
{code}
/** FUTURE: for off-heap */
@Override
public void close() throws IOException {
}
{code}

> Get precommit lint warnings out of test code
> 
>
> Key: SOLR-10809
> URL: https://issues.apache.org/jira/browse/SOLR-10809
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> There are over 300 precommit WARNING messages in test files. Getting them out 
> will largely be rote maintenance. See SOLR-10778.
> If anyone is going to help here, please create a sub-task first identifying 
> what you're going to work on. Suggestion: Take a particular test _directory_, 
> e.g.
> lucene/analysis/common/src/test/org/apache/lucene/analysis/ and create a 
> sub-jira linked back to this one.






[jira] [Assigned] (SOLR-10778) Ant precommit task WARNINGS about unclosed resources

2018-01-31 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson reassigned SOLR-10778:
-

Assignee: Erick Erickson

> Ant precommit task WARNINGS about unclosed resources
> 
>
> Key: SOLR-10778
> URL: https://issues.apache.org/jira/browse/SOLR-10778
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java
>Affects Versions: 4.6
>Reporter: Andrew Musselman
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: dated-warnings, dated-warnings.log, notclosed.txt
>
>
> During precommit we are seeing lots of warnings about resources that aren't 
> being closed, which could pose problems based on chat amongst the team. Log 
> snippet for example:
> [mkdir] Created dir: 
> /var/folders/5p/6b46rm_94dzc5m8d4v56tds4gp/T/ecj1165341501
>  [ecj-lint] Compiling 419 source files to 
> /var/folders/5p/6b46rm_94dzc5m8d4v56tds4gp/T/ecj1165341501
>  [ecj-lint] --
>  [ecj-lint] 1. WARNING in 
> /path/to/lucene-solr/solr/solrj/src/java/org/apache/solr/client/solrj/impl/LBHttpSolrClient.java
>  (at line 920)
>  [ecj-lint] new LBHttpSolrClient(httpSolrClientBuilder, httpClient, 
> solrServerUrls) :
>  [ecj-lint] 
> ^^^
>  [ecj-lint] Resource leak: '' is never closed
>  [ecj-lint] --
>  [ecj-lint] --
>  [ecj-lint] 2. WARNING in 
> /path/to/lucene-solr/solr/solrj/src/java/org/apache/solr/client/solrj/impl/StreamingBinaryResponseParser.java
>  (at line 49)
>  [ecj-lint] JavaBinCodec codec = new JavaBinCodec() {
>  [ecj-lint]  ^
>  [ecj-lint] Resource leak: 'codec' is never closed
>  [ecj-lint] --
>  [ecj-lint] --
>  [ecj-lint] 3. WARNING in 
> /path/to/lucene-solr/solr/solrj/src/java/org/apache/solr/client/solrj/request/JavaBinUpdateRequestCodec.java
>  (at line 90)
>  [ecj-lint] JavaBinCodec codec = new JavaBinCodec();
>  [ecj-lint]  ^
>  [ecj-lint] Resource leak: 'codec' is never closed
>  [ecj-lint] --
>  [ecj-lint] 4. WARNING in 
> /path/to/lucene-solr/solr/solrj/src/java/org/apache/solr/client/solrj/request/JavaBinUpdateRequestCodec.java
>  (at line 113)
>  [ecj-lint] JavaBinCodec codec = new JavaBinCodec() {
>  [ecj-lint]  ^
>  [ecj-lint] Resource leak: 'codec' is never closed
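
Since each warning above flags a {{Closeable}} that is never closed, here is a 
minimal sketch of the usual remedy, try-with-resources. The {{Codec}} class 
below is a stand-in for the flagged types, not Solr's actual JavaBinCodec.

{code}
import java.io.ByteArrayOutputStream;
import java.io.Closeable;
import java.io.IOException;

public class ResourceLeakFixSketch {

  // Stand-in for a Closeable codec such as the ones flagged by ecj-lint.
  static class Codec implements Closeable {
    void marshal(Object o, ByteArrayOutputStream out) { /* encode o into out */ }
    @Override public void close() { /* release buffers, etc. */ }
  }

  static byte[] encode(Object o) throws IOException {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    // try-with-resources closes the codec on every path, which is what
    // silences the "Resource leak: 'codec' is never closed" warning.
    try (Codec codec = new Codec()) {
      codec.marshal(o, out);
    }
    return out.toByteArray();
  }
}
{code}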






[jira] [Commented] (SOLR-11766) Ref Guide: redesign Streaming Expression reference pages

2018-01-31 Thread Gus Heck (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347976#comment-16347976
 ] 

Gus Heck commented on SOLR-11766:
-

Out of curiosity, is it possible to add JavaScript widgets, such as a filter 
mechanism for categories of expression, or a text box that culls the boxes on 
the quick ref page as you type? Obviously this doesn't work in the PDF world, 
but I'm just wondering if stuff like that can be added in a way that doesn't 
hamper the PDF? I notice that in the deployed online HTML version there's a 
search box, though (ironically for this project) it's limited to titles of 
sections and doesn't do full-text search. However, that seems to suggest that 
it's a JavaScript widget... 

> Ref Guide: redesign Streaming Expression reference pages
> 
>
> Key: SOLR-11766
> URL: https://issues.apache.org/jira/browse/SOLR-11766
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation, streaming expressions
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Major
> Attachments: Stream-collapsed-panels.png, StreamQuickRef-sample.png, 
> Streaming-expanded-panel.png
>
>
> There are a very large number of streaming expressions and they need some 
> special info design to be more easily accessible. The current way we're 
> presenting them doesn't really work. This issue is to track ideas and POC 
> patches for possible approaches.
> A couple of ideas I have, which may or may not all work together:
> # Provide a way to filter the list of commands by expression type (would need 
> to figure out the types)
> # Present the available expressions in smaller sections, similar in UX 
> concept to https://redis.io/commands. On that page, I can see 9-12 commands 
> above "the fold" on my laptop screen, as compared to today when I can see 
> only 1 expression at a time & each expression probably takes more space than 
> necessary. This idea would require figuring out where people go when they 
> click a command to get more information.
> ## One solution for where people go is to put all the commands back in one 
> massive page, but this isn't really ideal
> ## Another solution would be to have an individual .adoc file for each 
> expression and present them all individually.
> # Some of the Bootstrap.js options may help - collapsing panels or tabs, if 
> properly designed, may make it easier to see an overview of available 
> expressions and get more information if interested.
> I'll post more ideas as I come up with them.
> These ideas focus on the HTML layout of expressions - ideally we come up with 
> a solution for PDF that's better also, but we are much more limited in what 
> we can do there.






[jira] [Created] (SOLR-11934) Visit Solr logging, it's too noisy.

2018-01-31 Thread Erick Erickson (JIRA)
Erick Erickson created SOLR-11934:
-

 Summary: Visit Solr logging, it's too noisy.
 Key: SOLR-11934
 URL: https://issues.apache.org/jira/browse/SOLR-11934
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Erick Erickson
Assignee: Erick Erickson


I think we have way too much INFO level logging. Or, perhaps more correctly, 
Solr logging needs to be examined and messages logged at an appropriate level.

We log every update at an INFO level for instance. But I think we log LIR at 
INFO as well. As a sysadmin I don't care to have my logs polluted with a 
message for every update, but if I'm trying to keep my system healthy I want to 
see LIR messages and try to understand why.

Plus, in large installations logging at INFO level is creating a _LOT_ of files.

What I want to discuss on this JIRA is:
1> What kinds of messages do we want to log at WARN, INFO, DEBUG, and TRACE levels?
2> Who's the audience at each level? For a running system that's functioning, 
sysops folks would really like WARN messages that mean something needs attention, 
for instance. If I'm troubleshooting, should I turn on INFO? DEBUG? TRACE?

So let's say we get some kind of agreement as to the above. Then I propose 
three things
1> Someone (and probably me but all help gratefully accepted) needs to go 
through our logging and assign appropriate levels. This will take quite a 
while, I intend to work on it in small chunks.
2> Actually answer whether unnecessary objects are created when something like 
log.info("whatever {}", someObjectOrMethodCall); is invoked (see the sketch below). 
Is this independent of the logging implementation used? The SLF4J and log4j 
documentation seem a bit contradictory.
3> Maybe regularize log, logger, LOG as variable names, but that's a nit.
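
To make the question in 2> concrete, here is a minimal sketch (assuming SLF4J as the 
facade; the SomeState type and expensiveSummary() method are hypothetical stand-ins 
for any argument that is costly to build):

{code:java}
import java.lang.invoke.MethodHandles;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingCostSketch {
  private static final Logger log =
      LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());

  /** Hypothetical stand-in for any object whose summary is expensive to build. */
  interface SomeState {
    String expensiveSummary();
  }

  void example(SomeState state) {
    // The {} placeholder defers building the final message String until the level is
    // known to be enabled, but the argument expression (state.expensiveSummary())
    // is still evaluated at the call site either way.
    log.info("current state: {}", state.expensiveSummary());

    // An explicit guard skips both the formatting and the argument evaluation
    // when the level is disabled.
    if (log.isDebugEnabled()) {
      log.debug("current state: {}", state.expensiveSummary());
    }
  }
}
{code}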

As a tactical approach, I suggest we tag each LoggerFactory.getLogger in files 
we work on with //SOLR-(whatever number is assigned when I create this). We can 
remove them all later, but since I expect to approach this piecemeal it'd be 
nice to keep track of which files have been done already.

Finally, I really really really don't want to do this all at once. There are 
5-6 thousand log messages. Even at 1,000 a week that's 6 weeks; even starting 
now, it would probably span the 7.3 release.

This will probably be an umbrella issue so we can keep all the commits straight 
and people can volunteer to "fix the files in core" as a separate piece of work 
(hint).

There are several existing JIRAs about logging in general; let's link them in 
here as well.

Let the discussion begin!





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-11901) Improve how logging collects class name information

2018-01-31 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson reassigned SOLR-11901:
-

Assignee: Erick Erickson

> Improve how logging collects class name information
> ---
>
> Key: SOLR-11901
> URL: https://issues.apache.org/jira/browse/SOLR-11901
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Erick Erickson
>Priority: Minor
>
> The log4j.properties that we ship with Solr has this Pattern
> {code}
> %d{yyyy-MM-dd HH:mm:ss.SSS} %-5p (%t) [%X{collection} %X{shard} %X{replica} %X{core}] %c{1.} %m%n
> {code}
> The {{%c}} collects class name information ( more on this 
> http://logging.apache.org/log4j/2.x/manual/async.html#Location ) which 
> creates a throwable ( 
> https://github.com/apache/log4j/blob/trunk/src/main/java/org/apache/log4j/spi/LoggingEvent.java#L253
>  ) and it can be expensive
> Here is the stack trace excerpt from the JFR capture which lead to this issue
> {code}
> org.apache.log4j.spi.LoggingEvent.getLocationInformation()
> org.apache.log4j.helpers.PatternParser$ClassNamePatternConverter.getFullyQualifiedName(LoggingEvent)
> org.apache.log4j.helpers.PatternParser$NamedPatternConverter.convert(LoggingEvent)
> org.apache.log4j.helpers.PatternConverter.format(StringBuffer, LoggingEvent)
> org.apache.log4j.PatternLayout.format(LoggingEvent)
> org.apache.log4j.WriterAppender.subAppend(LoggingEvent)
> org.apache.log4j.RollingFileAppender.subAppend(LoggingEvent)
> ...
> org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.finish()
> 214,658 32.42   0
> {code}
> We could remove capturing the class name information from the default config, 
> but the class name is genuinely useful. So if we can find a way to capture it 
> that doesn't need to create a throwable, that would be ideal. 
> Here is an interesting read : 
> https://shipilev.net/blog/2014/exceptional-performance/
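
One commonly used way to get the class name into log output without per-event location 
lookups is to bake it into the logger name at construction time and print it with the 
logger-name conversion instead of the caller-location one. A minimal sketch of that 
general idiom (not necessarily the fix this issue ends up with):

{code:java}
import java.lang.invoke.MethodHandles;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CallerNameSketch {
  // The enclosing class is resolved once, when the class initializes, and becomes
  // the logger's name. A pattern layout can then print it via the logger name
  // rather than asking the framework to compute caller location for every event.
  private static final Logger log =
      LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());

  void doSomething() {
    log.info("doing something"); // no stack walk or throwable needed to identify the caller
  }
}
{code}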



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_144) - Build # 1271 - Still Unstable!

2018-01-31 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1271/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseG1GC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SyncSliceTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.cloud.SyncSliceTest: 1) 
Thread[id=10387, name=qtp28730764-10387, state=TIMED_WAITING, 
group=TGRP-SyncSliceTest] at sun.misc.Unsafe.park(Native Method)
 at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)  
   at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.SyncSliceTest: 
   1) Thread[id=10387, name=qtp28730764-10387, state=TIMED_WAITING, 
group=TGRP-SyncSliceTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([C401B50A945DEFFE]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SyncSliceTest

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=10387, name=qtp28730764-10387, state=TIMED_WAITING, 
group=TGRP-SyncSliceTest] at sun.misc.Unsafe.park(Native Method)
 at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)  
   at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=10387, name=qtp28730764-10387, state=TIMED_WAITING, 
group=TGRP-SyncSliceTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([C401B50A945DEFFE]:0)


FAILED:  org.apache.solr.cloud.TestAuthenticationFramework.testBasics

Error Message:
Error from server at 
https://127.0.0.1:33013/solr/testcollection_shard1_replica_n2: Expected mime 
type application/octet-stream but got text/html.Error 404 
Can not find: /solr/testcollection_shard1_replica_n2/update  
HTTP ERROR 404 Problem accessing 
/solr/testcollection_shard1_replica_n2/update. Reason: Can not find: 
/solr/testcollection_shard1_replica_n2/updatehttp://eclipse.org/jetty;>Powered by Jetty:// 9.4.8.v20171121  
  

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at https://127.0.0.1:33013/solr/testcollection_shard1_replica_n2: 
Expected mime type application/octet-stream but got text/html. 


Error 404 Can not find: 

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1654 - Still Unstable!

2018-01-31 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1654/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.cloud.AliasIntegrationTest.testModifyMetadataCAR

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([4E2A232F88A7D786:52786264357C77DE]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.cloud.AliasIntegrationTest.checkFooAndBarMeta(AliasIntegrationTest.java:283)
at 
org.apache.solr.cloud.AliasIntegrationTest.testModifyMetadataCAR(AliasIntegrationTest.java:262)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.autoscaling.ComputePlanActionTest.testNodeWithMultipleReplicasLost

Error Message:
The operations computed by ComputePlanAction should not be null 

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.1) - Build # 21372 - Still Unstable!

2018-01-31 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21372/
Java: 64bit/jdk-9.0.1 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.testHistory

Error Message:
expected:<5> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<5> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([937EEB2889CAB8F5:FE824FD5338247F2]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.testHistory(AutoscalingHistoryHandlerTest.java:274)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Could not find 

[JENKINS] Lucene-Solr-Tests-7.x - Build # 345 - Still Unstable

2018-01-31 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/345/

3 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testNodeAddedTriggerRestoreState

Error Message:
The trigger did not fire at all

Stack Trace:
java.lang.AssertionError: The trigger did not fire at all
at 
__randomizedtesting.SeedInfo.seed([70E0E3CC850E755D:F8DD6AB3BFCE94F0]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testNodeAddedTriggerRestoreState(TriggerIntegrationTest.java:426)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testCooldown

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([70E0E3CC850E755D:415E8E28FBA400AF]:0)
at org.junit.Assert.fail(Assert.java:92)
at 

[jira] [Commented] (SOLR-11916) new SortableTextField using docValues built from the original string input

2018-01-31 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347859#comment-16347859
 ] 

Hoss Man commented on SOLR-11916:
-


bq. re: useDocValuesAsStored – here's my straw man proposal after sleeping on 
it a bit...

I've updated the patch to implement this along with new tests.

Unless there are additional concerns/suggestions, I'll move forward 
w/ committing & backporting tomorrow.



bq. I had a number of use cases with an indexed field that required faceting 
over the original input (this would work with this type of field too, right?).

Yep yep ... absolutely.

For example, this sort of logic is currently in 
{{TestSortableTextField.testSimpleSearchAndFacets()}} ...

{code}
assertU(adoc("id","1", "whitespace_stxt", "how now brown cow ?"));
assertU(adoc("id","2", "whitespace_stxt", "how now brown cow ?"));
assertU(adoc("id","3", "whitespace_stxt", "holy cow !"));
assertU(adoc("id","4", "whitespace_stxt", "dog and cat"));

assertU(commit());

final String facet = "whitespace_stxt";
final String search = "whitespace_stxt";
// facet.field
final String fpre = "//lst[@name='facet_fields']/lst[@name='"+facet+"']/";
assertQ(req("q", search + ":cow", "rows", "0", 
"facet.field", facet, "facet", "true")
, "//*[@numFound='3']"
, fpre + "int[@name='how now brown cow ?'][.=2]"
, fpre + "int[@name='holy cow !'][.=1]"
, fpre + "int[@name='dog and cat'][.=0]"
);

// json facet
final String jpre = 
"//lst[@name='facets']/lst[@name='x']/arr[@name='buckets']/";
assertQ(req("q", search + ":cow", "rows", "0", 
"json.facet", "{x:{ type: terms, field:'" + facet + "', mincount:0 
}}")
, "//*[@numFound='3']"
, jpre + "lst[str[@name='val'][.='how now brown cow 
?']][int[@name='count'][.=2]]"
, jpre + "lst[str[@name='val'][.='holy cow 
!']][int[@name='count'][.=1]]"
, jpre + "lst[str[@name='val'][.='dog and 
cat']][int[@name='count'][.=0]]"
);
{code}

...although in the actual test, the "whitespace_stxt" field is copyFielded 
into many other fields w/ slightly diff configurations, and the "facet" and 
"search" variables are assigned in nested loops to prove that the "search" 
field behavior is consistent as long as the fields are indexed & the "facet" 
field behavior is consistent as long as the fields have docValues.

(In the latest patch, I even updated this to include a traditional TextField 
copy in the "search" permutations, and a traditional StrField copy in the 
"facet" permutations.)



> new SortableTextField using docValues built from the original string input
> --
>
> Key: SOLR-11916
> URL: https://issues.apache.org/jira/browse/SOLR-11916
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Schema and Analysis
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Attachments: SOLR-11916.patch, SOLR-11916.patch
>
>
> I propose adding a new SortableTextField subclass that would functionally 
> work the same as TextField except:
>  * {{docValues="true|false"}} could be configured, with the default being 
> "true"
>  * The docValues would contain the original input values (just like StrField) 
> for sorting (or faceting)
>  ** By default, to protect users from excessively large docValues, only the 
> first 1024 characters of each field value would be used – but this could be overridden 
> with configuration.
> 
> Consider the following sample configuration:
> {code:java}
> indexed="true" docValues="true" stored="true" multiValued="false"/>
> 
>   
>...
>   
>   
>...
>   
> 
> {code}
> Given a document with a title of "Solr In Action"
> Users could:
>  * Search for individual (indexed) terms in the "title" field: 
> {{q=title:solr}}
>  * Sort documents by title ( {{sort=title asc}} ) such that this document's 
> sort value would be "Solr In Action"
> If another document had a "title" value that was longer than 1024 chars, then 
> the docValues would be built using only the first 1024 characters of the 
> value (unless the user modified the configuration)
> This would be functionally equivalent to the following existing configuration 
> - including the on disk index segments - except that the on disk DocValues 
> would refer directly to the "title" field, reducing the total number of 
> "field infos" in the index (which has a small impact on segment housekeeping 
> and merge times) and end users would not need to sort on an alternate 
> "title_string" field name - the original "title" field name would always be 
> used directly.
> {code:java}
> indexed="true" docValues="true" stored="true" multiValued="false"/>
> indexed="false" 

[jira] [Updated] (SOLR-11916) new SortableTextField using docValues built from the original string input

2018-01-31 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-11916:

Attachment: SOLR-11916.patch

> new SortableTextField using docValues built from the original string input
> --
>
> Key: SOLR-11916
> URL: https://issues.apache.org/jira/browse/SOLR-11916
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Schema and Analysis
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Attachments: SOLR-11916.patch, SOLR-11916.patch
>
>
> I propose adding a new SortableTextField subclass that would functionally 
> work the same as TextField except:
>  * {{docValues="true|false"}} could be configured, with the default being 
> "true"
>  * The docValues would contain the original input values (just like StrField) 
> for sorting (or faceting)
>  ** By default, to protect users from excessively large docValues, only the 
> first 1024 characters of each field value would be used – but this could be overridden 
> with configuration.
> 
> Consider the following sample configuration:
> {code:java}
> indexed="true" docValues="true" stored="true" multiValued="false"/>
> 
>   
>...
>   
>   
>...
>   
> 
> {code}
> Given a document with a title of "Solr In Action"
> Users could:
>  * Search for individual (indexed) terms in the "title" field: 
> {{q=title:solr}}
>  * Sort documents by title ( {{sort=title asc}} ) such that this document's 
> sort value would be "Solr In Action"
> If another document had a "title" value that was longer than 1024 chars, then 
> the docValues would be built using only the first 1024 characters of the 
> value (unless the user modified the configuration)
> This would be functionally equivalent to the following existing configuration 
> - including the on disk index segments - except that the on disk DocValues 
> would refer directly to the "title" field, reducing the total number of 
> "field infos" in the index (which has a small impact on segment housekeeping 
> and merge times) and end users would not need to sort on an alternate 
> "title_string" field name - the original "title" field name would always be 
> used directly.
> {code:java}
> indexed="true" docValues="true" stored="true" multiValued="false"/>
> indexed="false" docValues="true" stored="false" multiValued="false"/>
> 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk-9) - Build # 429 - Still Unstable!

2018-01-31 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/429/
Java: 64bit/jdk-9 -XX:-UseCompressedOops -XX:+UseParallelGC

6 tests failed.
FAILED:  org.apache.solr.cloud.ReplaceNodeNoTargetTest.test

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([18E1F61438DFE615:90B5C9CE96238BED]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.cloud.ReplaceNodeNoTargetTest.test(ReplaceNodeNoTargetTest.java:92)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
org.apache.solr.cloud.TestDeleteCollectionOnDownNodes.deleteCollectionWithDownNodes

Error Message:
Timed out waiting for leader elections null Live Nodes: [127.0.0.1:59503_solr, 
127.0.0.1:59505_solr] Last available state: null

Stack Trace:
java.lang.AssertionError: Timed out waiting for leader elections
null
Live Nodes: 

[jira] [Commented] (LUCENE-8145) UnifiedHighlighter should use single OffsetEnum rather than List

2018-01-31 Thread Timothy M. Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347823#comment-16347823
 ] 

Timothy M. Rodriguez commented on LUCENE-8145:
--

Thanks for the CC [~dsmiley].

[~romseygeek] really nice change!  It definitely simplifies things quite a bit, and 
conceptually one meta OffsetEnum over the field makes more sense than the previous 
list.

I'm in favor of keeping the summed frequency on MTQ, or at least preserving a 
mechanism to keep it on.  The extra occurrences aren't spurious in 
all cases.  For example, consider "expert" systems where users are accustomed 
to using wildcards for stemming-like expressions, e.g. purchas* for getting 
variants of the word purchase.  In those cases, the extra frequency counts 
would hopefully select a better passage.



I'm not so sure about setScore being passed a scorer and a content length to 
set the score, though. That feels awkward to me.  If we were to keep it this 
way, I'd argue a Passage should receive the PassageScorer and content length at 
construction instead of via the setScore method.  If we did that, I think we 
could incrementally build the score instead of tracking terms and frequencies 
for a later score calculation?  Another choice is to move a lot of the scoring 
behavior out, and perhaps introduce another class that tracks the terms and 
score in a passage, analogous to Weight?

 

 

> UnifiedHighlighter should use single OffsetEnum rather than List
> 
>
> Key: LUCENE-8145
> URL: https://issues.apache.org/jira/browse/LUCENE-8145
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/highlighter
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Attachments: LUCENE-8145.patch
>
>
> The UnifiedHighlighter deals with several different aspects of highlighting: 
> finding highlight offsets, breaking content up into snippets, and passage 
> scoring.  It would be nice to split this up so that consumers can use them 
> separately.
> As a first step, I'd like to change the API of FieldOffsetStrategy to return 
> a single unified OffsetsEnum, rather than a collection of them.  This will 
> make it easier to expose the OffsetsEnum of a document directly from the 
> highlighter, bypassing snippet extraction and scoring.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11858) NPE in DirectSpellChecker

2018-01-31 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347822#comment-16347822
 ] 

Robert Muir commented on SOLR-11858:


I opened LUCENE-8147 to give better exceptions here. It may not ultimately be 
related to this issue but it is needed.

> NPE in DirectSpellChecker
> -
>
> Key: SOLR-11858
> URL: https://issues.apache.org/jira/browse/SOLR-11858
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spellchecker
>Affects Versions: 7.1
>Reporter: Markus Jelsma
>Priority: Major
> Fix For: master (8.0), 7.3
>
>
> We just came across the following NPE. It seems this NPE only appears when 
> the query is incorrectly spelled but the response has more than 0 results. We 
> have not observed this on other 7.1.0 deployments. 
> {code}
> 2018-01-16 09:15:00.009 ERROR (qtp329611835-19) [c] o.a.s.h.RequestHand
> lerBase java.lang.NullPointerException
>  at 
> org.apache.lucene.search.spell.DirectSpellChecker.suggestSimilar(DirectSpellChecker.java:421)
>  at 
> org.apache.lucene.search.spell.DirectSpellChecker.suggestSimilar(DirectSpellChecker.java:353)
>  at 
> org.apache.solr.spelling.DirectSolrSpellChecker.getSuggestions(DirectSolrSpellChecker.java:186)
>  at 
> org.apache.solr.handler.component.SpellCheckComponent.process(SpellCheckComponent.java:195)
>  at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:295)
>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
>  at org.apache.solr.core.SolrCore.execute(SolrCore.java:2484)
>  at 
> {code}
> Config:
> {code}
>  
> text_general
> 
>   default
>   spellcheck
>   solr.DirectSolrSpellChecker
>   internal
>   0.5
>   2
>   1
>   5
>   4
>   0.01
> 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8147) DirectSpellChecker needs better parameter checks

2018-01-31 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-8147:
---

 Summary: DirectSpellChecker needs better parameter checks
 Key: LUCENE-8147
 URL: https://issues.apache.org/jira/browse/LUCENE-8147
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/suggest
Reporter: Robert Muir


This thing has a lot of parameters (and option setters too), but it looks like it 
really needs better checks. For example, if I ask for zero suggestions, I think 
it may give a confusing NPE instead: SOLR-11858

There are probably other cases too: we should add all the missing checks and 
give IllegalArgumentExceptions and so on instead.
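
A minimal sketch of the style of check being proposed (the parameter name and message 
are illustrative; exactly which parameters get validated, and how, is what this issue 
would decide):

{code:java}
final class ParamChecks {
  private ParamChecks() {}

  /** Fail fast with a clear message instead of letting a bad value surface later as an NPE. */
  static void checkNumSuggestions(int numSug) {
    if (numSug < 1) {
      throw new IllegalArgumentException("numSug must be at least 1 (got " + numSug + ")");
    }
  }
}
{code}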



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11858) NPE in DirectSpellChecker

2018-01-31 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347821#comment-16347821
 ] 

Robert Muir commented on SOLR-11858:


My best guess is the high level bug is here: 
[https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.1.0/solr/core/src/java/org/apache/solr/spelling/DirectSolrSpellChecker.java#L185]

Basically, from what I see in the stacktrace, it can happen easily if 
something asks for zero or negative suggestions... and unfortunately you'll get 
a crazy exception instead of a simple IllegalArgumentException. We should add 
some parameter checks to Lucene so that you get a good exception instead.

As far as the Solr code calling it, that's a separate issue... I have no idea, 
because I don't know much about this SpellingOptions, but one of the values in 
question defaults to zero...
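
For illustration, a hedged sketch of a caller-side guard (the helper and variable names 
are assumptions, not the actual DirectSolrSpellChecker code): clamp the requested count 
before handing it to Lucene so a zero or negative value never reaches suggestSimilar.

{code:java}
import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.spell.DirectSpellChecker;
import org.apache.lucene.search.spell.SuggestWord;

final class SafeSuggestCall {
  private SafeSuggestCall() {}

  static SuggestWord[] suggest(DirectSpellChecker checker, Term term,
                               int requestedCount, IndexReader reader) throws IOException {
    // Never ask the checker for fewer than one suggestion.
    int numSug = Math.max(1, requestedCount);
    return checker.suggestSimilar(term, numSug, reader);
  }
}
{code}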

> NPE in DirectSpellChecker
> -
>
> Key: SOLR-11858
> URL: https://issues.apache.org/jira/browse/SOLR-11858
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spellchecker
>Affects Versions: 7.1
>Reporter: Markus Jelsma
>Priority: Major
> Fix For: master (8.0), 7.3
>
>
> We just came across the following NPE. It seems this NPE only appears when 
> the query is incorrectly spelled but the response has more than 0 results. We 
> have not observed this on other 7.1.0 deployments. 
> {code}
> 2018-01-16 09:15:00.009 ERROR (qtp329611835-19) [c] o.a.s.h.RequestHand
> lerBase java.lang.NullPointerException
>  at 
> org.apache.lucene.search.spell.DirectSpellChecker.suggestSimilar(DirectSpellChecker.java:421)
>  at 
> org.apache.lucene.search.spell.DirectSpellChecker.suggestSimilar(DirectSpellChecker.java:353)
>  at 
> org.apache.solr.spelling.DirectSolrSpellChecker.getSuggestions(DirectSolrSpellChecker.java:186)
>  at 
> org.apache.solr.handler.component.SpellCheckComponent.process(SpellCheckComponent.java:195)
>  at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:295)
>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
>  at org.apache.solr.core.SolrCore.execute(SolrCore.java:2484)
>  at 
> {code}
> Config:
> {code}
>  
> text_general
> 
>   default
>   spellcheck
>   solr.DirectSolrSpellChecker
>   internal
>   0.5
>   2
>   1
>   5
>   4
>   0.01
> 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_144) - Build # 1270 - Still Unstable!

2018-01-31 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1270/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseG1GC

3 tests failed.
FAILED:  org.apache.solr.cloud.ReplaceNodeNoTargetTest.test

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([96E1DD2C3A0CCDED:1EB5E2F694F0A015]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.cloud.ReplaceNodeNoTargetTest.test(ReplaceNodeNoTargetTest.java:92)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testSetProperties

Error Message:
Expected 8 triggers but found: [x0, x1, x2, x3, x4, x5, x6] expected:<8> but 
was:<7>

Stack Trace:
java.lang.AssertionError: Expected 8 triggers but found: [x0, x1, x2, x3, x4, 
x5, x6] expected:<8> but was:<7>
at 

[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4417 - Still unstable!

2018-01-31 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4417/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.solr.cloud.DocValuesNotIndexedTest.testGroupingSorting

Error Message:
Should have exactly 4 documents returned expected:<4> but was:<3>

Stack Trace:
java.lang.AssertionError: Should have exactly 4 documents returned expected:<4> 
but was:<3>
at 
__randomizedtesting.SeedInfo.seed([2EB9A973366553B3:3081A17B4ACEE933]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.DocValuesNotIndexedTest.checkSortOrder(DocValuesNotIndexedTest.java:260)
at 
org.apache.solr.cloud.DocValuesNotIndexedTest.testGroupingSorting(DocValuesNotIndexedTest.java:245)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 

[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 937 - Still Failing

2018-01-31 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/937/

No tests ran.

Build Log:
[...truncated 28244 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist
 [copy] Copying 491 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 215 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (32.3 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-8.0.0-src.tgz...
   [smoker] 30.2 MB in 0.02 sec (1248.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-8.0.0.tgz...
   [smoker] 73.0 MB in 0.07 sec (1094.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-8.0.0.zip...
   [smoker] 83.5 MB in 0.07 sec (1114.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-8.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6241 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-8.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6241 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-8.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 212 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.00 sec (101.2 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-8.0.0-src.tgz...
   [smoker] 52.6 MB in 0.20 sec (261.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-8.0.0.tgz...
   [smoker] 151.5 MB in 0.60 sec (251.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-8.0.0.zip...
   [smoker] 152.5 MB in 0.18 sec (866.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-8.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-8.0.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0-java8
   [smoker] *** [WARN] *** Your open file limit is currently 6.  
   [smoker]  It should be set to 65000 to avoid operational disruption. 
   [smoker]  If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS 
to false in your profile or solr.in.sh
   [smoker] *** [WARN] ***  Your Max Processes Limit is currently 10240. 
   [smoker]  It should be set to 65000 to avoid operational disruption. 
   [smoker]  If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS 
to false in your profile or 

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.1) - Build # 21371 - Still Unstable!

2018-01-31 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21371/
Java: 64bit/jdk-9.0.1 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Could not find collection:collection2

Stack Trace:
java.lang.AssertionError: Could not find collection:collection2
at 
__randomizedtesting.SeedInfo.seed([BD05416CA23424DF:35517EB60CC84927]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:140)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:135)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:915)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testIndexingBatchPerRequestWithHttpSolrClient(FullSolrCloudDistribCmdsTest.java:612)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test(FullSolrCloudDistribCmdsTest.java:152)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-8145) UnifiedHighlighter should use single OffsetEnum rather than List

2018-01-31 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347692#comment-16347692
 ] 

David Smiley commented on LUCENE-8145:
--

{quote}but I wonder if we need to bother with the frequency summing?
{quote}
It's debatable. Consider an aggressive MTQ like {{st*}} that hypothetically 
matches a lot of terms that each occur one time. Passages with those terms will 
be scored higher than a term query that matched twice.

It would be cool if we could further affect the passage score by a term's 
string-distance to the automata string. For example, "st" would have its score 
dampened quite a bit if it matched "strangelyLongWord", but only a little for 
"stir". Artificially increasing the frequency would be one way, albeit less 
flexible than some other hook. If we had something like this, it'd probably 
matter less how accurate the frequency is, since I think people would want to 
dampen the score for any MTQ.

Hmmm. What if Passage.setScore remains a simple setter, but we add 
PassageScorer.computeScore(Passage, int contentLength)?  We'd need to expose 
more data from Passage that you added, granted, but it sure adds some 
flexibility!
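
To make that concrete, here is a toy, self-contained sketch of what a computeScore 
hook with an MTQ length-dampening factor might do. None of these names are the 
existing highlighter API; they are just assumptions for the sake of discussion:

{code:java}
/** Toy illustration only -- not the Lucene Passage/PassageScorer API. */
public class PassageScoreSketch {

  /** Rough BM25-ish weight standing in for the highlighter's per-term passage weight. */
  static float weight(int contentLength, int freq) {
    float k1 = 1.2f, b = 0.75f;
    return freq / (freq + k1 * (1 - b + b * contentLength / 1000f));
  }

  /**
   * Hypothetical computeScore(passage, contentLength): sum per-term weights, dampening
   * terms that are much longer than the wildcard prefix that matched them.
   */
  static float computeScore(String[] matchTerms, int[] freqs, int contentLength, String prefix) {
    float score = 0f;
    for (int i = 0; i < matchTerms.length; i++) {
      float dampen = (float) prefix.length() / Math.max(prefix.length(), matchTerms[i].length());
      score += dampen * weight(contentLength, freqs[i]);
    }
    return score;
  }

  public static void main(String[] args) {
    // "st*" matching a short and a long term, each occurring once, in a 500-char document.
    System.out.println(computeScore(new String[] {"stir", "strangelyLongWord"},
        new int[] {1, 1}, 500, "st"));
  }
}
{code}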

CC [~Timothy055]

> UnifiedHighlighter should use single OffsetEnum rather than List
> 
>
> Key: LUCENE-8145
> URL: https://issues.apache.org/jira/browse/LUCENE-8145
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/highlighter
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Attachments: LUCENE-8145.patch
>
>
> The UnifiedHighlighter deals with several different aspects of highlighting: 
> finding highlight offsets, breaking content up into snippets, and passage 
> scoring.  It would be nice to split this up so that consumers can use them 
> separately.
> As a first step, I'd like to change the API of FieldOffsetStrategy to return 
> a single unified OffsetsEnum, rather than a collection of them.  This will 
> make it easier to expose the OffsetsEnum of a document directly from the 
> highlighter, bypassing snippet extraction and scoring.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11858) NPE in DirectSpellChecker

2018-01-31 Thread Markus Jelsma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347659#comment-16347659
 ] 

Markus Jelsma commented on SOLR-11858:
--

Hello Cassandra.

Your steps to reproduce account for the fact that it needs a misspelling and 
more than 0 results. Apparently, that is not the only requirement for the 
problem to occur.

When I reported this issue it was the first time I saw it, and the last time so 
far. In the meantime we have had plenty of cases with a misspelling and more 
than 0 results.

So I must apologize: I have no idea how to reproduce it in any consistent 
manner.

But let's leave it open for now so others can find it.

> NPE in DirectSpellChecker
> -
>
> Key: SOLR-11858
> URL: https://issues.apache.org/jira/browse/SOLR-11858
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spellchecker
>Affects Versions: 7.1
>Reporter: Markus Jelsma
>Priority: Major
> Fix For: master (8.0), 7.3
>
>
> We just came across the following NPE. It seems this NPE only appears when 
> the query is incorrectly spelled but the response has more than 0 results. We 
> have not observed this on other 7.1.0 deployments. 
> {code}
> 2018-01-16 09:15:00.009 ERROR (qtp329611835-19) [c] o.a.s.h.RequestHand
> lerBase java.lang.NullPointerException
>  at 
> org.apache.lucene.search.spell.DirectSpellChecker.suggestSimilar(DirectSpellChecker.java:421)
>  at 
> org.apache.lucene.search.spell.DirectSpellChecker.suggestSimilar(DirectSpellChecker.java:353)
>  at 
> org.apache.solr.spelling.DirectSolrSpellChecker.getSuggestions(DirectSolrSpellChecker.java:186)
>  at 
> org.apache.solr.handler.component.SpellCheckComponent.process(SpellCheckComponent.java:195)
>  at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:295)
>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
>  at org.apache.solr.core.SolrCore.execute(SolrCore.java:2484)
>  at 
> {code}
> Config:
> {code}
> <str name="queryAnalyzerFieldType">text_general</str>
> <lst name="spellchecker">
>   <str name="name">default</str>
>   <str name="field">spellcheck</str>
>   <str name="classname">solr.DirectSolrSpellChecker</str>
>   <str name="distanceMeasure">internal</str>
>   <float name="accuracy">0.5</float>
>   <int name="maxEdits">2</int>
>   <int name="minPrefix">1</int>
>   <int name="maxInspections">5</int>
>   <int name="minQueryLength">4</int>
>   <float name="maxQueryFrequency">0.01</float>
> </lst>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-7966) build mr-jar and use some java 9 methods if available

2018-01-31 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347613#comment-16347613
 ] 

Uwe Schindler edited comment on LUCENE-7966 at 1/31/18 9:38 PM:


FYI, here are my changes to [~mikemccand]'s lucenebench to use JAR files 
instead of class files from a source checkout. It's not complete, it just works. 
Please excuse my ignorance of Python: https://paste.apache.org/V5LA

Without it, you would see a slowdown with Java 9, caused by the additional 
checks! (I tested it)


was (Author: thetaphi):
FYI, here are my changes to [~mikemccand]'s lucenebench to use JAR files 
instead of class files from source checkout. It's not complete, it just works. 
Please neglect my ignorance for Python: https://paste.apache.org/V5LA

Without it, you would see a slowdown with Java added by the additional checks! 
(I tested it)

> build mr-jar and use some java 9 methods if available
> -
>
> Key: LUCENE-7966
> URL: https://issues.apache.org/jira/browse/LUCENE-7966
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/other, general/build
>Reporter: Robert Muir
>Priority: Major
>  Labels: Java9
> Attachments: LUCENE-7966.patch, LUCENE-7966.patch, LUCENE-7966.patch, 
> LUCENE-7966.patch, LUCENE-7966.patch
>
>
> See background: http://openjdk.java.net/jeps/238
> It would be nice to use some of the newer array methods and range checking 
> methods in java 9 for example, without waiting for lucene 10 or something. If 
> we build an MR-jar, we can start migrating our code to use java 9 methods 
> right now, it will use optimized methods from java 9 when thats available, 
> otherwise fall back to java 8 code.  
> This patch adds:
> {code}
> Objects.checkIndex(int,int)
> Objects.checkFromToIndex(int,int,int)
> Objects.checkFromIndexSize(int,int,int)
> Arrays.mismatch(byte[],int,int,byte[],int,int)
> Arrays.compareUnsigned(byte[],int,int,byte[],int,int)
> Arrays.equal(byte[],int,int,byte[],int,int)
> // did not add char/int/long/short/etc but of course its possible if needed
> {code}
> It sets these up in {{org.apache.lucene.future}} as 1-1 mappings to java 
> methods. This way, we can simply directly replace call sites with java 9 
> methods when java 9 is a minimum. Simple 1-1 mappings mean also that we only 
> have to worry about testing that our java 8 fallback methods work.
> I found that many of the current byte array methods today are willy-nilly and 
> very lenient for example, passing invalid offsets at times and relying on 
> compare methods not throwing exceptions, etc. I fixed all the instances in 
> core/codecs but have not looked at the problems with AnalyzingSuggester. Also 
> SimpleText still uses a silly method in ArrayUtil in similar crazy way, have 
> not removed that one yet.
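
For reference, a bare-bones Java 8 stand-in for those Java 9 bounds-check methods 
could look like the sketch below. This is illustrative only, not the attached patch; 
per the description above, the real 1-1 mappings live in {{org.apache.lucene.future}}.

{code:java}
/** Illustrative Java 8 fallbacks mirroring the Java 9 methods listed above (not the patch). */
public final class FutureObjectsSketch {

  private FutureObjectsSketch() {}

  /** Mirrors java.util.Objects#checkIndex(int, int). */
  public static int checkIndex(int index, int length) {
    if (index < 0 || index >= length) {
      throw new IndexOutOfBoundsException("Index " + index + " out of bounds for length " + length);
    }
    return index;
  }

  /** Mirrors java.util.Objects#checkFromToIndex(int, int, int). */
  public static int checkFromToIndex(int fromIndex, int toIndex, int length) {
    if (fromIndex < 0 || fromIndex > toIndex || toIndex > length) {
      throw new IndexOutOfBoundsException(
          "Range [" + fromIndex + ", " + toIndex + ") out of bounds for length " + length);
    }
    return fromIndex;
  }

  /** Mirrors java.util.Objects#checkFromIndexSize(int, int, int). */
  public static int checkFromIndexSize(int fromIndex, int size, int length) {
    if (fromIndex < 0 || size < 0 || fromIndex > length - size) {
      throw new IndexOutOfBoundsException(
          "Range [" + fromIndex + ", " + fromIndex + " + " + size + ") out of bounds for length " + length);
    }
    return fromIndex;
  }
}
{code}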



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7966) build mr-jar and use some java 9 methods if available

2018-01-31 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347613#comment-16347613
 ] 

Uwe Schindler commented on LUCENE-7966:
---

FYI, here are my changes to [~mikemccand]'s lucenebench to use JAR files 
instead of class files from a source checkout. It's not complete, it just works. 
Please excuse my ignorance of Python: https://paste.apache.org/V5LA

Without it, you would see a slowdown with Java 9, caused by the additional checks! 
(I tested it)

> build mr-jar and use some java 9 methods if available
> -
>
> Key: LUCENE-7966
> URL: https://issues.apache.org/jira/browse/LUCENE-7966
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/other, general/build
>Reporter: Robert Muir
>Priority: Major
>  Labels: Java9
> Attachments: LUCENE-7966.patch, LUCENE-7966.patch, LUCENE-7966.patch, 
> LUCENE-7966.patch, LUCENE-7966.patch
>
>
> See background: http://openjdk.java.net/jeps/238
> It would be nice to use some of the newer array methods and range checking 
> methods in java 9 for example, without waiting for lucene 10 or something. If 
> we build an MR-jar, we can start migrating our code to use java 9 methods 
> right now, it will use optimized methods from java 9 when thats available, 
> otherwise fall back to java 8 code.  
> This patch adds:
> {code}
> Objects.checkIndex(int,int)
> Objects.checkFromToIndex(int,int,int)
> Objects.checkFromIndexSize(int,int,int)
> Arrays.mismatch(byte[],int,int,byte[],int,int)
> Arrays.compareUnsigned(byte[],int,int,byte[],int,int)
> Arrays.equal(byte[],int,int,byte[],int,int)
> // did not add char/int/long/short/etc but of course its possible if needed
> {code}
> It sets these up in {{org.apache.lucene.future}} as 1-1 mappings to java 
> methods. This way, we can simply directly replace call sites with java 9 
> methods when java 9 is a minimum. Simple 1-1 mappings mean also that we only 
> have to worry about testing that our java 8 fallback methods work.
> I found that many of the current byte array methods today are willy-nilly and 
> very lenient for example, passing invalid offsets at times and relying on 
> compare methods not throwing exceptions, etc. I fixed all the instances in 
> core/codecs but have not looked at the problems with AnalyzingSuggester. Also 
> SimpleText still uses a silly method in ArrayUtil in similar crazy way, have 
> not removed that one yet.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8145) UnifiedHighlighter should use single OffsetEnum rather than List

2018-01-31 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347602#comment-16347602
 ] 

Alan Woodward commented on LUCENE-8145:
---

Thanks for the review David!  I'll put up another patch shortly with your 
suggestions.

re Automata - I agree that we can replace CompositeOffsetsPostingsEnum, but I 
wonder if we need to bother with the frequency summing?  It would make more 
sense I think to preserve the freqs of the individual term matches, so that a 
rarer term is more relevant than a more frequent one.  We don't do this with 
wildcard queries in general because of performance, but that's not an issue 
here.

Passage is heavier now, but the objects are re-used, and only n-fragments + 1 
are built for each highlighted doc, so I'm not too concerned.

> UnifiedHighlighter should use single OffsetEnum rather than List
> 
>
> Key: LUCENE-8145
> URL: https://issues.apache.org/jira/browse/LUCENE-8145
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/highlighter
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Attachments: LUCENE-8145.patch
>
>
> The UnifiedHighlighter deals with several different aspects of highlighting: 
> finding highlight offsets, breaking content up into snippets, and passage 
> scoring.  It would be nice to split this up so that consumers can use them 
> separately.
> As a first step, I'd like to change the API of FieldOffsetStrategy to return 
> a single unified OffsetsEnum, rather than a collection of them.  This will 
> make it easier to expose the OffsetsEnum of a document directly from the 
> highlighter, bypassing snippet extraction and scoring.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7966) build mr-jar and use some java 9 methods if available

2018-01-31 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347599#comment-16347599
 ] 

Uwe Schindler commented on LUCENE-7966:
---

Hi,
I did some comparison benchmarks using Mike's benchmark tool (luceneutil). In 
general the performance difference between Java 8 and Java 9 is negligible 
(more about this in my talk next week in London), if you use the usual Parallel 
or CMS GC. Some queries tend to be slower on Java 9. I also compared this patch:

Java 9 without patch and with patch:

{noformat}
                   Task    QPS orig_j9  StdDev    QPS patch_j9  StdDev    Pct diff
                 IntNRQ           5.97  (8.8%)            5.76  (8.4%)   -3.7% ( -19% -  14%)
                Prefix3          59.77  (7.3%)           58.59  (7.5%)   -2.0% ( -15% -  13%)
               Wildcard          18.62  (5.3%)           18.38  (5.9%)   -1.3% ( -11% -  10%)
           HighSpanNear          13.04  (4.4%)           12.93  (4.9%)   -0.9% (  -9% -   8%)
            MedSpanNear          11.36  (3.9%)           11.27  (4.3%)   -0.8% (  -8% -   7%)
                Respell          51.17  (2.1%)           50.79  (1.6%)   -0.7% (  -4% -   3%)
               PKLookup         256.20  (5.4%)          255.83  (5.9%)   -0.1% ( -10% -  11%)
                 Fuzzy1          24.12  (2.8%)           24.09  (2.3%)   -0.1% (  -5% -   5%)
            LowSpanNear          10.38  (2.1%)           10.37  (2.2%)   -0.1% (  -4% -   4%)
              MedPhrase          27.76  (1.9%)           27.74  (1.9%)   -0.1% (  -3% -   3%)
                 Fuzzy2          70.57  (1.8%)           70.59  (1.6%)    0.0% (  -3% -   3%)
             HighPhrase          14.21  (2.2%)           14.22  (2.4%)    0.1% (  -4% -   4%)
            AndHighHigh          34.11  (1.2%)           34.15  (0.7%)    0.1% (  -1% -   2%)
              LowPhrase          15.98  (1.7%)           16.01  (1.6%)    0.2% (  -3% -   3%)
           OrNotHighLow         531.86  (3.5%)          534.36  (3.3%)    0.5% (  -6% -   7%)
             AndHighMed         170.44  (1.2%)          171.46  (1.2%)    0.6% (  -1% -   3%)
           OrNotHighMed         206.78  (1.8%)          208.06  (2.2%)    0.6% (  -3% -   4%)
              OrHighMed          20.61  (5.2%)           20.76  (4.3%)    0.7% (  -8% -  10%)
             OrHighHigh          11.07  (5.6%)           11.17  (4.6%)    0.9% (  -8% -  11%)
          OrHighNotHigh          24.57  (4.0%)           24.80  (4.7%)    0.9% (  -7% -   9%)
           OrHighNotMed          50.41  (4.0%)           50.88  (5.1%)    0.9% (  -7% -  10%)
                LowTerm         202.23  (2.4%)          204.33  (3.4%)    1.0% (  -4% -   7%)
             AndHighLow         745.13  (3.1%)          753.23  (2.9%)    1.1% (  -4% -   7%)
          OrNotHighHigh          12.48  (4.1%)           12.63  (5.2%)    1.2% (  -7% -  10%)
        LowSloppyPhrase           3.79  (5.3%)            3.85  (5.5%)    1.5% (  -8% -  13%)
       HighSloppyPhrase          10.58  (4.0%)           10.74  (4.3%)    1.5% (  -6% -  10%)
           OrHighNotLow          18.46  (4.4%)           18.75  (5.3%)    1.6% (  -7% -  11%)
        MedSloppyPhrase          28.88  (4.4%)           29.35  (4.8%)    1.6% (  -7% -  11%)
              OrHighLow          15.26  (3.0%)           15.54  (3.0%)    1.9% (  -4% -   8%)
  HighTermDayOfYearSort          19.83  (6.6%)           20.25  (8.1%)    2.1% ( -11% -  17%)
                MedTerm          64.23  (5.0%)           65.64  (7.0%)    2.2% (  -9% -  14%)
               HighTerm          40.05  (5.4%)           41.02  (7.6%)    2.4% ( -10% -  16%)
      HighTermMonthSort          87.80 (12.5%)           91.28 (12.3%)    4.0% ( -18% -  32%)
{noformat}

So it does not hurt performance, although it adds additional checks that ensure 
index consistency! Thanks Robert for exploring the parts in code where bounds 
checks were missing! As you see, especially the "sorting" stuff got a slight 
reproducible improvement (although stddev is still large!). This might be 
related to optimized bounds checking code when reading docvalues and 
bytebuffers.

I also compared Java 8 to be safe:

{noformat}
                   Task    QPS orig_j8  StdDev    QPS patch_j8  StdDev    Pct diff
  HighTermDayOfYearSort          23.65  (9.1%)           22.98  (6.6%)   -2.8% ( -17% -  14%)
              OrHighMed           9.87  (4.3%)            9.76  (2.9%)   -1.0% (  -7% -   6%)
             OrHighHigh          11.85  (4.1%)           11.73  (2.8%)   -1.0% (  -7% -   6%)
            MedSpanNear         149.54  (4.2%)          148.97  (3.7%)   -0.4% (  -7% -   7%)
       HighSloppyPhrase           0.43  (5.4%)            0.43  (6.1%)    0.0% ( -10% -  12%)

[jira] [Updated] (SOLR-11933) DIH gui shouldn't have "clean" be checked by default

2018-01-31 Thread Eric Pugh (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Pugh updated SOLR-11933:
-
Attachment: fef1d06a2eb15a0fd36eb91124af413a19d95528.diff

> DIH gui shouldn't have "clean" be checked by default
> 
>
> Key: SOLR-11933
> URL: https://issues.apache.org/jira/browse/SOLR-11933
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - DataImportHandler
>Affects Versions: 7.2
>Reporter: Eric Pugh
>Priority: Minor
> Fix For: master (8.0)
>
> Attachments: fef1d06a2eb15a0fd36eb91124af413a19d95528.diff
>
>
> The DIH webapp by default has the "clean" checkbox enabled.   Clean is very 
> dangerous because you delete all the data first, and then load the data.   
> Making this the default choice is bad UX.  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #316: make DIH safer by not default checking the cl...

2018-01-31 Thread epugh
GitHub user epugh opened a pull request:

https://github.com/apache/lucene-solr/pull/316

make DIH safer by not default checking the clean checkbox



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/epugh/lucene-solr solr-11933

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/316.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #316


commit fef1d06a2eb15a0fd36eb91124af413a19d95528
Author: epugh 
Date:   2018-01-31T20:41:18Z

make DIH safer by not default checking the clean checkbox




---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11933) DIH gui shouldn't have "clean" be checked by default

2018-01-31 Thread Eric Pugh (JIRA)
Eric Pugh created SOLR-11933:


 Summary: DIH gui shouldn't have "clean" be checked by default
 Key: SOLR-11933
 URL: https://issues.apache.org/jira/browse/SOLR-11933
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: contrib - DataImportHandler
Affects Versions: 7.2
Reporter: Eric Pugh
 Fix For: master (8.0)


The DIH webapp by default has the "clean" checkbox enabled.   Clean is very 
dangerous because you delete all the data first, and then load the data.   
Making this the default choice is bad UX.  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Issue Comment Deleted] (SOLR-11880) Avoid creating new exceptions for every request made via MDCAwareThreadPoolExecutor

2018-01-31 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-11880:
--
Comment: was deleted

(was: Commit 32ca9cf4d83731511c0cdfa073659247959677cc in lucene-solr's branch 
refs/heads/branch_7x from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=32ca9cf ]

SOLR-11880: ref-guide
)

> Avoid creating new exceptions for every request made via 
> MDCAwareThreadPoolExecutor
> ---
>
> Key: SOLR-11880
> URL: https://issues.apache.org/jira/browse/SOLR-11880
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Noble Paul
>Priority: Minor
>
> MDCAwareThreadPoolExecutor has this line in its {{execute}} method:
>  
> {code:java}
> final Exception submitterStackTrace = new Exception("Submitter stack trace");
> {code}
> This means that every call via the thread pool will create this exception, 
> and only when it sees an error will it be used. 
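
One possible direction (purely a sketch of the idea, not the committed fix; the class 
and flag names below are made up) is to only pay for that stack trace when it is 
explicitly enabled:

{code:java}
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

/** Sketch: defer new Exception("Submitter stack trace") behind a debug flag. */
public class LazySubmitterTraceExecutor extends ThreadPoolExecutor {

  // Hypothetical flag; off by default, so the common path allocates no exception.
  private static final boolean CAPTURE_SUBMITTER_TRACE =
      Boolean.getBoolean("solr.captureSubmitterStackTrace");

  public LazySubmitterTraceExecutor(int corePoolSize, int maxPoolSize) {
    super(corePoolSize, maxPoolSize, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
  }

  @Override
  public void execute(Runnable command) {
    final Exception submitterStackTrace =
        CAPTURE_SUBMITTER_TRACE ? new Exception("Submitter stack trace") : null;
    super.execute(() -> {
      try {
        command.run();
      } catch (Throwable t) {
        if (submitterStackTrace != null) {
          t.addSuppressed(submitterStackTrace); // attach the submitter trace only on failure
        }
        throw t;
      }
    });
  }
}
{code}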



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Issue Comment Deleted] (SOLR-11880) Avoid creating new exceptions for every request made via MDCAwareThreadPoolExecutor

2018-01-31 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-11880:
--
Comment: was deleted

(was: Commit 0b0e8e5e7a67362c5757c1df4cee249ad193b51b in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0b0e8e5 ]

SOLR-11880: ref-guide
)

> Avoid creating new exceptions for every request made via 
> MDCAwareThreadPoolExecutor
> ---
>
> Key: SOLR-11880
> URL: https://issues.apache.org/jira/browse/SOLR-11880
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Noble Paul
>Priority: Minor
>
> MDCAwareThreadPoolExecutor has this line in its {{execute}} method:
>  
> {code:java}
> final Exception submitterStackTrace = new Exception("Submitter stack trace");
> {code}
> This means that every call via the thread pool will create this exception, 
> and only when it sees an error will it be used. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11932) ZkCmdExecutor: Retry ZkOperation on SessionExpired

2018-01-31 Thread John Gallagher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Gallagher updated SOLR-11932:
--
Summary: ZkCmdExecutor: Retry ZkOperation on SessionExpired   (was: 
ZkCmdExector: Retry ZkOperation on SessionExpired )

> ZkCmdExecutor: Retry ZkOperation on SessionExpired 
> ---
>
> Key: SOLR-11932
> URL: https://issues.apache.org/jira/browse/SOLR-11932
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2
>Reporter: John Gallagher
>Priority: Major
> Attachments: SessionExpiredLog.txt, zk_retry.patch
>
>
> We are seeing situations where an operation, such as changing a replica's 
> state to active after a recovery, fails because the zk session has expired.
> However, these operations seem like they are retryable, because the 
> ZookeeperConnect receives an event that the session expired and tries to 
> reconnect.
> That makes the SessionExpired handling scenario seem very similar to the 
> ConnectionLoss handling scenario, so the ZkCmdExecutor seems like it could 
> handle them in the same way.
>  
> Here's an example stack trace with some slight redactions: 
> [^SessionExpiredLog.txt]  In this case, a zk operation (a read) failed with a 
> SessionExpired event, which seems retriable.  The exception kicked off a 
> reconnection, but seems like the subsequent operation, (publishing as active) 
> failed (perhaps it was using a stale connection handle at that point?)
>  
> Regardless, the watch mechanism that reestablishes connection on 
> SessionExpired seems sufficient to allow the ZkCmdExecutor to retry that 
> operation at a later time and have hope of succeeding.
>  
> I have included a simple patch we are trying that catches both exceptions 
> instead of just ConnectionLossException: [^zk_retry.patch]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11067) REPLACENODE should make it optional to provide a target node

2018-01-31 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347549#comment-16347549
 ] 

Noble Paul commented on SOLR-11067:
---

Commit 32ca9cf4d83731511c0cdfa073659247959677cc in lucene-solr's branch 
refs/heads/branch_7x from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=32ca9cf ]

SOLR-11880: ref-guide


> REPLACENODE should make it optional to provide a target node
> 
>
> Key: SOLR-11067
> URL: https://issues.apache.org/jira/browse/SOLR-11067
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-11067.patch
>
>
> The REPLACENODE API currently accepts a replacement target and moves all 
> replicas from the source to the given target. We can improve this by having 
> it figure out the right target node for each replica contained in the source.
> This can also then be a thin wrapper over nodeLost event just like how 
> UTILIZENODE (SOLR-9743) can be a wrapper over nodeAdded event.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11880) Avoid creating new exceptions for every request made via MDCAwareThreadPoolExecutor

2018-01-31 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347545#comment-16347545
 ] 

Noble Paul commented on SOLR-11880:
---

[~ctargett] right. this is what happens when you work on multiple tickets at 
the same time ;)

> Avoid creating new exceptions for every request made via 
> MDCAwareThreadPoolExecutor
> ---
>
> Key: SOLR-11880
> URL: https://issues.apache.org/jira/browse/SOLR-11880
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Noble Paul
>Priority: Minor
>
> MDCAwareThreadPoolExecutor has this line in its {{execute}} method:
>  
> {code:java}
> final Exception submitterStackTrace = new Exception("Submitter stack trace");
> {code}
> This means that every call via the thread pool will create this exception, 
> and only when it sees an error will it be used. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11067) REPLACENODE should make it optional to provide a target node

2018-01-31 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347546#comment-16347546
 ] 

Noble Paul commented on SOLR-11067:
---

Commit 0b0e8e5e7a67362c5757c1df4cee249ad193b51b in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0b0e8e5 ]

SOLR-11880: ref-guide


> REPLACENODE should make it optional to provide a target node
> 
>
> Key: SOLR-11067
> URL: https://issues.apache.org/jira/browse/SOLR-11067
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-11067.patch
>
>
> The REPLACENODE API currently accepts a replacement target and moves all 
> replicas from the source to the given target. We can improve this by having 
> it figure out the right target node for each replica contained in the source.
> This can also then be a thin wrapper over nodeLost event just like how 
> UTILIZENODE (SOLR-9743) can be a wrapper over nodeAdded event.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11932) ZkCmdExector: Retry ZkOperation on SessionExpired

2018-01-31 Thread John Gallagher (JIRA)
John Gallagher created SOLR-11932:
-

 Summary: ZkCmdExector: Retry ZkOperation on SessionExpired 
 Key: SOLR-11932
 URL: https://issues.apache.org/jira/browse/SOLR-11932
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 7.2
Reporter: John Gallagher
 Attachments: SessionExpiredLog.txt, zk_retry.patch

We are seeing situations where an operation, such as changing a replica's state 
to active after a recovery, fails because the zk session has expired.

However, these operations seem like they are retryable, because the 
ZookeeperConnect receives an event that the session expired and tries to 
reconnect.

That makes the SessionExpired handling scenario seem very similar to the 
ConnectionLoss handling scenario, so the ZkCmdExecutor seems like it could 
handle them in the same way.

 

Here's an example stack trace with some slight redactions: 
[^SessionExpiredLog.txt]  In this case, a zk operation (a read) failed with a 
SessionExpired event, which seems retryable.  The exception kicked off a 
reconnection, but it seems the subsequent operation (publishing as active) 
failed (perhaps it was using a stale connection handle at that point?)

 

Regardless, the watch mechanism that reestablishes connection on SessionExpired 
seems sufficient to allow the ZkCmdExecutor to retry that operation at a later 
time and have hope of succeeding.

 

I have included a simple patch we are trying that catches both exceptions 
instead of just ConnectionLossException: [^zk_retry.patch]
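
For readers skimming the thread, the shape of the change is roughly the following 
sketch (illustrative only; the ZkOperation interface here is a stand-in, and the 
real code is ZkCmdExecutor plus the attached [^zk_retry.patch]):

{code:java}
import org.apache.zookeeper.KeeperException;

/** Sketch of retrying a ZooKeeper operation on connection loss AND session expiry. */
public class ZkRetrySketch {

  /** Minimal stand-in for the operation being retried; illustrative only. */
  public interface ZkOperation<T> {
    T execute() throws KeeperException, InterruptedException;
  }

  public static <T> T retryOperation(ZkOperation<T> operation, int retryCount, long retryDelayMs)
      throws KeeperException, InterruptedException {
    KeeperException lastException = null;
    for (int attempt = 0; attempt < retryCount; attempt++) {
      try {
        return operation.execute();
      } catch (KeeperException.ConnectionLossException
          | KeeperException.SessionExpiredException e) {
        // Both look retryable: the client tries to reconnect / re-establish the session,
        // so back off and try the operation again on the (hopefully) fresh handle.
        lastException = e;
        Thread.sleep(retryDelayMs * (attempt + 1));
      }
    }
    throw lastException;
  }
}
{code}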



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8146) Unit tests using StringHelper fail with ExceptionInInitializerError for maven surefire >= 2.18

2018-01-31 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347537#comment-16347537
 ] 

Dawid Weiss commented on LUCENE-8146:
-

Btw. I agree with Robert that the simplest fix (for now at least) is to adjust 
StringUtils to what he suggested (this is consistent with what randomized 
runner does too).

https://github.com/randomizedtesting/randomizedtesting/blob/master/randomized-runner/src/main/java/com/carrotsearch/randomizedtesting/RandomizedRunner.java#L349
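
In code, the centralized accessor idea amounts to something like this sketch 
(illustrative, not the attached patch):

{code:java}
/** Sketch: read tests.seed once, treating an empty value (as newer surefire sends) as absent. */
public final class TestSeedSketch {

  private TestSeedSketch() {}

  /** Returns the configured seed, or null if the property is unset or empty. */
  public static String getTestSeed() {
    String seed = System.getProperty("tests.seed");
    if (seed == null || seed.trim().isEmpty()) {
      // Empty means "no seed was given": fall back to a random seed instead of
      // blowing up in a static initializer with ExceptionInInitializerError.
      return null;
    }
    return seed;
  }
}
{code}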

> Unit tests using StringHelper fail with ExceptionInInitializerError for maven 
> surefire >= 2.18
> --
>
> Key: LUCENE-8146
> URL: https://issues.apache.org/jira/browse/LUCENE-8146
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.2.1
>Reporter: Julien Massenet
>Priority: Minor
> Attachments: LUCENE-8146-seed_issue.tar.gz, LUCENE-8146_v1.patch
>
>
> This happens when multiple conditions are met:
>  * The client code is built with Maven
>  * To execute its unit tests, the client code relies on the 
> {{maven-surefire-plugin}}, with a version greater than 2.17 (last working 
> version)
>  * The client code uses the {{org.apache.lucene.util.StringHelper}} class 
> (even transitively)
>  * The client is configured as with the standard Lucene maven build (i.e. it 
> is possible to fix the test seed using the {{tests.seed}} property)
> There was a change in Surefire's behavior starting with 2.18: when a property 
> is empty, instead of not sending it to the test runner, it will be sent with 
> an empty value.
> This behavior can be observed with the attached sample project:
>  * {{mvn test}}: fails with a {{java.lang.ExceptionInInitializerError}}
>  * {{mvn test -Dtests.seed=123456}}: succeeds because the property is set to 
> a real value
>  * {{mvn test -Dsurefire.version=2.17}}: succeeds because the surefire 
> version is lower than 2.18
> Attached is a patch (built against \{{branch_7x}}) that centralizes accesses 
> to the {{tests.seed}} system property; it also makes sure that if it is 
> empty, it is treated as absent.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11066) Implement a scheduled trigger

2018-01-31 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347521#comment-16347521
 ] 

Shalin Shekhar Mangar commented on SOLR-11066:
--

Thanks David, Gus and Andrzej!

The latest patch has the following changes:
# graceTime is renamed to graceDuration
# The trigger accepts a startTime which must be an ISO-8601 date time string 
ending with the 'Z' (signalling UTC) or without 'Z'. If the date time is 
without 'Z' then a time zone must be specified.
# The date math is done using the given time zone or UTC if none given
# The calculations in the trigger are all done using the Instant class
# The configuration stored in ZK is exactly what the user provided while 
creating the API.

bq. It's not clear to me looking at the patch how these ScheduledTriggers are 
created.  Can you please explain?  I was anticipating some new API call to 
create the trigger.

The trigger is created using the existing set-trigger API so no API changes 
were made. This is also the reason behind keeping the configuration in ZK the 
same as what was given by the user. The alternative would have been for the API 
to know that scheduled trigger is special and it may require modifying the 
provided startTime value to UTC. The downside to the current approach is that 
there is no validation of the properties until a Trigger is instantiated or 
initialized.

I'm going to open another issue to add a general validation API to Trigger 
which can be used for validating input configuration to avoid set-trigger calls 
with bad input succeeding.
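
For clarity, the startTime handling described in the list above boils down to 
something like this sketch (not the patch itself; only the parameter names 
discussed here are assumed):

{code:java}
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

/** Sketch of resolving a scheduled-trigger startTime to an Instant. */
public class StartTimeSketch {

  /**
   * A startTime ending with 'Z' is taken as UTC; otherwise a time zone must be
   * supplied and the local date-time is interpreted in that zone.
   */
  public static Instant parseStartTime(String startTime, String timeZone) {
    if (startTime.endsWith("Z")) {
      return Instant.parse(startTime); // e.g. "2018-01-31T00:00:00Z"
    }
    if (timeZone == null) {
      throw new IllegalArgumentException("timeZone is required when startTime has no 'Z'");
    }
    LocalDateTime local = LocalDateTime.parse(startTime, DateTimeFormatter.ISO_LOCAL_DATE_TIME);
    return local.atZone(ZoneId.of(timeZone)).toInstant();
  }

  public static void main(String[] args) {
    System.out.println(parseStartTime("2018-01-31T00:00:00Z", null));
    System.out.println(parseStartTime("2018-01-31T00:00:00", "America/Chicago"));
  }
}
{code}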

> Implement a scheduled trigger
> -
>
> Key: SOLR-11066
> URL: https://issues.apache.org/jira/browse/SOLR-11066
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-11066.patch, SOLR-11066.patch, SOLR-11066.patch
>
>
> Implement a trigger that runs on a fixed interval say every 1 hour or every 
> 24 hours starting at midnight etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8144) Remove QueryCachingPolicy.ALWAYS_CACHE

2018-01-31 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347513#comment-16347513
 ] 

Robert Muir commented on LUCENE-8144:
-

+1

> Remove QueryCachingPolicy.ALWAYS_CACHE
> --
>
> Key: LUCENE-8144
> URL: https://issues.apache.org/jira/browse/LUCENE-8144
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
>
> This one is trappy: it looks simple and cool, but caching without evidence of 
> reuse is usually a bad idea as it removes the ability to skip over 
> non-interesting documents.
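
For anyone relying on ALWAYS_CACHE today, the non-trappy setup is to let a 
usage-tracking policy decide what gets cached, e.g. (a sketch; the index path and 
cache sizes are made up, and constructor arguments may differ slightly by version):

{code:java}
import java.nio.file.Paths;

import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.LRUQueryCache;
import org.apache.lucene.search.UsageTrackingQueryCachingPolicy;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

/** Sketch: prefer a usage-tracking caching policy over unconditional caching. */
public class QueryCachePolicySketch {
  public static void main(String[] args) throws Exception {
    try (Directory dir = FSDirectory.open(Paths.get("/path/to/index")); // hypothetical path
         DirectoryReader reader = DirectoryReader.open(dir)) {
      IndexSearcher searcher = new IndexSearcher(reader);
      // Cache up to 1000 queries / 64 MB, but only queries the policy has seen reused.
      searcher.setQueryCache(new LRUQueryCache(1000, 64 * 1024 * 1024));
      searcher.setQueryCachingPolicy(new UsageTrackingQueryCachingPolicy());
      // ... run searches as usual; one-off queries keep their ability to skip documents.
    }
  }
}
{code}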



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11066) Implement a scheduled trigger

2018-01-31 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-11066:
-
Attachment: SOLR-11066.patch

> Implement a scheduled trigger
> -
>
> Key: SOLR-11066
> URL: https://issues.apache.org/jira/browse/SOLR-11066
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-11066.patch, SOLR-11066.patch, SOLR-11066.patch
>
>
> Implement a trigger that runs on a fixed interval say every 1 hour or every 
> 24 hours starting at midnight etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 2289 - Still Unstable

2018-01-31 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2289/

7 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.analytics.facet.PivotFacetTest

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.analytics.facet.PivotFacetTest: 1) Thread[id=59, 
name=qtp88290436-59, state=TIMED_WAITING, group=TGRP-PivotFacetTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.analytics.facet.PivotFacetTest: 
   1) Thread[id=59, name=qtp88290436-59, state=TIMED_WAITING, 
group=TGRP-PivotFacetTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([8AC9E657FA81F3EA]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.analytics.facet.PivotFacetTest

Error Message:
There are still zombie threads that couldn't be terminated:1) Thread[id=59, 
name=qtp88290436-59, state=TIMED_WAITING, group=TGRP-PivotFacetTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=59, name=qtp88290436-59, state=TIMED_WAITING, 
group=TGRP-PivotFacetTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([8AC9E657FA81F3EA]:0)


FAILED:  org.apache.solr.cloud.AliasIntegrationTest.testModifyMetadataCAR

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([443E3E06A86FE42E:586C7F4D15B44476]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.cloud.AliasIntegrationTest.checkFooAndBarMeta(AliasIntegrationTest.java:283)
at 
org.apache.solr.cloud.AliasIntegrationTest.testModifyMetadataCAR(AliasIntegrationTest.java:262)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 

[jira] [Commented] (LUCENE-8145) UnifiedHighlighter should use single OffsetEnum rather than List

2018-01-31 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347433#comment-16347433
 ] 

David Smiley commented on LUCENE-8145:
--

*This is a really nice improvement Alan!*  I once thought about replacing the 
List with a PriorityQueue-based one as you did here, but I didn't 
follow through because I didn't know how the OffsetsEnum.weight would 
accommodate that.  Here you've shifted that tracking to the Passage.

We can go further with your refactoring -- making CompositeOffsetsPostingsEnum 
obsolete.  Delete it and then change 
FieldOffsetStrategy.createOffsetsEnumsForAutomata so that its final loop looks 
like this:
{code}
for (int i = 0; i < automata.length; i++) {
  CharacterRunAutomaton automaton = automata[i];
  List<PostingsEnum> postingsEnums = automataPostings.get(i);
  if (postingsEnums.isEmpty()) {
    continue;
  }
  // Build one OffsetsEnum exposing the automata.toString as the term, and the sum of freq
  BytesRef wildcardTerm = new BytesRef(automaton.toString());
  int sumFreq = 0;
  for (PostingsEnum postingsEnum : postingsEnums) {
    sumFreq += postingsEnum.freq();
  }
  for (PostingsEnum postingsEnum : postingsEnums) {
    results.add(new OffsetsEnum.OfPostings(wildcardTerm, sumFreq, postingsEnum));
  }
}
{code}
And then overload the OfPostings constructor to take a manual "freq". Or 
subclass OfPostings in kind (e.g. OfPostingsWithFreq could even be an inner 
class; I played with this a little, and it was annoying that sumFreq couldn't 
be final, but no biggie).
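
Roughly what that could look like, based on the loop above (the signatures here are 
assumed from this discussion, not the final API):

{code:java}
// Hypothetical inner class of FieldOffsetStrategy: an OfPostings that reports a
// caller-supplied freq (e.g. the summed freq of all terms matched by the automaton).
private static class OfPostingsWithFreq extends OffsetsEnum.OfPostings {
  private final int freq;

  OfPostingsWithFreq(BytesRef term, int freq, PostingsEnum postingsEnum) throws IOException {
    super(term, postingsEnum);
    this.freq = freq;
  }

  @Override
  public int freq() throws IOException {
    return freq;
  }
}
{code}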

Unfortunately it seems we have no test that ensures the "freq" is correct for a 
highlighted wildcard query whose wildcard matches multiple terms of varying 
frequencies each in the document.  We should add such a test.

I definitely like how FieldHighlighter is simpler.  You can go further and 
remove FieldHighlighter.EMPTY too -- Rob had used that to simplify the queue 
initial state logic that is now obsoleted with your change (you chose a boolean 
"started" flag instead).

It's a shame Passage is now fairly heavy with a BytesRefHash on it.  I want to 
think about that a bit more.

The first place maybeAddPassage is called, it's guarded by an if condition. But 
that if condition can be removed as it is redundant with the same logic that 
maybeAddPassage starts with.  You should copy along the comment that explains 
what -1 means.

Nitpicks:

FieldHighlighter:
* highlightOffsetsEnums:  Add braces to the "if" block that returns early.  The 
"while" of do-while should be on the same line as the close bracket.
* maybeAddPassage: passage ought to be the first parameter IMO.  And add braces 
to the "if" block.

FieldOffsetStrategy: the javadocs on getOffsetsEnum should not say "remember to 
close them *all*" since it just returns one.  so maybe "remember to close it"

MultiOffsetsEnum.close: I see it calls close on all the remaining OffsetsEnums 
on the queue... but at this point it's likely empty.  Based on our 
implementations of OffsetsEnum this won't be an issue but I think it's bad to 
leave it this way.  I think nextPosition could be modified to close the "top" 
item when it reaches the end.  close would then have a comment to mentioned the 
others have been closed already in nextPosition.

TestUnifiedHighlighterExtensibility: you removed calling p.setScore but I think 
we want to ensure all such methods are exposed thus enabling someone to fully 
use this Passage.

> UnifiedHighlighter should use single OffsetEnum rather than List
> 
>
> Key: LUCENE-8145
> URL: https://issues.apache.org/jira/browse/LUCENE-8145
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/highlighter
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Attachments: LUCENE-8145.patch
>
>
> The UnifiedHighlighter deals with several different aspects of highlighting: 
> finding highlight offsets, breaking content up into snippets, and passage 
> scoring.  It would be nice to split this up so that consumers can use them 
> separately.
> As a first step, I'd like to change the API of FieldOffsetStrategy to return 
> a single unified OffsetsEnum, rather than a collection of them.  This will 
> make it easier to expose the OffsetsEnum of a document directly from the 
> highlighter, bypassing snippet extraction and scoring.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11715) Spatial Search ref-guide fixes

2018-01-31 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11715:
-
Component/s: documentation

> Spatial Search ref-guide fixes
> --
>
> Key: SOLR-11715
> URL: https://issues.apache.org/jira/browse/SOLR-11715
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Ishan Chattopadhyaya
>Priority: Major
> Attachments: SOLR-11715.patch, SOLR-11715.patch
>
>
> Was doing a demo of spatial search and ran into the following problems:
> # Some blocks of code were not formatted properly, hence losing the {{*:*}} 
> parameters. Example: 
> https://lucene.apache.org/solr/guide/6_6/spatial-search.html#SpatialSearch-bbox
> # The query mentioned for geodist section didn't actually return the scores. 
> Need to add an fl parameter there. 
> https://lucene.apache.org/solr/guide/6_6/spatial-search.html#SpatialSearch-geodist



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11732) Solr 5.5,6.6 spellchecker does not return the same response in single character query cases

2018-01-31 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11732:
-
Affects Version/s: 7.2

> Solr 5.5,6.6 spellchecker does not return the same response in single 
> character query cases
> ---
>
> Key: SOLR-11732
> URL: https://issues.apache.org/jira/browse/SOLR-11732
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spellchecker
>Affects Versions: 5.5, 6.6, 7.2
>Reporter: Evan Fagerberg
>Priority: Minor
> Attachments: Screen Shot 2017-12-06 at 7.03.24 PM.png, Screen Shot 
> 2017-12-06 at 7.09.33 PM.png
>
>
> When running the getting started example of solr 5.5 and 6.6 I noticed a 
> peculiar behavior that occurs for single character queries under the /spell 
> requestHandler.
> When searching for a single term, the response from Solr does not have a 
> spellcheck section, whereas a two character query does.
> This seems to be independent of things like minPrefix (1 by default) and 
> minQueryLength (4 by default).
> I would expect any number of characters to return a spellcheck section.
> I first came across this when trying to upgrade a solr plugin that had tests 
> using single queries to assert suggestion counts.
> I have included screenshots of the response.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11732) Solr 5.5,6.6 spellchecker does not return the same response in single character query cases

2018-01-31 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347428#comment-16347428
 ] 

Cassandra Targett commented on SOLR-11732:
--

I can reproduce this with 7.2, so it's not only a problem with 5.x and 6.x.

> Solr 5.5,6.6 spellchecker does not return the same response in single 
> character query cases
> ---
>
> Key: SOLR-11732
> URL: https://issues.apache.org/jira/browse/SOLR-11732
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spellchecker
>Affects Versions: 5.5, 6.6, 7.2
>Reporter: Evan Fagerberg
>Priority: Minor
> Attachments: Screen Shot 2017-12-06 at 7.03.24 PM.png, Screen Shot 
> 2017-12-06 at 7.09.33 PM.png
>
>
> When running the getting started example of solr 5.5 and 6.6 I noticed a 
> peculiar behavior that occurs for single character queries under the /spell 
> requestHandler.
> When searching for a single term, the response from Solr does not have a 
> spellcheck section, whereas a two character query does.
> This seems to be independent of things like minPrefix (1 by default) and 
> minQueryLength (4 by default).
> I would expect any number of characters to return a spellcheck section.
> I first came across this when trying to upgrade a solr plugin that had tests 
> using single queries to assert suggestion counts.
> I have included screenshots of the response.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11739) Solr can accept duplicated async IDs

2018-01-31 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11739:
-
Component/s: SolrCloud

> Solr can accept duplicated async IDs
> 
>
> Key: SOLR-11739
> URL: https://issues.apache.org/jira/browse/SOLR-11739
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-11739.patch, SOLR-11739.patch
>
>
> Solr is supposed to reject duplicated async IDs; however, if the repeated IDs 
> are sent fast enough, a race condition in Solr will let the repeated IDs 
> through. The duplicated task is run and then silently fails to report as 
> completed because the same async ID is already in the completed map. 
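
For illustration, a SolrJ sketch of the kind of back-to-back submission that can 
hit this race (the collection names, config name, and shared async id below are 
made up for illustration):

{code:java}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;

public class DuplicateAsyncIdSketch {
  public static void main(String[] args) throws Exception {
    try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      String asyncId = "same-id-twice";  // hypothetical async id, reused on purpose
      // The first submission is accepted and tracked under asyncId.
      CollectionAdminRequest.createCollection("c1", "_default", 1, 1)
          .processAsync(asyncId, client);
      // A second submission with the same id should be rejected, but if it
      // arrives quickly enough the race described above can let it through.
      CollectionAdminRequest.createCollection("c2", "_default", 1, 1)
          .processAsync(asyncId, client);
      // REQUESTSTATUS only reflects the task stored under this id, so the
      // duplicate completes (or fails) silently.
      System.out.println(CollectionAdminRequest.requestStatus(asyncId).process(client));
    }
  }
}
{code}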



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11761) Query parsing with comments fail in org.apache.solr.parser.QueryParser

2018-01-31 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11761:
-
Component/s: query parsers

> Query parsing with comments fail in org.apache.solr.parser.QueryParser
> --
>
> Key: SOLR-11761
> URL: https://issues.apache.org/jira/browse/SOLR-11761
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 6.2.1, master (8.0), 6.6.2
> Environment: Java 1.8
> Reproduced issue on org.apache.solr:solr-core:6.2.1 and 6.6.2
>Reporter: Andreas Presthammer
>Priority: Major
> Attachments: SOLR-11761.patch
>
>
> Repro:
> org.apache.solr.parser.QueryParser queryParser = ...
> queryParser.parse("/* foo */ bar"); // works fine
> queryParser.parse("/*"); // fails with SyntaxError, which is correct.
> queryParser.parse("/* foo */ bar"); // Fails with SyntaxError. This is the bug
> queryParser.parse("bar"); // works fine
> queryParser.parse("/* foo */ bar"); // Still failing with SyntaxError
> The last parse call will continue to fail for expressions containing 
> comments. The only way I've found to work around this is to create a new 
> instance of QueryParser.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11770) NPE in tvrh if no field is specified and document doesn't contain any fields with term vectors

2018-01-31 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11770:
-
Component/s: SearchComponents - other

> NPE in tvrh if no field is specified and document doesn't contain any fields 
> with term vectors
> --
>
> Key: SOLR-11770
> URL: https://issues.apache.org/jira/browse/SOLR-11770
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SearchComponents - other
>Affects Versions: 6.6.2
>Reporter: Nikolay Martynov
>Assignee: Erick Erickson
>Priority: Major
>
> It looks like if a {{tvrh}} request doesn't contain an {{fl}} parameter and 
> the document doesn't have any fields with term vectors, then Solr returns an NPE.
> Request: 
> {{tvrh?shards.qt=/tvrh=field%3Avalue=json=id%3A123=true}}.
> On our 'old' schema we had some fields with {{termVectors}} and even more 
> fields with position data. In our new schema we tried to remove unused data 
> so we dropped a lot of position data and some term vectors.
> Our documents are 'sparsely' populated - not all documents contain all fields.
> The above request was returning fine for our 'old' schema and returns 500 for 
> our 'new' schema - on exactly the same Solr (6.6.2).
> Stack trace:
> {code}
> 2017-12-18 01:15:00.958 ERROR (qtp255041198-46697) [c:test s:shard3 
> r:core_node11 x:test_shard3_replica1] o.a.s.h.RequestHandlerBase 
> java.lang.NullPointerException
>at 
> org.apache.solr.handler.component.TermVectorComponent.process(TermVectorComponent.java:324)
>at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:296)
>at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)
>at org.apache.solr.core.SolrCore.execute(SolrCore.java:2482)
>at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:723)
>at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:529)
>at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
>at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
>at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
>at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>at org.eclipse.jetty.server.Server.handle(Server.java:534)
>at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
>at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>at 

[jira] [Updated] (SOLR-11776) HttpSolrClient should handle SocketException:broken pipe

2018-01-31 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11776:
-
Component/s: SolrCloud

> HttpSolrClient should handle SocketException:broken pipe
> 
>
> Key: SOLR-11776
> URL: https://issues.apache.org/jira/browse/SOLR-11776
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Cao Manh Dat
>Priority: Major
>
> There is a case where the leader sends an update to a replica but hits a 
> {{SocketException: broken pipe}} (HttpPartitionTest); the leader treats this 
> exception like any other SocketException and puts the replica into LIR mode. 
> But this exception means an existing connection between the leader and the 
> replica was closed on one side. I think we should handle this exception in a 
> different way (e.g. reconnect) so the leader won't blindly put the replica 
> into LIR even when the connection between the leader and the replica is healthy.
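
A rough sketch of the kind of distinction being proposed, in plain Java (the 
Callable parameters stand in for the real HTTP calls; none of this is existing 
Solr code):

{code:java}
import java.net.SocketException;
import java.util.concurrent.Callable;

/** Sketch only: telling a broken pipe apart from other socket errors. */
class BrokenPipeAwareForwarder {

  /** send stands in for the HTTP call to the replica; retry for the same call on a fresh connection. */
  void forwardUpdate(Callable<Void> send, Callable<Void> retry) throws Exception {
    try {
      send.call();
    } catch (SocketException e) {
      String msg = e.getMessage();
      if (msg != null && msg.toLowerCase().contains("broken pipe")) {
        // The existing connection was closed on one side; the replica itself may
        // be healthy, so retry on a fresh connection instead of going straight to LIR.
        retry.call();
      } else {
        throw e; // keep the current LIR path for other socket errors
      }
    }
  }
}
{code}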



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11775) json.facet can use inconsistent Long/Integer for "count" depending on shard count

2018-01-31 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11775:
-
Component/s: Facet Module

> json.facet can use inconsistent Long/Integer for "count" depending on shard 
> count
> -
>
> Key: SOLR-11775
> URL: https://issues.apache.org/jira/browse/SOLR-11775
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Hoss Man
>Priority: Major
>
> (NOTE: I noticed this while working on a test for {{type: range}} but it's 
> possible other facet types may be affected as well)
> When dealing with a single core request -- either standalone or a collection 
> with only one shard -- json.facet seems to use "Integer" objects to return 
> the "count" of facet buckets; however, if the shard count is increased then 
> the end client gets a "Long" object for the "count".
> (This isn't noticeable when using {{wt=json}}, but it can be very problematic 
> when trying to write client code using {{wt=xml}} or SolrJ.)
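
Until the types are consistent, client code can defend itself by going through 
java.lang.Number. A hedged SolrJ sketch (the base URL, collection, and facet 
request below are made up for illustration):

{code:java}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.util.NamedList;

public class JsonFacetCountSketch {
  public static void main(String[] args) throws Exception {
    try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      SolrQuery q = new SolrQuery("*:*");
      q.add("json.facet", "{categories:{type:terms,field:cat}}");
      QueryResponse rsp = client.query("techproducts", q);

      NamedList<?> facets = (NamedList<?>) rsp.getResponse().get("facets");
      Object rawCount = facets.get("count");
      // Integer on a single core, Long when the request is distributed;
      // Number.longValue() works for both.
      long count = ((Number) rawCount).longValue();
      System.out.println("matched docs: " + count);
    }
  }
}
{code}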



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11788) conjunction of filterQueries not working properly

2018-01-31 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett resolved SOLR-11788.
--
Resolution: Incomplete

The reporter is asked to use the Solr-User mailing list first and to file an 
issue only once a bug has been confirmed. Information on our mailing lists is at 
https://lucene.apache.org/solr/community.html#mailing-lists-irc.

> conjunction of filterQueries not working properly
> -
>
> Key: SOLR-11788
> URL: https://issues.apache.org/jira/browse/SOLR-11788
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 6.4.1
>Reporter: Patrick Klampfl
>Priority: Major
>
> Those two filter queries work fine individually and lead to the same result:
> 1. all brands of products with application 50
> http://localhost:8983/solr/master_wac_WacBrandClassificationClass_flop/select?q=*:*={!join%20from=brands_string_mv%20to=code_string%20fromIndex=master_wac_Product_flop}applications_string_mv:50
> 2. all brands of products with category 100
> http://localhost:8983/solr/master_wac_WacBrandClassificationClass_flop/select?q=*:*={!join%20from=brands_string_mv%20to=code_string%20fromIndex=master_wac_Product_flop}category_string_mv:100
> Although 1 and 2 have the exact same result set, the conjunction of both does 
> not work.
> 3. combination:
> http://localhost:8983/solr/master_wac_WacBrandClassificationClass_flop/select?q=*:*={!join%20from=brands_string_mv%20to=code_string%20fromIndex=master_wac_Product_flop}applications_string_mv:50,{!join%20from=brands_string_mv%20to=code_string%20fromIndex=master_wac_Product_flop}category_string_mv:100
> Result for 1:
> {code:java}
> 
> 
> 0 name="QTime">0*:* name="fq">{!join from=brands_string_mv to=code_string 
> fromIndex=master_wac_Product_flop}applications_string_mv:50  name="response" numFound="1" start="0"> name="indexOperationId">99207272441184304 name="id">wackerMarketingClassificationCatalog/Default/36 name="pk">8796094169230 name="catalogId">wackerMarketingClassificationCatalog name="catalogVersion">Default name="itemtype_string">WacBrandClassificationClass name="code_string">36ELASTOSIL ® 
> WTELASTOSIL ® 
> WTELASTOSIL ® WT name="spellcheck_fr">ELASTOSIL ® WT name="name_text_en">ELASTOSIL ® WT name="name_sortable_en_sortabletext">ELASTOSIL ® WT name="autosuggest_en">ELASTOSIL ® WT name="spellcheck_en">ELASTOSIL ® WT name="name_text_de">ELASTOSIL ® WT name="name_sortable_de_sortabletext">ELASTOSIL ® WT name="autosuggest_de">ELASTOSIL ® WT name="spellcheck_de">ELASTOSIL ® WT name="indexedType_string">WacBrandClassificationClass name="_version_">1587316359348355073
> 
> {code}
> Result for 2:
> {code:java}
> 
> 
> 0 name="QTime">0*:* name="fq">{!join from=brands_string_mv to=code_string 
> fromIndex=master_wac_Product_flop}category_string_mv:100  name="response" numFound="1" start="0"> name="indexOperationId">99207272441184304 name="id">wackerMarketingClassificationCatalog/Default/36 name="pk">8796094169230 name="catalogId">wackerMarketingClassificationCatalog name="catalogVersion">Default name="itemtype_string">WacBrandClassificationClass name="code_string">36ELASTOSIL ® 
> WTELASTOSIL ® 
> WTELASTOSIL ® WT name="spellcheck_fr">ELASTOSIL ® WT name="name_text_en">ELASTOSIL ® WT name="name_sortable_en_sortabletext">ELASTOSIL ® WT name="autosuggest_en">ELASTOSIL ® WT name="spellcheck_en">ELASTOSIL ® WT name="name_text_de">ELASTOSIL ® WT name="name_sortable_de_sortabletext">ELASTOSIL ® WT name="autosuggest_de">ELASTOSIL ® WT name="spellcheck_de">ELASTOSIL ® WT name="indexedType_string">WacBrandClassificationClass name="_version_">1587316359348355073
> 
> {code}
> Result for 3 (combination of fq 1 and 2):
> {code:java}
> 
> 
> 0 name="QTime">0*:* name="fq">{!join from=brands_string_mv to=code_string 
> fromIndex=master_wac_Product_flop}applications_string_mv:50,{!join 
> from=brands_string_mv to=code_string 
> fromIndex=master_wac_Product_flop}category_string_mv:100  name="response" numFound="0" start="0">
> 
> {code}
> This filter query, however, works (and leads to the same result as 1 and 2):
> http://localhost:8983/solr/master_wac_WacBrandClassificationClass_flop/select?q=*:*={!join%20from=brands_string_mv%20to=code_string%20fromIndex=master_wac_Product_flop}category_string_mv:100
>  AND applications_string_mv:50
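
For what it's worth, a comma inside a single fq value does not AND two join 
queries together; each join needs to go into its own fq parameter, which Solr 
then intersects - that likely explains the numFound=0 in case 3. A SolrJ sketch 
of the two-fq form (field and core names taken from the report):

{code:java}
import org.apache.solr.client.solrj.SolrQuery;

public class TwoFilterQueriesSketch {
  public static void main(String[] args) {
    // Each join goes into its own fq parameter; Solr intersects all fq results.
    SolrQuery q = new SolrQuery("*:*");
    q.addFilterQuery("{!join from=brands_string_mv to=code_string "
        + "fromIndex=master_wac_Product_flop}applications_string_mv:50");
    q.addFilterQuery("{!join from=brands_string_mv to=code_string "
        + "fromIndex=master_wac_Product_flop}category_string_mv:100");
    System.out.println(q);  // prints q=*:*&fq=...&fq=...
  }
}
{code}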



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11787) StackOverflowError in leader election

2018-01-31 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11787:
-
Component/s: SolrCloud

> StackOverflowError in leader election
> -
>
> Key: SOLR-11787
> URL: https://issues.apache.org/jira/browse/SOLR-11787
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.1
>Reporter: Markus Jelsma
>Priority: Major
> Fix For: master (8.0)
>
>
> Just got this:
> {code}
> Exception in thread "coreZkRegister-1-thread-1" Exception in thread 
> "coreZkRegister-1-thread-3" java.lang.StackOverflowError
> at java.text.DecimalFormat.subformat(DecimalFormat.java:1639)
> at java.text.DecimalFormat.format(DecimalFormat.java:712)
> at java.text.DecimalFormat.format(DecimalFormat.java:646)
> at 
> java.text.SimpleDateFormat.zeroPaddingNumber(SimpleDateFormat.java:1393)
> at java.text.SimpleDateFormat.subFormat(SimpleDateFormat.java:1332)
> at java.text.SimpleDateFormat.format(SimpleDateFormat.java:966)
> at java.text.SimpleDateFormat.format(SimpleDateFormat.java:936)
> at 
> org.apache.log4j.pattern.DatePatternConverter$DefaultZoneDateFormat.format(DatePatternConverter.java:96)
> at java.text.DateFormat.format(DateFormat.java:345)
> at 
> org.apache.log4j.pattern.CachedDateFormat.format(CachedDateFormat.java:283)
> at 
> org.apache.log4j.pattern.DatePatternConverter.format(DatePatternConverter.java:180)
> at 
> org.apache.log4j.pattern.BridgePatternConverter.format(BridgePatternConverter.java:119)
> at 
> org.apache.log4j.EnhancedPatternLayout.format(EnhancedPatternLayout.java:546)
> at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:310)
> at 
> org.apache.log4j.RollingFileAppender.subAppend(RollingFileAppender.java:276)
> at org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
> at 
> org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
> at 
> org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
> at org.apache.log4j.Category.callAppenders(Category.java:206)
> at org.apache.log4j.Category.forcedLog(Category.java:391)
> at org.apache.log4j.Category.log(Category.java:856)
> at org.slf4j.impl.Log4jLoggerAdapter.warn(Log4jLoggerAdapter.java:478)
> at org.apache.solr.update.PeerSync.handleResponse(PeerSync.java:485)
> at org.apache.solr.update.PeerSync.sync(PeerSync.java:348)
> at 
> org.apache.solr.cloud.SyncStrategy.syncWithReplicas(SyncStrategy.java:180)
> at 
> org.apache.solr.cloud.SyncStrategy.syncReplicas(SyncStrategy.java:129)
> at org.apache.solr.cloud.SyncStrategy.sync(SyncStrategy.java:108)
> at 
> org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:386)
> at 
> org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:170)
> at 
> org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:135)
> at 
> org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:307)
> at 
> org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:216)
> at 
> org.apache.solr.cloud.ShardLeaderElectionContext.rejoinLeaderElection(ElectionContext.java:729)
> at 
> org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:435)
> at 
> org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:170)
> at 
> org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:135)
> at 
> org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:307)
> at 
> org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:216)
> at 
> org.apache.solr.cloud.ShardLeaderElectionContext.rejoinLeaderElection(ElectionContext.java:729)
> at 
> org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:435)
> at 
> org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:170)
> at 
> org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:135)
> at 
> org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:307)
> at 
> org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:216)
> at 
> org.apache.solr.cloud.ShardLeaderElectionContext.rejoinLeaderElection(ElectionContext.java:729)
> at 
> org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:435)
> 
> {code}
> Never 

[jira] [Updated] (SOLR-11788) conjunction of filterQueries not working properly

2018-01-31 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11788:
-
Component/s: search

> conjunction of filterQueries not working properly
> -
>
> Key: SOLR-11788
> URL: https://issues.apache.org/jira/browse/SOLR-11788
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 6.4.1
>Reporter: Patrick Klampfl
>Priority: Major
>
> Those two filter queries work fine individually and lead to the same result:
> 1. all brands of products with application 50
> http://localhost:8983/solr/master_wac_WacBrandClassificationClass_flop/select?q=*:*={!join%20from=brands_string_mv%20to=code_string%20fromIndex=master_wac_Product_flop}applications_string_mv:50
> 2. all brands of products with category 100
> http://localhost:8983/solr/master_wac_WacBrandClassificationClass_flop/select?q=*:*={!join%20from=brands_string_mv%20to=code_string%20fromIndex=master_wac_Product_flop}category_string_mv:100
> Although 1 and 2 have the exact same result set, the conjunction of both does 
> not work.
> 3. combination:
> http://localhost:8983/solr/master_wac_WacBrandClassificationClass_flop/select?q=*:*={!join%20from=brands_string_mv%20to=code_string%20fromIndex=master_wac_Product_flop}applications_string_mv:50,{!join%20from=brands_string_mv%20to=code_string%20fromIndex=master_wac_Product_flop}category_string_mv:100
> Result for 1:
> {code:java}
> 
> 
> 0 name="QTime">0*:* name="fq">{!join from=brands_string_mv to=code_string 
> fromIndex=master_wac_Product_flop}applications_string_mv:50  name="response" numFound="1" start="0"> name="indexOperationId">99207272441184304 name="id">wackerMarketingClassificationCatalog/Default/36 name="pk">8796094169230 name="catalogId">wackerMarketingClassificationCatalog name="catalogVersion">Default name="itemtype_string">WacBrandClassificationClass name="code_string">36ELASTOSIL ® 
> WTELASTOSIL ® 
> WTELASTOSIL ® WT name="spellcheck_fr">ELASTOSIL ® WT name="name_text_en">ELASTOSIL ® WT name="name_sortable_en_sortabletext">ELASTOSIL ® WT name="autosuggest_en">ELASTOSIL ® WT name="spellcheck_en">ELASTOSIL ® WT name="name_text_de">ELASTOSIL ® WT name="name_sortable_de_sortabletext">ELASTOSIL ® WT name="autosuggest_de">ELASTOSIL ® WT name="spellcheck_de">ELASTOSIL ® WT name="indexedType_string">WacBrandClassificationClass name="_version_">1587316359348355073
> 
> {code}
> Result for 2:
> {code:java}
> 
> 
> 0 name="QTime">0*:* name="fq">{!join from=brands_string_mv to=code_string 
> fromIndex=master_wac_Product_flop}category_string_mv:100  name="response" numFound="1" start="0"> name="indexOperationId">99207272441184304 name="id">wackerMarketingClassificationCatalog/Default/36 name="pk">8796094169230 name="catalogId">wackerMarketingClassificationCatalog name="catalogVersion">Default name="itemtype_string">WacBrandClassificationClass name="code_string">36ELASTOSIL ® 
> WTELASTOSIL ® 
> WTELASTOSIL ® WT name="spellcheck_fr">ELASTOSIL ® WT name="name_text_en">ELASTOSIL ® WT name="name_sortable_en_sortabletext">ELASTOSIL ® WT name="autosuggest_en">ELASTOSIL ® WT name="spellcheck_en">ELASTOSIL ® WT name="name_text_de">ELASTOSIL ® WT name="name_sortable_de_sortabletext">ELASTOSIL ® WT name="autosuggest_de">ELASTOSIL ® WT name="spellcheck_de">ELASTOSIL ® WT name="indexedType_string">WacBrandClassificationClass name="_version_">1587316359348355073
> 
> {code}
> Result for 3 (combination of fq 1 and 2):
> {code:java}
> 
> 
> 0 name="QTime">0*:* name="fq">{!join from=brands_string_mv to=code_string 
> fromIndex=master_wac_Product_flop}applications_string_mv:50,{!join 
> from=brands_string_mv to=code_string 
> fromIndex=master_wac_Product_flop}category_string_mv:100  name="response" numFound="0" start="0">
> 
> {code}
> This filter query, however, works (and leads to the same result as 1 and 2):
> http://localhost:8983/solr/master_wac_WacBrandClassificationClass_flop/select?q=*:*={!join%20from=brands_string_mv%20to=code_string%20fromIndex=master_wac_Product_flop}category_string_mv:100
>  AND applications_string_mv:50



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11792) tvrh component doesn't work if unique key has stored="false"

2018-01-31 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11792:
-
Component/s: SearchComponents - other

> tvrh component doesn't work if unique key has stored="false"
> 
>
> Key: SOLR-11792
> URL: https://issues.apache.org/jira/browse/SOLR-11792
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SearchComponents - other
>Affects Versions: 6.6.2
>Reporter: Nikolay Martynov
>Priority: Major
>
> If I create index with unique key defined like
> {code}
>  docValues="true"/>
> {code}
> then searches seem to be working, but {{tvrh}} doesn't return any vectors for 
> fields that do have term vectors stored.
> Upon a cursory look at the code, it looks like the {{tvrh}} component requires 
> the unique key itself to be stored.
> Ideally {{tvrh}} should work fine with docValues. And at the very least this 
> gotcha should be documented, probably here: 
> https://lucene.apache.org/solr/guide/6_6/field-properties-by-use-case.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11799) Fix NPE and class cast exceptions in the TimeSeriesStream

2018-01-31 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11799:
-
Component/s: streaming expressions

> Fix NPE and class cast exceptions in the TimeSeriesStream
> -
>
> Key: SOLR-11799
> URL: https://issues.apache.org/jira/browse/SOLR-11799
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 7.3
>
> Attachments: SOLR-11799.patch
>
>
> Currently the timeseries Streaming Expression will throw an NPE if there are 
> no results for a bucket and any function other than count(*) is used. It can 
> also throw class cast exceptions if the JSON facet API returns a long for any 
> function (other than count(*)), as it is always expecting a double.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11826) TestPKIAuthenticationPlugin NPE

2018-01-31 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11826:
-
Component/s: Authentication

> TestPKIAuthenticationPlugin NPE
> ---
>
> Key: SOLR-11826
> URL: https://issues.apache.org/jira/browse/SOLR-11826
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication
>Reporter: Varun Thacker
>Priority: Major
> Attachments: jenkins_build19_72.log
>
>
> I see Jenkins failures for TestPKIAuthenticationPlugin.
> In SOLR-9401 we addressed a few scenarios, but maybe there are more cases 
> where we run into this.
> I'll upload logs from one such scenario for reference, but the test fails 
> quite regularly.
> This link is still active 
> https://builds.apache.org/job/Lucene-Solr-Tests-7.2/19/ and I've uploaded the 
> logs from this test scenario for reference.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11827) MockAuthorizationPlugin should return 401 if no principal is specified

2018-01-31 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11827:
-
Component/s: Authentication

> MockAuthorizationPlugin should return 401 if no principal is specified
> --
>
> Key: SOLR-11827
> URL: https://issues.apache.org/jira/browse/SOLR-11827
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication
>Reporter: Varun Thacker
>Priority: Major
>
> Today, if the leader sends a message to the replica and it takes more than 
> 10s (the default TTL timeout), then PKIAuthenticationPlugin will not pass the 
> principal, and RuleBasedAuthorizationPlugin will notice this and throw a 401:
> {code:title=PKIAuthenticationPlugin.java|borderStyle=solid}
> if ((receivedTime - decipher.timestamp) > MAX_VALIDITY) {
> log.error("Invalid key request timestamp: {} , received timestamp: {} 
> , TTL: {}", decipher.timestamp, receivedTime, MAX_VALIDITY);
> filterChain.doFilter(request, response);
> return true;
> }
> {code}
> {code:title=RuleBasedAuthorizationPlugin.java|borderStyle=solid}
> if (principal == null) {
> log.info("request has come without principal. failed permission {} 
> ",permission);
> //this resource needs a principal but the request has come without
> //any credential.
> return MatchStatus.USER_REQUIRED;
>   }
> {code}
> I was trying to verify this with PKIAuthenticationIntegrationTest, but I 
> noticed that, since this test uses MockAuthorizationPlugin, where no principal 
> is treated as a 200, the test won't fail.
> So we should enhance MockAuthorizationPlugin to treat no principal as a 401 
> and add a test in PKIAuthenticationIntegrationTest to verify the behaviour.
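
A rough sketch of the proposed behavior, assuming the plugin keeps its current 
authorize() shape (illustrative only; the class name is made up and this is not 
the actual patch):

{code:java}
import java.security.Principal;
import org.apache.solr.security.AuthorizationContext;
import org.apache.solr.security.AuthorizationResponse;

/** Illustrative only: treat a missing principal as "authentication required". */
public class StrictMockAuthorizationSketch {

  public AuthorizationResponse authorize(AuthorizationContext context) {
    Principal principal = context.getUserPrincipal();
    if (principal == null) {
      return AuthorizationResponse.PROMPT;  // 401: ask the client to authenticate
    }
    return AuthorizationResponse.OK;        // 200: authorized as before
  }
}
{code}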



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11835) Adjust instructions for Ukrainian on LanguageAnalysis page

2018-01-31 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett resolved SOLR-11835.
--
Resolution: Fixed

> Adjust instructions for Ukrainian on LanguageAnalysis page
> --
>
> Key: SOLR-11835
> URL: https://issues.apache.org/jira/browse/SOLR-11835
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Andriy Rysin
>Assignee: Cassandra Targett
>Priority: Major
> Fix For: 7.3
>
>
> Since Lucene 6.6 the dictionary for the Ukrainian analyzer contains all proper 
> names in lowercase, which seems to be a much better way to have it for searching.
> Can we please move LowerCaseFilterFactory back before MorfologikFilterFactory 
> at 
> https://lucene.apache.org/solr/guide/6_6/language-analysis.html#LanguageAnalysis-Ukrainian?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11835) Adjust instructions for Ukrainian on LanguageAnalysis page

2018-01-31 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347391#comment-16347391
 ] 

ASF subversion and git services commented on SOLR-11835:


Commit 70a9e5b3f58d06f1a7616a9b2c7f9681a7ecd5eb in lucene-solr's branch 
refs/heads/branch_7x from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=70a9e5b ]

SOLR-11835: Adjust Ukranian language example


> Adjust instructions for Ukrainian on LanguageAnalysis page
> --
>
> Key: SOLR-11835
> URL: https://issues.apache.org/jira/browse/SOLR-11835
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Andriy Rysin
>Assignee: Cassandra Targett
>Priority: Major
> Fix For: 7.3
>
>
> Since Lucene 6.6 the dictionary for the Ukrainian analyzer contains all proper 
> names in lowercase, which seems to be a much better way to have it for searching.
> Can we please move LowerCaseFilterFactory back before MorfologikFilterFactory 
> at 
> https://lucene.apache.org/solr/guide/6_6/language-analysis.html#LanguageAnalysis-Ukrainian?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11835) Adjust instructions for Ukrainian on LanguageAnalysis page

2018-01-31 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347387#comment-16347387
 ] 

Cassandra Targett commented on SOLR-11835:
--

I changed the example so that the analyzer chain now applies 
LowerCaseFilterFactory before MorfologikFilterFactory (the XML snippet itself 
was stripped from this archived message).

And then removed the next paragraph which explained why LowerCaseFilterFactory 
was placed after MorfologikFilterFactory. The paragraph after that remains the 
same.

> Adjust instructions for Ukrainian on LanguageAnalysis page
> --
>
> Key: SOLR-11835
> URL: https://issues.apache.org/jira/browse/SOLR-11835
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Andriy Rysin
>Assignee: Cassandra Targett
>Priority: Major
> Fix For: 7.3
>
>
> Since Lucene 6.6 the dictionary for the Ukrainian analyzer contains all proper 
> names in lowercase, which seems to be a much better way to have it for searching.
> Can we please move LowerCaseFilterFactory back before MorfologikFilterFactory 
> at 
> https://lucene.apache.org/solr/guide/6_6/language-analysis.html#LanguageAnalysis-Ukrainian?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11835) Adjust instructions for Ukrainian on LanguageAnalysis page

2018-01-31 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11835:
-
Fix Version/s: 7.3

> Adjust instructions for Ukrainian on LanguageAnalysis page
> --
>
> Key: SOLR-11835
> URL: https://issues.apache.org/jira/browse/SOLR-11835
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Andriy Rysin
>Assignee: Cassandra Targett
>Priority: Major
> Fix For: 7.3
>
>
> Since Lucene 6.6 the dictionary for the Ukrainian analyzer contains all proper 
> names in lowercase, which seems to be a much better way to have it for searching.
> Can we please move LowerCaseFilterFactory back before MorfologikFilterFactory 
> at 
> https://lucene.apache.org/solr/guide/6_6/language-analysis.html#LanguageAnalysis-Ukrainian?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-11835) Adjust instructions for Ukrainian on LanguageAnalysis page

2018-01-31 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett reassigned SOLR-11835:


Assignee: Cassandra Targett

> Adjust instructions for Ukrainian on LanguageAnalysis page
> --
>
> Key: SOLR-11835
> URL: https://issues.apache.org/jira/browse/SOLR-11835
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Andriy Rysin
>Assignee: Cassandra Targett
>Priority: Major
> Fix For: 7.3
>
>
> Since Lucene 6.6 the dictionary for the Ukrainian analyzer contains all proper 
> names in lowercase, which seems to be a much better way to have it for searching.
> Can we please move LowerCaseFilterFactory back before MorfologikFilterFactory 
> at 
> https://lucene.apache.org/solr/guide/6_6/language-analysis.html#LanguageAnalysis-Ukrainian?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11845) OverseerTaskQueue throws Exception when the response node does not exist

2018-01-31 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11845:
-
Component/s: SolrCloud

> OverseerTaskQueue throws Exception when the response node does not exist
> 
>
> Key: SOLR-11845
> URL: https://issues.apache.org/jira/browse/SOLR-11845
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Noble Paul
>Priority: Minor
>
> {code}
> 2017-12-15 13:00:40.789 ERROR 
> (OverseerCollectionConfigSetProcessor-1540306989440696326-vlpijengs305:8983_solr-n_04)
>  [   ] o.a.s.c.OverseerTaskProcessor 
> :org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
> NoNode for /overseer/collection-queue-work/qnr-000832
> at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
> at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
> at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:1327)
> at 
> org.apache.solr.common.cloud.SolrZkClient$8.execute(SolrZkClient.java:374)
> at 
> org.apache.solr.common.cloud.SolrZkClient$8.execute(SolrZkClient.java:371)
> at 
> org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60)
> at 
> org.apache.solr.common.cloud.SolrZkClient.setData(SolrZkClient.java:371)
> at 
> org.apache.solr.common.cloud.SolrZkClient.setData(SolrZkClient.java:572)
> at 
> org.apache.solr.cloud.OverseerTaskQueue.remove(OverseerTaskQueue.java:94)
> at 
> org.apache.solr.cloud.OverseerTaskProcessor.cleanUpWorkQueue(OverseerTaskProcessor.java:321)
> at 
> org.apache.solr.cloud.OverseerTaskProcessor.run(OverseerTaskProcessor.java:202)
> at java.lang.Thread.run(Thread.java:748)
> {code}
> This should not throw an exception. If the response ZK node does not exist, 
> there is no need to throw an exception back. 
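
A minimal sketch of the suggested guard (the class, method, and path variable 
are illustrative, not the actual patch):

{code:java}
import org.apache.solr.common.cloud.SolrZkClient;
import org.apache.zookeeper.KeeperException;

/** Illustrative only: ignore a missing response node instead of logging an ERROR. */
class ResponseNodeCleanupSketch {

  void markResponseHandled(SolrZkClient zkClient, String responsePath) throws Exception {
    try {
      zkClient.setData(responsePath, (byte[]) null, true);
    } catch (KeeperException.NoNodeException e) {
      // The response node is already gone, so there is nothing left to clean up;
      // swallowing the exception here avoids the stack trace shown above.
    }
  }
}
{code}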



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11858) NPE in DirectSpellChecker

2018-01-31 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347370#comment-16347370
 ] 

Cassandra Targett commented on SOLR-11858:
--

I'm not able to reproduce this with either 7.1 or 7.2, but I'm not sure if I'm 
following your setup. Here's what I did:

# Started Solr with {{bin/solr -e techproducts}}. This:
## Creates a collection with the {{sample_techproducts_config}} with spellcheck 
defaults that seem to match with what  you used
## Indexes exampledocs
# Then I did a query for "aple" to the /spell requestHandler: 
http://localhost:8654/solr/techproducts/spell?q=aple
# I got 1 result as expected for that dataset ("apple").

Can you correct these steps or add any additional information to try to 
reproduce this?
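
In SolrJ terms, the check in the steps above boils down to roughly this sketch 
(base URL and collection from the techproducts example; a null spellcheck 
section would match the reported behavior):

{code:java}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.client.solrj.response.SpellCheckResponse;

public class SpellRequestSketch {
  public static void main(String[] args) throws Exception {
    try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      SolrQuery q = new SolrQuery("aple");
      q.setRequestHandler("/spell");
      QueryResponse rsp = client.query("techproducts", q);

      SpellCheckResponse spell = rsp.getSpellCheckResponse();
      if (spell == null) {
        System.out.println("no spellcheck section in the response");
      } else {
        spell.getSuggestions().forEach(s ->
            System.out.println(s.getToken() + " -> " + s.getAlternatives()));
      }
    }
  }
}
{code}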

> NPE in DirectSpellChecker
> -
>
> Key: SOLR-11858
> URL: https://issues.apache.org/jira/browse/SOLR-11858
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spellchecker
>Affects Versions: 7.1
>Reporter: Markus Jelsma
>Priority: Major
> Fix For: master (8.0), 7.3
>
>
> We just came across the following NPE. It seems this NPE only appears when 
> the query is incorrectly spelled but the response has more than 0 results. We 
> have not observed this on other 7.1.0 deployments. 
> {code}
> 2018-01-16 09:15:00.009 ERROR (qtp329611835-19) [c] o.a.s.h.RequestHand
> lerBase java.lang.NullPointerException
>  at 
> org.apache.lucene.search.spell.DirectSpellChecker.suggestSimilar(DirectSpellChecker.java:421)
>  at 
> org.apache.lucene.search.spell.DirectSpellChecker.suggestSimilar(DirectSpellChecker.java:353)
>  at 
> org.apache.solr.spelling.DirectSolrSpellChecker.getSuggestions(DirectSolrSpellChecker.java:186)
>  at 
> org.apache.solr.handler.component.SpellCheckComponent.process(SpellCheckComponent.java:195)
>  at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:295)
>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
>  at org.apache.solr.core.SolrCore.execute(SolrCore.java:2484)
>  at 
> {code}
> Config:
> {code}
>  
> text_general
> 
>   default
>   spellcheck
>   solr.DirectSolrSpellChecker
>   internal
>   0.5
>   2
>   1
>   5
>   4
>   0.01
> 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11884) find/fix inefficiencies in our use of logging

2018-01-31 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11884:
-
Component/s: logging

> find/fix inefficiencies in our use of logging
> -
>
> Key: SOLR-11884
> URL: https://issues.apache.org/jira/browse/SOLR-11884
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> We've been looking at Solr using Flight Recorder and ran across some 
> interesting things I'd like to discuss. Let's discuss general logging 
> approaches here, then perhaps break out sub-JIRAs when we reach any kind of 
> agreement.
> 1> Every log message generates a new Throwable, presumably to get things like 
> line number, file, class name and the like. On a 2 minute run blasting 
> updates this meant 150,000 (yes, 150K) instances of "new Throwable()".
>  
> See the section "Asynchronous Logging with Caller Location Information" at:
> [https://logging.apache.org/log4j/2.x/performance.html]
> I'm not totally sure changing the layout pattern will fix this in log4j 1.x, 
> but it apparently should in log4j 2.
>  
> The cost of course would be that lots of our log messages would lack some of 
> the information. Exceptions would still contain all the file/class/line 
> information of course.
>  
> Proposal:
> Change the layout pattern to, by default, _NOT_  include information that 
> requires a Throwable to be created. Also include a pattern that could be 
> un-commented to get this information back for troubleshooting.
>  
> 
>  
> We generate strings when we don't need them. Any construct like
> log.info("whatever " + method_that_builds_a_string + " : " + some_variable);
> generates the string (some of which are quite expensive) and then throws it 
> away if the log level is at, say, WARN. The above link also shows that 
> parameterizing this doesn't suffer this problem, so anything like the above 
> should be re-written as:
> log.info("whatever {} : {} ", method_that_builds_a_string, some_variable);
>  
> The alternative is to do something like the following, but let's make use of 
> the built-in capabilities instead:
> if (log.level >= INFO) {
>    log.info("whatever " + method_that_builds_a_string + " : " + 
> some_variable);
> }
> etc.
> This would be a pretty huge thing to fix all at once, so I suppose we'll have 
> to approach it incrementally. It's also something that, once we get them all 
> out of the code, should be added to precommit failures. In the meantime, if 
> anyone who has the precommit chops could create a target that checks for 
> this, it'd be a great help in tracking all of them down; it could then be 
> incorporated in the regular precommit checks if/when they're all removed.
> Proposal:
> Use JFR or whatever to identify the egregious violations of this kind of 
> thing (I have a couple I've found) and change them to parameterized form (and 
> prove it works). Then see what we can do to move forward with removing them 
> all through the code base.
>  
>  
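
For reference, a small self-contained slf4j example of the two forms discussed 
above (the expensiveDescription() helper is made up to stand in for a costly 
string-building method):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingFormsSketch {
  private static final Logger log = LoggerFactory.getLogger(LoggingFormsSketch.class);

  static String expensiveDescription() {  // stands in for an expensive string builder
    return "pretend this walks a big data structure";
  }

  public static void main(String[] args) {
    int someVariable = 42;

    // Parameterized form: the message is only assembled if INFO is enabled,
    // although the arguments themselves are still evaluated.
    log.info("whatever {} : {}", expensiveDescription(), someVariable);

    // Explicit guard: also skips evaluating the arguments when INFO is off.
    if (log.isInfoEnabled()) {
      log.info("whatever {} : {}", expensiveDescription(), someVariable);
    }
  }
}
{code}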



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11916) new SortableTextField using docValues built from the original string input

2018-01-31 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11916:
-
Component/s: Schema and Analysis

> new SortableTextField using docValues built from the original string input
> --
>
> Key: SOLR-11916
> URL: https://issues.apache.org/jira/browse/SOLR-11916
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Schema and Analysis
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Attachments: SOLR-11916.patch
>
>
> I propose adding a new SortableTextField subclass that would functionally 
> work the same as TextField except:
>  * {{docValues="true|false"}} could be configured, with the default being 
> "true"
>  * The docValues would contain the original input values (just like StrField) 
> for sorting (or faceting)
>  ** By default, to protect users from excessively large docValues, only the 
> first 1024 characters of each field value would be used – but this could be 
> overridden with configuration.
> 
> Consider the following sample configuration:
> {code:java}
> indexed="true" docValues="true" stored="true" multiValued="false"/>
> 
>   
>...
>   
>   
>...
>   
> 
> {code}
> Given a document with a title of "Solr In Action"
> Users could:
>  * Search for individual (indexed) terms in the "title" field: 
> {{q=title:solr}}
>  * Sort documents by title ( {{sort=title asc}} ) such that this document's 
> sort value would be "Solr In Action"
> If another document had a "title" value that was longer than 1024 chars, then 
> the docValues would be built using only the first 1024 characters of the 
> value (unless the user modified the configuration).
> This would be functionally equivalent to the following existing configuration 
> - including the on disk index segments - except that the on disk DocValues 
> would refer directly to the "title" field, reducing the total number of 
> "field infos" in the index (which has a small impact on segment housekeeping 
> and merge times) and end users would not need to sort on an alternate 
> "title_string" field name - the original "title" field name would always be 
> used directly.
> {code:java}
> indexed="true" docValues="true" stored="true" multiValued="false"/>
> indexed="false" docValues="true" stored="false" multiValued="false"/>
> 
> {code}
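
From the client side, the intended usage would look roughly like this SolrJ 
sketch (the collection name "books" and base URL are made up; the field type 
itself is what the attached patch proposes):

{code:java}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class SortableTitleQuerySketch {
  public static void main(String[] args) throws Exception {
    try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      SolrQuery q = new SolrQuery("title:solr");    // match on the indexed terms
      q.setSort("title", SolrQuery.ORDER.asc);      // sort on the docValues of the same field
      System.out.println(client.query("books", q).getResults());
    }
  }
}
{code}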



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11917) A Potential Roadmap for robust multi-analyzer TextFields w/various options for configuring docValues

2018-01-31 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11917:
-
Component/s: Schema and Analysis

> A Potential Roadmap for robust multi-analyzer TextFields w/various options 
> for configuring docValues
> 
>
> Key: SOLR-11917
> URL: https://issues.apache.org/jira/browse/SOLR-11917
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Schema and Analysis
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
>
> A while back, I was tasked at my day job to brainstorm & design some "smarter 
> field types" in Solr. In particular to think about:
>  # How to simplify some of the "special things" people have to know about 
> Solr behavior when creating their schemas
>  # How to reduce the number of situations where users have to copy/clone one 
> "logical field" into multiple "schema fields" in order to meet different use cases
> The main result of this thought exercise is a handful of use cases/goals that 
> people seem to have - many of which are already tracked in existing jiras - 
> along with a high-level design/roadmap of potential solutions for these goals 
> that can be implemented incrementally to leverage some common changes (and 
> what those changes might look like).
> My intention is to use this jira as a place to share these ideas for broader 
> community discussion, and as a central linkage point for the related jiras. 
> (details to follow in a very looong comment)
> 
> NOTE: I am not (at this point) personally committing to following through on 
> implementing every aspect of these ideas :)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-8146) Unit tests using StringHelper fail with ExceptionInInitializerError for maven surefire >= 2.18

2018-01-31 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347241#comment-16347241
 ] 

Uwe Schindler edited comment on LUCENE-8146 at 1/31/18 5:36 PM:


Nevertheless, the issue would be better solved in Surefire (ideally) or, if that 
is not possible, in randomizedrunner's seed parsing.


was (Author: thetaphi):
Nevertheless, the issue would be better solved in Surefire (ideally) or, if that 
is not possible, in randomizedrunner's SeedProvider.

> Unit tests using StringHelper fail with ExceptionInInitializerError for maven 
> surefire >= 2.18
> --
>
> Key: LUCENE-8146
> URL: https://issues.apache.org/jira/browse/LUCENE-8146
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.2.1
>Reporter: Julien Massenet
>Priority: Minor
> Attachments: LUCENE-8146-seed_issue.tar.gz, LUCENE-8146_v1.patch
>
>
> This happens when multiple conditions are met:
>  * The client code is built with Maven
>  * To execute its unit tests, the client code relies on the 
> {{maven-surefire-plugin}}, with a version greater than 2.17 (last working 
> version)
>  * The client code uses the {{org.apache.lucene.util.StringHelper}} class 
> (even transitively)
>  * The client is configured as with the standard Lucene maven build (i.e. it 
> is possible to fix the test seed using the {{tests.seed}} property)
> There was a change in Surefire's behavior starting with 2.18: when a property 
> is empty, instead of not sending it to the test runner, it will be sent with 
> an empty value.
> This behavior can be observed with the attached sample project:
>  * {{mvn test}}: fails with a {{java.lang.ExceptionInInitializerError}}
>  * {{mvn test -Dtests.seed=123456}}: succeeds because the property is set to 
> a real value
>  * {{mvn test -Dsurefire.version=2.17}}: succeeds because the surefire 
> version is lower than 2.18
> Attached is a patch (built against \{{branch_7x}}) that centralizes accesses 
> to the {{tests.seed}} system property; it also makes sure that if it is 
> empty, it is treated as absent.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8146) Unit tests using StringHelper fail with ExceptionInInitializerError for maven surefire >= 2.18

2018-01-31 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347241#comment-16347241
 ] 

Uwe Schindler commented on LUCENE-8146:
---

Nevertheless, the issue would be better solved in Surefire (ideally) or, if that 
is not possible, in randomizedrunner's SeedProvider.

> Unit tests using StringHelper fail with ExceptionInInitializerError for maven 
> surefire >= 2.18
> --
>
> Key: LUCENE-8146
> URL: https://issues.apache.org/jira/browse/LUCENE-8146
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.2.1
>Reporter: Julien Massenet
>Priority: Minor
> Attachments: LUCENE-8146-seed_issue.tar.gz, LUCENE-8146_v1.patch
>
>
> This happens when multiple conditions are met:
>  * The client code is built with Maven
>  * To execute its unit tests, the client code relies on the 
> {{maven-surefire-plugin}}, with a version greater than 2.17 (last working 
> version)
>  * The client code uses the {{org.apache.lucene.util.StringHelper}} class 
> (even transitively)
>  * The client is configured as with the standard Lucene maven build (i.e. it 
> is possible to fix the test seed using the {{tests.seed}} property)
> There was a change in Surefire's behavior starting with 2.18: when a property 
> is empty, instead of not sending it to the test runner, it will be sent with 
> an empty value.
> This behavior can be observed with the attached sample project:
>  * {{mvn test}}: fails with a {{java.lang.ExceptionInInitializerError}}
>  * {{mvn test -Dtests.seed=123456}}: succeeds because the property is set to 
> a real value
>  * {{mvn test -Dsurefire.version=2.17}}: succeeds because the surefire 
> version is lower than 2.18
> Attached is a patch (built against \{{branch_7x}}) that centralizes accesses 
> to the {{tests.seed}} system property; it also makes sure that if it is 
> empty, it is treated as absent.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8146) Unit tests using StringHelper fail with ExceptionInInitializerError for maven surefire >= 2.18

2018-01-31 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347237#comment-16347237
 ] 

Uwe Schindler commented on LUCENE-8146:
---

Thanks, Dawid. That was my first idea: use RandomizedRunner in Maven instead 
of Surefire (like other projects do).

> Unit tests using StringHelper fail with ExceptionInInitializerError for maven 
> surefire >= 2.18
> --
>
> Key: LUCENE-8146
> URL: https://issues.apache.org/jira/browse/LUCENE-8146
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.2.1
>Reporter: Julien Massenet
>Priority: Minor
> Attachments: LUCENE-8146-seed_issue.tar.gz, LUCENE-8146_v1.patch
>
>
> This happens when multiple conditions are met:
>  * The client code is built with Maven
>  * To execute its unit tests, the client code relies on the 
> {{maven-surefire-plugin}}, with a version greater than 2.17 (last working 
> version)
>  * The client code uses the {{org.apache.lucene.util.StringHelper}} class 
> (even transitively)
>  * The client is configured as with the standard Lucene maven build (i.e. it 
> is possible to fix the test seed using the {{tests.seed}} property)
> There was a change in Surefire's behavior starting with 2.18: when a property 
> is empty, instead of not sending it to the test runner, it will be sent with 
> an empty value.
> This behavior can be observed with the attached sample project:
>  * {{mvn test}}: fails with a {{java.lang.ExceptionInInitializerError}}
>  * {{mvn test -Dtests.seed=123456}}: succeeds because the property is set to 
> a real value
>  * {{mvn test -Dsurefire.version=2.17}}: succeeds because the surefire 
> version is lower than 2.18
> Attached is a patch (built against \{{branch_7x}}) that centralizes accesses 
> to the {{tests.seed}} system property; it also makes sure that if it is 
> empty, it is treated as absent.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8146) Unit tests using StringHelper fail with ExceptionInInitializerError for maven surefire >= 2.18

2018-01-31 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347235#comment-16347235
 ] 

Dawid Weiss commented on LUCENE-8146:
-

What's the full stack trace for this {{ExceptionInInitializerError}}?

> Unit tests using StringHelper fail with ExceptionInInitializerError for maven 
> surefire >= 2.18
> --
>
> Key: LUCENE-8146
> URL: https://issues.apache.org/jira/browse/LUCENE-8146
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.2.1
>Reporter: Julien Massenet
>Priority: Minor
> Attachments: LUCENE-8146-seed_issue.tar.gz, LUCENE-8146_v1.patch
>
>
> This happens when multiple conditions are met:
>  * The client code is built with Maven
>  * To execute its unit tests, the client code relies on the 
> {{maven-surefire-plugin}}, with a version greater than 2.17 (last working 
> version)
>  * The client code uses the {{org.apache.lucene.util.StringHelper}} class 
> (even transitively)
>  * The client is configured as with the standard Lucene maven build (i.e. it 
> is possible to fix the test seed using the {{tests.seed}} property)
> There was a change in Surefire's behavior starting with 2.18: when a property 
> is empty, instead of not sending it to the test runner, it will be sent with 
> an empty value.
> This behavior can be observed with the attached sample project:
>  * {{mvn test}}: fails with a {{java.lang.ExceptionInInitializerError}}
>  * {{mvn test -Dtests.seed=123456}}: succeeds because the property is set to 
> a real value
>  * {{mvn test -Dsurefire.version=2.17}}: succeeds because the surefire 
> version is lower than 2.18
> Attached is a patch (built against \{{branch_7x}}) that centralizes accesses 
> to the {{tests.seed}} system property; it also makes sure that if it is 
> empty, it is treated as absent.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8146) Unit tests using StringHelper fail with ExceptionInInitializerError for maven surefire >= 2.18

2018-01-31 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347232#comment-16347232
 ] 

Dawid Weiss commented on LUCENE-8146:
-

{code}
+  /**
+   * Get a Random instance, seeded with the external seed if available.
+   */
+  public static Random getSeededRandom() {
+String seed = SeedProvider.getExternalSeed();
+Random random;
+if (seed == null) {
+  random = new Random();
+} else {
+  random = new Random(seed.hashCode());
+}
+return random;
+  }
+}
{code}

Please, don't. The Random instance to be used should be acquired from 
{{RandomizedContext}} and this context is initialized based on the system 
property (and much earlier before the tests or Lucene code is first entered). 
There are things that depend on the seed (such as the order of tests) that this 
property controls.

A better patch would be to replace surefire with the runner from 
{{randomizedtesting}} package; this would make the tests consistent with ANT.
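
For reference, a minimal sketch of what that looks like from a test's point of 
view: when RandomizedRunner drives the test, the per-test Random comes from 
{{RandomizedContext}}, which is already seeded from {{tests.seed}} before any 
Lucene code runs. The test class below is hypothetical; only the 
randomizedtesting classes and methods are real.

{code}
import com.carrotsearch.randomizedtesting.RandomizedContext;
import com.carrotsearch.randomizedtesting.RandomizedRunner;
import org.junit.Test;
import org.junit.runner.RunWith;

import java.util.Random;

// Hypothetical test class: shows where the seed-aware Random comes from
// when the runner (not Surefire) owns the randomization context.
@RunWith(RandomizedRunner.class)
public class SeededRandomExampleTest {

  @Test
  public void usesContextRandom() {
    // The runner derives this Random from the master seed (tests.seed),
    // so test order and per-test randomness stay reproducible.
    Random random = RandomizedContext.current().getRandom();
    int value = random.nextInt(100);
    // ... use 'value' in the test ...
  }
}
{code}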

> Unit tests using StringHelper fail with ExceptionInInitializerError for maven 
> surefire >= 2.18
> --
>
> Key: LUCENE-8146
> URL: https://issues.apache.org/jira/browse/LUCENE-8146
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.2.1
>Reporter: Julien Massenet
>Priority: Minor
> Attachments: LUCENE-8146-seed_issue.tar.gz, LUCENE-8146_v1.patch
>
>
> This happens when multiple conditions are met:
>  * The client code is built with Maven
>  * To execute its unit tests, the client code relies on the 
> {{maven-surefire-plugin}}, with a version greater than 2.17 (last working 
> version)
>  * The client code uses the {{org.apache.lucene.util.StringHelper}} class 
> (even transitively)
>  * The client is configured as with the standard Lucene maven build (i.e. it 
> is possible to fix the test seed using the {{tests.seed}} property)
> There was a change in Surefire's behavior starting with 2.18: when a property 
> is empty, instead of not sending it to the test runner, it will be sent with 
> an empty value.
> This behavior can be observed with the attached sample project:
>  * {{mvn test}}: fails with a {{java.lang.ExceptionInInitializerError}}
>  * {{mvn test -Dtests.seed=123456}}: succeeds because the property is set to 
> a real value
>  * {{mvn test -Dsurefire.version=2.17}}: succeeds because the surefire 
> version is lower than 2.18
> Attached is a patch (built against \{{branch_7x}}) that centralizes accesses 
> to the {{tests.seed}} system property; it also makes sure that if it is 
> empty, it is treated as absent.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8146) Unit tests using StringHelper fail with ExceptionInInitializerError for maven surefire >= 2.18

2018-01-31 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347225#comment-16347225
 ] 

Robert Muir commented on LUCENE-8146:
-

Well I'd probably feel different if it was more minimal. This patch adds new 
public classes and abstractions... why are those needed?

Isn't all that is needed a change from:
{code}
String prop = System.getProperty("tests.seed");
if (prop != null) {
{code}

to

{code}
String prop = System.getProperty("tests.seed", "");
if (!prop.isEmpty()) {
{code}
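
A small standalone demo of the difference (hypothetical class, not part of the 
patch): with Surefire >= 2.18 an unset {{tests.seed}} reaches the forked JVM as 
an empty string, so a plain null check lets it through, while the 
{{isEmpty()}} variant above treats it as absent.

{code}
// Hypothetical demo, not Lucene code: run once without any tests.seed
// definition and once with -Dtests.seed= (an empty value).
public class EmptySeedDemo {
  public static void main(String[] args) {
    // Old assumption: an unset property really is null.
    String raw = System.getProperty("tests.seed");
    System.out.println("null check says a seed is present: " + (raw != null));

    // With -Dtests.seed= the property is "" rather than null, which passes
    // the null check but breaks any later parsing of the seed value.
    String prop = System.getProperty("tests.seed", "");
    System.out.println("isEmpty check says a seed is present: " + (!prop.isEmpty()));
  }
}
{code}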

> Unit tests using StringHelper fail with ExceptionInInitializerError for maven 
> surefire >= 2.18
> --
>
> Key: LUCENE-8146
> URL: https://issues.apache.org/jira/browse/LUCENE-8146
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.2.1
>Reporter: Julien Massenet
>Priority: Minor
> Attachments: LUCENE-8146-seed_issue.tar.gz, LUCENE-8146_v1.patch
>
>
> This happens when multiple conditions are met:
>  * The client code is built with Maven
>  * To execute its unit tests, the client code relies on the 
> {{maven-surefire-plugin}}, with a version greater than 2.17 (last working 
> version)
>  * The client code uses the {{org.apache.lucene.util.StringHelper}} class 
> (even transitively)
>  * The client is configured as with the standard Lucene maven build (i.e. it 
> is possible to fix the test seed using the {{tests.seed}} property)
> There was a change in Surefire's behavior starting with 2.18: when a property 
> is empty, instead of not sending it to the test runner, it will be sent with 
> an empty value.
> This behavior can be observed with the attached sample project:
>  * {{mvn test}}: fails with a {{java.lang.ExceptionInInitializerError}}
>  * {{mvn test -Dtests.seed=123456}}: succeeds because the property is set to 
> a real value
>  * {{mvn test -Dsurefire.version=2.17}}: succeeds because the surefire 
> version is lower than 2.18
> Attached is a patch (built against \{{branch_7x}}) that centralizes accesses 
> to the {{tests.seed}} system property; it also makes sure that if it is 
> empty, it is treated as absent.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8146) Unit tests using StringHelper fail with ExceptionInInitializerError for maven surefire >= 2.18

2018-01-31 Thread Julien MASSENET (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347129#comment-16347129
 ] 

Julien MASSENET commented on LUCENE-8146:
-

The patch itself is minimal, but I'll understand if the issue is just closed as 
is - you're right that it's not a Lucene issue. On the other hand, the Lucene 
maven build would also be impacted if the surefire version is ever bumped up, 
as it will break with the same error.

I'm just throwing that out there since I spent about 30 minutes figuring out 
why my build was failing after upgrading all my dependencies (including Lucene 
and, among others, the Surefire plugin). Even if the ticket is closed, it might 
help the next user encountering the same problem.

> Unit tests using StringHelper fail with ExceptionInInitializerError for maven 
> surefire >= 2.18
> --
>
> Key: LUCENE-8146
> URL: https://issues.apache.org/jira/browse/LUCENE-8146
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.2.1
>Reporter: Julien MASSENET
>Priority: Minor
> Attachments: LUCENE-8146-seed_issue.tar.gz, LUCENE-8146_v1.patch
>
>
> This happens when multiple conditions are met:
>  * The client code is built with Maven
>  * To execute its unit tests, the client code relies on the 
> {{maven-surefire-plugin}}, with a version greater than 2.17 (last working 
> version)
>  * The client code uses the {{org.apache.lucene.util.StringHelper}} class 
> (even transitively)
>  * The client is configured as with the standard Lucene maven build (i.e. it 
> is possible to fix the test seed using the {{tests.seed}} property)
> There was a change in Surefire's behavior starting with 2.18: when a property 
> is empty, instead of not sending it to the test runner, it will be sent with 
> an empty value.
> This behavior can be observed with the attached sample project:
>  * {{mvn test}}: fails with a {{java.lang.ExceptionInInitializerError}}
>  * {{mvn test -Dtests.seed=123456}}: succeeds because the property is set to 
> a real value
>  * {{mvn test -Dsurefire.version=2.17}}: succeeds because the surefire 
> version is lower than 2.18
> Attached is a patch (built against \{{branch_7x}}) that centralizes accesses 
> to the {{tests.seed}} system property; it also makes sure that if it is 
> empty, it is treated as absent.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11924) Add the ability to watch collection set changes in ZkStateReader

2018-01-31 Thread Houston Putman (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Houston Putman updated SOLR-11924:
--
Fix Version/s: 7.3
   master (8.0)

> Add the ability to watch collection set changes in ZkStateReader
> 
>
> Key: SOLR-11924
> URL: https://issues.apache.org/jira/browse/SOLR-11924
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: master (8.0), 7.3
>Reporter: Houston Putman
>Priority: Minor
> Fix For: master (8.0), 7.3
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Allow users to watch when the set of collections for a cluster is changed. 
> This is useful if a user is trying to discover collections within a cloud.
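
A hypothetical usage sketch of such a watch from SolrJ is shown below; the 
listener registration method and its callback signature are illustrative 
assumptions, not the committed API.

{code}
import java.util.Set;
import org.apache.solr.common.cloud.ZkStateReader;

// Hypothetical sketch; registerCloudCollectionsListener and the listener
// callback shape are assumptions for illustration only.
public class CollectionDiscovery {

  public void watchCollections(ZkStateReader zkStateReader) {
    // Assumed listener: fires whenever the set of collections in the
    // cluster changes, passing the old and new collection sets.
    zkStateReader.registerCloudCollectionsListener(
        (Set<String> oldCollections, Set<String> newCollections) ->
            System.out.println("Collections now: " + newCollections));
  }
}
{code}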



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-3475) ShingleFilter should handle positionIncrement of zero, e.g. synonyms

2018-01-31 Thread Mayya Sharipova (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347069#comment-16347069
 ] 

Mayya Sharipova commented on LUCENE-3475:
-

[~jpountz] thanks so much for the suggestions. 

[~romseygeek] is going to work on this issue. I will study his solution.

> ShingleFilter should handle positionIncrement of zero, e.g. synonyms
> 
>
> Key: LUCENE-3475
> URL: https://issues.apache.org/jira/browse/LUCENE-3475
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Affects Versions: 3.4
>Reporter: Cameron
>Priority: Minor
>  Labels: newdev
>
> ShingleFilter is creating shingles for a single term that has been expanded 
> by synonyms when it shouldn't. The position increment is 0.
> As an example, I have an Analyzer with a SynonymFilter followed by a 
> ShingleFilter. Assuming car and auto are synonyms, the SynonymFilter produces 
> two tokens and position 1: car, auto. The ShingleFilter is then producing 3 
> tokens, when there should only be two: car, car auto, auto. This behavior 
> seems incorrect.
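
A minimal repro sketch of the chain described above, using SynonymGraphFilter 
(the current replacement for the deprecated SynonymFilter); exact shingle 
output may differ slightly from the original 3.4-era report, but the stacked 
car/auto tokens at position increment 0 are the input that trips up 
ShingleFilter.

{code}
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.shingle.ShingleFilter;
import org.apache.lucene.analysis.synonym.SynonymGraphFilter;
import org.apache.lucene.analysis.synonym.SynonymMap;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;
import org.apache.lucene.util.CharsRef;

public class SynonymShingleRepro {
  public static void main(String[] args) throws Exception {
    // "car" expands to "auto" at the same position (posInc == 0).
    SynonymMap.Builder builder = new SynonymMap.Builder(true);
    builder.add(new CharsRef("car"), new CharsRef("auto"), true);
    SynonymMap synonyms = builder.build();

    Analyzer analyzer = new Analyzer() {
      @Override
      protected TokenStreamComponents createComponents(String fieldName) {
        Tokenizer source = new WhitespaceTokenizer();
        TokenStream sink = new SynonymGraphFilter(source, synonyms, true);
        sink = new ShingleFilter(sink, 2);
        return new TokenStreamComponents(source, sink);
      }
    };

    try (TokenStream ts = analyzer.tokenStream("f", "car")) {
      CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
      PositionIncrementAttribute posInc = ts.addAttribute(PositionIncrementAttribute.class);
      ts.reset();
      while (ts.incrementToken()) {
        // Expected: the stacked tokens car and auto; the reported bug is
        // that a "car auto" shingle shows up as well.
        System.out.println(term.toString() + " (posInc=" + posInc.getPositionIncrement() + ")");
      }
      ts.end();
    }
  }
}
{code}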



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8146) Unit tests using StringHelper fail with ExceptionInInitializerError for maven surefire >= 2.18

2018-01-31 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347065#comment-16347065
 ] 

Robert Muir commented on LUCENE-8146:
-

{quote}
There was a change in Surefire's behavior starting with 2.18: when a property 
is empty, instead of not sending it to the test runner, it will be sent with an 
empty value.
{quote}

This seems like the bug that needs to be fixed. I don't think lucene should 
suffer the leniency/complexity because of it.

> Unit tests using StringHelper fail with ExceptionInInitializerError for maven 
> surefire >= 2.18
> --
>
> Key: LUCENE-8146
> URL: https://issues.apache.org/jira/browse/LUCENE-8146
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.2.1
>Reporter: Julien MASSENET
>Priority: Minor
> Attachments: LUCENE-8146-seed_issue.tar.gz, LUCENE-8146_v1.patch
>
>
> This happens when multiple conditions are met:
>  * The client code is built with Maven
>  * To execute its unit tests, the client code relies on the 
> {{maven-surefire-plugin}}, with a version greater than 2.17 (last working 
> version)
>  * The client code uses the {{org.apache.lucene.util.StringHelper}} class 
> (even transitively)
>  * The client is configured as with the standard Lucene maven build (i.e. it 
> is possible to fix the test seed using the {{tests.seed}} property)
> There was a change in Surefire's behavior starting with 2.18: when a property 
> is empty, instead of not sending it to the test runner, it will be sent with 
> an empty value.
> This behavior can be observed with the attached sample project:
>  * {{mvn test}}: fails with a {{java.lang.ExceptionInInitializerError}}
>  * {{mvn test -Dtests.seed=123456}}: succeeds because the property is set to 
> a real value
>  * {{mvn test -Dsurefire.version=2.17}}: succeeds because the surefire 
> version is lower than 2.18
> Attached is a patch (built against \{{branch_7x}}) that centralizes accesses 
> to the {{tests.seed}} system property; it also makes sure that if it is 
> empty, it is treated as absent.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8145) UnifiedHighlighter should use single OffsetEnum rather than List

2018-01-31 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347062#comment-16347062
 ] 

Alan Woodward commented on LUCENE-8145:
---

I should add that this is just a refactoring, and in particular passage scores 
are not changed.

> UnifiedHighlighter should use single OffsetEnum rather than List
> 
>
> Key: LUCENE-8145
> URL: https://issues.apache.org/jira/browse/LUCENE-8145
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/highlighter
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Attachments: LUCENE-8145.patch
>
>
> The UnifiedHighlighter deals with several different aspects of highlighting: 
> finding highlight offsets, breaking content up into snippets, and passage 
> scoring.  It would be nice to split this up so that consumers can use them 
> separately.
> As a first step, I'd like to change the API of FieldOffsetStrategy to return 
> a single unified OffsetsEnum, rather than a collection of them.  This will 
> make it easier to expose the OffsetsEnum of a document directly from the 
> highlighter, bypassing snippet extraction and scoring.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8145) UnifiedHighlighter should use single OffsetEnum rather than List

2018-01-31 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347059#comment-16347059
 ] 

Alan Woodward commented on LUCENE-8145:
---

This patch renames `FieldOffsetStrategy#getOffsetsEnums()` to 
`FieldOffsetStrategy#getOffsetsEnum`, and changes the return value from 
`List<OffsetsEnum>` to `OffsetsEnum` directly.

FieldHighlighter is simplified a bit, particularly in terms of handling 
OffsetsEnum as a closeable resource.  Scoring is delegated to the Passage 
itself, which now keeps track of the within-passage frequencies of its 
highlighted terms and phrases.  A new MultiOffsetsEnum class deals with 
combining multiple OffsetsEnums using a priority queue.  Because all offsets 
are iterated in order, Passage no longer needs to worry about sorting its 
internal hits.

The APIs for FieldOffsetStrategy, Passage and OffsetEnum have all changed 
slightly, but they're all pretty expert so I think this could be targeted at 
7.3?

cc [~dsmiley] [~jimczi]
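
To illustrate the priority-queue merge idea in isolation (this is a simplified 
stand-in, not the actual OffsetsEnum/MultiOffsetsEnum API): several per-term 
iterators, each already sorted by offset, can be combined into one globally 
ordered stream by always advancing the sub-iterator with the smallest current 
offset.

{code}
import java.util.Comparator;
import java.util.Iterator;
import java.util.List;
import java.util.PriorityQueue;

// Simplified stand-in types; the real highlighter classes differ.
final class OffsetHit {
  final int startOffset;
  final String term;
  OffsetHit(int startOffset, String term) {
    this.startOffset = startOffset;
    this.term = term;
  }
}

final class MergedOffsets {
  // Wraps one per-term iterator together with its current element.
  private static final class Sub {
    final Iterator<OffsetHit> it;
    OffsetHit current;
    Sub(Iterator<OffsetHit> it) { this.it = it; this.current = it.next(); }
  }

  private final PriorityQueue<Sub> queue =
      new PriorityQueue<>(Comparator.comparingInt((Sub s) -> s.current.startOffset));

  MergedOffsets(List<Iterator<OffsetHit>> subs) {
    for (Iterator<OffsetHit> it : subs) {
      if (it.hasNext()) {
        queue.add(new Sub(it));
      }
    }
  }

  /** Returns the next hit in global offset order, or null when exhausted. */
  OffsetHit next() {
    Sub top = queue.poll();
    if (top == null) {
      return null;
    }
    OffsetHit result = top.current;
    if (top.it.hasNext()) {
      top.current = top.it.next();
      queue.add(top); // re-insert with its new current offset
    }
    return result;
  }
}
{code}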

> UnifiedHighlighter should use single OffsetEnum rather than List
> 
>
> Key: LUCENE-8145
> URL: https://issues.apache.org/jira/browse/LUCENE-8145
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/highlighter
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Attachments: LUCENE-8145.patch
>
>
> The UnifiedHighlighter deals with several different aspects of highlighting: 
> finding highlight offsets, breaking content up into snippets, and passage 
> scoring.  It would be nice to split this up so that consumers can use them 
> separately.
> As a first step, I'd like to change the API of FieldOffsetStrategy to return 
> a single unified OffsetsEnum, rather than a collection of them.  This will 
> make it easier to expose the OffsetsEnum of a document directly from the 
> highlighter, bypassing snippet extraction and scoring.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8146) Unit tests using StringHelper fail with ExceptionInInitializerError for maven surefire >= 2.18

2018-01-31 Thread Julien MASSENET (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julien MASSENET updated LUCENE-8146:

Description: 
This happens when multiple conditions are met:
 * The client code is built with Maven
 * To execute its unit tests, the client code relies on the 
{{maven-surefire-plugin}}, with a version greater than 2.17 (last working 
version)
 * The client code uses the {{org.apache.lucene.util.StringHelper}} class (even 
transitively)
 * The client is configured as with the standard Lucene maven build (i.e. it is 
possible to fix the test seed using the {{tests.seed}} property)

There was a change in Surefire's behavior starting with 2.18: when a property 
is empty, instead of not sending it to the test runner, it will be sent with an 
empty value.

This behavior can be observed with the attached sample project:
 * {{mvn test}}: fails with a {{java.lang.ExceptionInInitializerError}}
 * {{mvn test -Dtests.seed=123456}}: succeeds because the property is set to a 
real value
 * {{mvn test -Dsurefire.version=2.17}}: succeeds because the surefire version 
is lower than 2.18

Attached is a patch (built against \{{branch_7x}}) that centralizes accesses to 
the {{tests.seed}} system property; it also makes sure that if it is empty, it 
is treated as absent.

  was:
This happens when multiple conditions are met:
 * The client code is built with Maven
 * To execute its unit tests, the client code relies on the 
{{maven-surefire-plugin}}, with a version greater than 2.17 (last working 
version)
 * The client code uses the {{org.apache.lucene.util.StringHelper}} class (even 
transitively)
 * The client is configured as with the standard Lucene maven build (i.e. it is 
possible to fix the test seed using the {{tests.seed}} property)

There was a change in Surefire's behavior starting with 2.18: when a property 
is empty, instead of not sending it to the test runner, it will be sent with an 
empty value.

This behavior can be observed with the attached sample project:
 * {{mvn test}}: fails with a {{java.lang.ExceptionInInitializerError}}
 * {{mvn test -Dtests.seed=123456}}: succeeds because the property is set to a 
real value
 * {{mvn test -Dsurefire.version=2.17}}: succeeds because the surefire version 
is lower than 2.18

Attached is a patch that centralizes accesses to the {{tests.seed}} system 
property; it also makes sure that if it is empty, it is treated as absent.


> Unit tests using StringHelper fail with ExceptionInInitializerError for maven 
> surefire >= 2.18
> --
>
> Key: LUCENE-8146
> URL: https://issues.apache.org/jira/browse/LUCENE-8146
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.2.1
>Reporter: Julien MASSENET
>Priority: Minor
> Attachments: LUCENE-8146-seed_issue.tar.gz, LUCENE-8146_v1.patch
>
>
> This happens when multiple conditions are met:
>  * The client code is built with Maven
>  * To execute its unit tests, the client code relies on the 
> {{maven-surefire-plugin}}, with a version greater than 2.17 (last working 
> version)
>  * The client code uses the {{org.apache.lucene.util.StringHelper}} class 
> (even transitively)
>  * The client is configured as with the standard Lucene maven build (i.e. it 
> is possible to fix the test seed using the {{tests.seed}} property)
> There was a change in Surefire's behavior starting with 2.18: when a property 
> is empty, instead of not sending it to the test runner, it will be sent with 
> an empty value.
> This behavior can be observed with the attached sample project:
>  * {{mvn test}}: fails with a {{java.lang.ExceptionInInitializerError}}
>  * {{mvn test -Dtests.seed=123456}}: succeeds because the property is set to 
> a real value
>  * {{mvn test -Dsurefire.version=2.17}}: succeeds because the surefire 
> version is lower than 2.18
> Attached is a patch (built against \{{branch_7x}}) that centralizes accesses 
> to the {{tests.seed}} system property; it also makes sure that if it is 
> empty, it is treated as absent.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8146) Unit tests using StringHelper fail with ExceptionInInitializerError for maven surefire >= 2.18

2018-01-31 Thread Julien MASSENET (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julien MASSENET updated LUCENE-8146:

Attachment: LUCENE-8146_v1.patch

> Unit tests using StringHelper fail with ExceptionInInitializerError for maven 
> surefire >= 2.18
> --
>
> Key: LUCENE-8146
> URL: https://issues.apache.org/jira/browse/LUCENE-8146
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.2.1
>Reporter: Julien MASSENET
>Priority: Minor
> Attachments: LUCENE-8146-seed_issue.tar.gz, LUCENE-8146_v1.patch
>
>
> This happens when multiple conditions are met:
>  * The client code is built with Maven
>  * To execute its unit tests, the client code relies on the 
> {{maven-surefire-plugin}}, with a version greater than 2.17 (last working 
> version)
>  * The client code uses the {{org.apache.lucene.util.StringHelper}} class 
> (even transitively)
>  * The client is configured as with the standard Lucene maven build (i.e. it 
> is possible to fix the test seed using the {{tests.seed}} property)
> There was a change in Surefire's behavior starting with 2.18: when a property 
> is empty, instead of not sending it to the test runner, it will be sent with 
> an empty value.
> This behavior can be observed with the attached sample project:
>  * {{mvn test}}: fails with a {{java.lang.ExceptionInInitializerError}}
>  * {{mvn test -Dtests.seed=123456}}: succeeds because the property is set to 
> a real value
>  * {{mvn test -Dsurefire.version=2.17}}: succeeds because the surefire 
> version is lower than 2.18
> Attached is a patch that centralizes accesses to the {{tests.seed}} system 
> property; it also makes sure that if it is empty, it is treated as absent.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8146) Unit tests using StringHelper fail with ExceptionInInitializerError for maven surefire >= 2.18

2018-01-31 Thread Julien MASSENET (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julien MASSENET updated LUCENE-8146:

Attachment: LUCENE-8146-seed_issue.tar.gz

> Unit tests using StringHelper fail with ExceptionInInitializerError for maven 
> surefire >= 2.18
> --
>
> Key: LUCENE-8146
> URL: https://issues.apache.org/jira/browse/LUCENE-8146
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.2.1
>Reporter: Julien MASSENET
>Priority: Minor
> Attachments: LUCENE-8146-seed_issue.tar.gz
>
>
> This happens when multiple conditions are met:
>  * The client code is built with Maven
>  * To execute its unit tests, the client code relies on the 
> {{maven-surefire-plugin}}, with a version greater than 2.17 (last working 
> version)
>  * The client code uses the {{org.apache.lucene.util.StringHelper}} class 
> (even transitively)
>  * The client is configured as with the standard Lucene maven build (i.e. it 
> is possible to fix the test seed using the {{tests.seed}} property)
> There was a change in Surefire's behavior starting with 2.18: when a property 
> is empty, instead of not sending it to the test runner, it will be sent with 
> an empty value.
> This behavior can be observed with the attached sample project:
>  * {{mvn test}}: fails with a {{java.lang.ExceptionInInitializerError}}
>  * {{mvn test -Dtests.seed=123456}}: succeeds because the property is set to 
> a real value
>  * {{mvn test -Dsurefire.version=2.17}}: succeeds because the surefire 
> version is lower than 2.18
> Attached is a patch that centralizes accesses to the {{tests.seed}} system 
> property; it also makes sure that if it is empty, it is treated as absent.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8146) Unit tests using StringHelper fail with ExceptionInInitializerError for maven surefire >= 2.18

2018-01-31 Thread Julien MASSENET (JIRA)
Julien MASSENET created LUCENE-8146:
---

 Summary: Unit tests using StringHelper fail with 
ExceptionInInitializerError for maven surefire >= 2.18
 Key: LUCENE-8146
 URL: https://issues.apache.org/jira/browse/LUCENE-8146
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 7.2.1
Reporter: Julien MASSENET


This happens when multiple conditions are met:
 * The client code is built with Maven
 * To execute its unit tests, the client code relies on the 
{{maven-surefire-plugin}}, with a version greater than 2.17 (last working 
version)
 * The client code uses the {{org.apache.lucene.util.StringHelper}} class (even 
transitively)
 * The client is configured as with the standard Lucene maven build (i.e. it is 
possible to fix the test seed using the {{tests.seed}} property)

There was a change in Surefire's behavior starting with 2.18: when a property 
is empty, instead of not sending it to the test runner, it will be sent with an 
empty value.

This behavior can be observed with the attached sample project:
 * {{mvn test}}: fails with a {{java.lang.ExceptionInInitializerError}}
 * {{mvn test -Dtests.seed=123456}}: succeeds because the property is set to a 
real value
 * {{mvn test -Dsurefire.version=2.17}}: succeeds because the surefire version 
is lower than 2.18

Attached is a patch that centralizes accesses to the {{tests.seed}} system 
property; it also makes sure that if it is empty, it is treated as absent.
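
The centralized accessor described above could look roughly like the sketch 
below; this is only an illustration of the idea, not the attached patch, and 
the class and method names are made up.

{code}
// Illustrative only; not the attached LUCENE-8146_v1.patch.
public final class TestSeedProperty {

  private TestSeedProperty() {}

  /**
   * Returns the externally supplied tests.seed, or null if the property
   * is unset OR empty (Surefire 2.18+ forwards empty values).
   */
  public static String getExternalSeed() {
    String seed = System.getProperty("tests.seed", "");
    return seed.isEmpty() ? null : seed;
  }
}
{code}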



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


